
Microsoft's Bing Killer feature and why people dunk on Bard

All LLMs are roughly equivalent. I have not done any statistical study of their error rates, but my intuition is that they fail at similar rates. So why is everyone dunking on Bard? Because people expected more from the world's search leader.

But the problem with Bard is not really Google's fault. It is a matter of human cognitive bias, the very bias that kept Google at the top of the search engine race even as it piled on advertising that degraded its search results. Google stayed on top for so long not because its search engine was vastly better than the alternatives, but because "to Google" had become a verb for looking anything up on the internet. Google also had the power of incumbency: no matter how many ads people were flooded with, they still went to Google whenever they wanted to search for information, because humans are lazy and hate change.

So why are people more tolerant of false information from Bing or chatGPT, yet more critical of Bard? It has absolutely nothing to do with the performance of the underlying models, which I argue are equivalent. It has everything to do with the perceived superiority of anything Google launches. Since Google was playing catch-up in the LLM game (remember that the original Transformer architecture was actually invented at Google), people assumed Google would fix the errors chatGPT was producing. What the public at large does not realize is that, like any other algorithmic system, LLMs are at their ultimate form: they can be patched, but fundamentally they cannot be improved. To improve on Transformer-based LLMs would require inventing new architectures. It's like insertion sort: it is what it is. You can throw more computers at it, but if you really want more performance with fewer resources, you have to invent something like merge sort, quicksort, or, more realistically, Timsort.
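To make the sorting analogy concrete, here is a minimal sketch (my own illustration, not drawn from any product mentioned above): insertion sort stays O(n^2) no matter how much you tune its constants, while merge sort reaches O(n log n) only by being a fundamentally different algorithm, the equivalent of a new architecture rather than a patch.

```python
def insertion_sort(items):
    """Classic insertion sort: O(n^2). Tuning the constants never
    changes its growth rate, only the inputs it can tolerate."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right until the slot for `key` opens up.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def merge_sort(items):
    """Merge sort: O(n log n). A different algorithm altogether,
    not a patched version of insertion sort."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves in linear time.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Both produce identical output; the difference only shows up as the input grows, which is exactly the point about patching versus reinventing.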

The OpenAI folks know this, and that's why they focused on patching errors. They see that, architecture-wise, there is little room for improvement, which is why they hired programmers to help chatGPT 'learn' how to code, which basically means manually solving problems that were beyond the system's capability.

And the addition of Wolfram Alpha as a plugin to their system makes it clearly evident that they acknowledge you cannot use LLMs to perform math robustly, so they outsource it to something that has been practically hand-engineered for more than 20 years.
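The outsourcing idea can be sketched in a few lines. This is a hypothetical toy, not OpenAI's or Wolfram's actual interface: the function names and the crude routing heuristic are my own. The point is only that arithmetic goes to a deterministic evaluator instead of the language model.

```python
import re

# Characters permitted in a plain arithmetic expression.
_MATH_CHARS = r"[\d\s+\-*/().]"

def evaluate_math(expression):
    """Delegate arithmetic to a deterministic evaluator
    (a toy stand-in for an external engine like Wolfram Alpha)."""
    # Whitelist digits, whitespace, and basic operators before eval'ing.
    if not re.fullmatch(_MATH_CHARS + "+", expression):
        raise ValueError("not a plain arithmetic expression")
    return eval(expression)  # safe only because of the whitelist above

def answer(query, llm=lambda q: "(model-generated prose)"):
    """Route math fragments to the exact tool; everything else
    goes to the language model (stubbed out here)."""
    match = re.search(_MATH_CHARS + r"*\d" + _MATH_CHARS + "*", query)
    if match and any(op in match.group() for op in "+-*/"):
        return str(evaluate_math(match.group().strip()))
    return llm(query)
```

For example, `answer("what is 12*7")` returns "84" from the evaluator, while a question with no arithmetic in it falls through to the model stub.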

Google need not fret. If they want to get back on top, they have to replicate everything OpenAI is doing, and they can do it fast because they have money. And now to the killer feature of Bing, my favourite feature; I will write it out in the next paragraph and end this post with it.

Every conversational LLM user interface must insert links directly in its results!!! Many people will not want to Google it!
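What that feature amounts to, in the simplest possible sketch (my own hypothetical shape for the data, not any product's actual API), is appending the sources inline with the answer instead of leaving the reader to search for them:

```python
def render_with_links(answer_text, sources):
    """Append numbered markdown-style links to an answer.
    `sources` maps citation numbers to URLs -- a hypothetical
    shape chosen for illustration only."""
    citations = " ".join(
        f"[{n}]({url})" for n, url in sorted(sources.items())
    )
    # If there are no sources, return the answer unchanged.
    return f"{answer_text} {citations}" if citations else answer_text
```

With a single source, an answer like "Paris is the capital of France." comes back with "[1](...)" appended, so the link is right there in the result.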

Thank you
