
Microsoft's Bing Killer feature and why people dunk on Bard

All LLMs are roughly equivalent. I have not run any statistical studies on error rates, but my intuition is that they fail at similar rates. So why is everyone dunking on Bard? Because people expected more from the world's search leader.

But the problem with Bard is not really Google's fault; it is more about human cognitive bias, the same bias that kept Google at the top of the search engine race even as it piled on advertising that degraded its results. Google stayed on top for so long not because its search engine was dramatically better than the competition, but because "to google" had become a verb for looking anything up on the internet. Google also had the power of incumbency: no matter how many ads people were flooded with, they kept going to Google whenever they wanted information, because humans are lazy and hate change.

So why are people more tolerant of false information from Bing or ChatGPT yet more critical of Bard? It has little to do with the performance of the underlying models, which I argue are equivalent, and much more to do with the perceived superiority of anything Google launches. Since Google was playing catch-up in the LLM game (remember that the original Transformer architecture was invented at Google), people assumed Google would fix the errors ChatGPT was producing. What the public misses is that, like any other algorithmic system, LLMs are at their ultimate form: they can be patched, but they cannot be fundamentally improved. Improving on Transformer-based LLMs would require inventing new architectures. It is like insertion sort: it is what it is. You can throw more computers at it, but if you really want more performance with fewer resources you have to invent something like Mergesort, Quicksort or, more realistically, Timsort.
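The sorting analogy is easy to make concrete. Here is a minimal sketch: a textbook insertion sort next to Python's built-in `sorted`, which is implemented with Timsort. No amount of patching changes insertion sort's O(n^2) behaviour; the speedup comes from a genuinely different algorithm.

```python
import random
import time

def insertion_sort(items):
    """Classic O(n^2) insertion sort: correct, but tuning it
    cannot change its fundamental complexity class."""
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right to make room for `key`.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.randint(0, 10**6) for _ in range(5000)]

t0 = time.perf_counter()
slow = insertion_sort(data)
t1 = time.perf_counter()
fast = sorted(data)  # Python's built-in sort is Timsort, O(n log n)
t2 = time.perf_counter()

assert slow == fast  # same answer, very different cost
print(f"insertion sort: {t1 - t0:.3f}s, Timsort: {t2 - t1:.3f}s")
```

On a few thousand elements the gap is already dramatic, and it widens with input size; that is the kind of jump that requires a new design, not more hardware.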

The OpenAI team knows this, which is why they have focused on patching errors: architecture-wise there is little room for improvement. That is also why they hired programmers to help ChatGPT 'learn' how to code, which basically means manually solving problems that were beyond their system's capability.

And the addition of Wolfram Alpha as a plugin to their system is a clear acknowledgement that you cannot use LLMs to perform math robustly, so you outsource it to something that has been painstakingly hand-engineered for more than 20 years.
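The routing idea behind such a plugin can be sketched in a few lines. This is a hypothetical toy, not OpenAI's actual plugin mechanism: anything that parses as pure arithmetic is handed to a deterministic evaluator (standing in for Wolfram Alpha), and everything else would fall through to the language model.

```python
import ast
import operator

# Map supported AST operator nodes to exact arithmetic functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a pure arithmetic expression exactly; no LLM involved."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    """Hypothetical router: math goes to the tool, prose to the model."""
    try:
        return str(safe_eval(question))
    except (ValueError, SyntaxError):
        return "(hand off to the language model)"

print(answer("3**7 - 12*4"))        # the tool answers exactly: 2139
print(answer("why is the sky blue"))  # falls through to the model
```

The point of the sketch is the division of labour: the model never has to "guess" arithmetic, because anything it is bad at is delegated to a component engineered to be exact.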

Google need not fret. If they want to get back on top, they have to replicate everything OpenAI is doing, and they can do it fast because they have money. And now for the killer feature of Bing, my favourite feature; I will state it in the next paragraph and end this post with it.

Every conversational LLM user interface must insert source links directly in its results! Many people will not want to Google it!

Thank you

