
Getting Serious about AGI (Artificial General Intelligence)

Excerpt from Principia Mathematica (Whitehead and Russell)
I think it's time to think more precisely about AGI so that we can all have a clear direction to guide our research towards building the kind of strong AI that we as AI researchers have dreamt of. It's time to move away from fantasy and focus on the right vision. With this vision, we might have a better shot at creating the most powerful technology humanity will ever create: a human-like mind unencumbered by the limits of a biological brain.


Science fiction is genuinely inspiring to real scientists and engineers. The main reason I watch science fiction is to gain insight into possibilities that aren't yet realized but are within feasibility. Not everything in science fiction has to be realistic, or else it becomes boring. The key task of good science fiction writers and moviemakers is to create an abstract world on screen or paper that is rationally consistent within its own rules.

Science fiction doesn't have to be completely compatible with reality, just as pure mathematics doesn't have to be compatible with reality. The most important requirement is that the rules proposed in science fiction and pure mathematics must be consistent within their own framework. Don't take the pure-math analogy so far as to demand completeness: Kurt Gödel's work proved that a sufficiently powerful formal system cannot be both consistent and complete at the same time, much as the uncertainty principle states that you cannot know both the position and momentum of a particle with arbitrary precision at the same time.

Consistency matters more than completeness in both science fiction and mathematics, because the human mind likes to perceive a world that hangs together at least to some degree, even if that fictional reality bears no clean relationship to the real world.

From science fiction, we have seen many technologies make inroads into consumer products, like video calling, and we are now even getting close to the tricorder technology of the Star Trek franchise.

This ability to bring ideas from fictional worlds into real life has made us think that anything a sci-fi writer creates can actually exist in our physical world. Some sci-fi writers are so skilled at what they do, and the worlds they create so rationally consistent, that we come to believe whatever exists in their worlds can be built in ours.

Ideas like interstellar travel are possible in sci-fi, and maybe we will achieve them in real life someday. When we come back to reality after watching Star Wars or Star Trek, we are hopeful about the future, and at the back of our minds we wish to someday build a spaceship that can travel faster than light, because that is the only way to reach some of the extremely distant planets in this vast universe. Yet most of us never get on with any practical research towards achieving it.

Interstellar travel is not the subject of global debate. While organizations with deep pockets might spend some time contemplating the topic, and may even have a team doing some kind of research, it is not a do-or-die matter that they must succeed at.

Interstellar travel research is in the same class as particle physics research, but particle physics has greater priority because we can practically build machines on Earth to pursue it, and because the questions we have already answered about the subatomic world give us confidence to tackle other deep secrets of nature. When it comes to interstellar travel, there are too many unanswered questions, and so we lack the confidence to search for answers.

This brings me to AGI, which is the biggest issue of our time alongside climate change. With the success of deep learning at tasks like image recognition and speech translation, and the success of a project like AlphaGo, we are beginning to seriously contemplate the possibility of building machines that not only think like us but surpass us in every intellectual engagement possible.

Although the dream of building intelligent machines did not begin with sci-fi, the version we have focused on seems to have been fueled by the depictions of intelligent machines in sci-fi. The thought of building intelligent machines comes naturally, and thinkers like René Descartes conceived of it centuries ago. We observe intelligent human beings, so we want to directly replicate what we see in another medium, such as an automaton, since we are creatures of mimicry.

This is no different from humanity's long-held dream of flying like the birds we observe in the sky; in my book What is Intelligence I talk more about the similarity between the search for AGI and the dream of flight. My focus in this piece is to refocus our dreams, fantasies and imaginations about AGI, and to turn our minds towards proper problem definition so that we can race faster towards the goal of creating the most powerful tool humanity will ever need to create.

The reason sci-fi movies and books about AGI have misinformed us is that they have made us focus on the expression of intelligence rather than its roots. We are more focused on creating agents that fake intelligence than on creating agents that are truly intelligent in the deepest sense of the word.

In sci-fi, the most prominent theme is an AI that talks, or is capable of holding a conversation. Another theme is the evil AI. We never stop to think that the true purpose of making AI appear evil in movies is to make the movie entertaining, and that it is not in any way connected to what intelligence consists of.

Sci-fi writers, in their desire to entertain, have created some very convincing narratives about the nature of AI, and most of us have bought into these narratives so deeply that they prevent us from truly understanding the nature of intelligence and how we might strive towards creating one.

The purpose of this writing is not to criticize but to bring all of us back into focus: to view AGI in a new and objective fashion, to employ the skills of the pure mathematician and view AGI from an axiomatic point of view, and a little less like an empirical scientist.

Rather than focusing too much on systems that excel at one particular task, like building image models, language models, etc., we should ask questions like: what are the deep axioms or rules that can generate any kind of model we seek?

Modern AI is successful and has been applied broadly across many fields. I am not against modern AI in any way, because its usefulness is obvious in the world today, but the way I view it is quite different from the way many people do.

I view modern AI as an extension of regular programming algorithms. I don't see it as the path to AGI but I also understand that whatever AGI eventually looks like it must be able to do all that we can currently do with modern AI techniques like deep learning and more.

While we develop and perfect regular AI techniques like deep learning to be more accurate, and work towards establishing some kind of ethical control over the misuse of that technology, we should not think that we can extend it until it becomes the kind of AGI we see in movies.

The AGIs in movies are all great, but they can be very misleading to the practical researcher. If we look at the history of flight, we can see that humanity was able to build engineless gliders that looked like birds and flew for a short while before crashing their pilots to their deaths.

It was not until we had the internal combustion engine, originally developed for cars, that we had even a glimmer of hope of achieving real powered flight. All that was then needed was the ingenuity and courage of the Wright brothers, and powered flight was achieved.

Modern AI tools like deep learning techniques are similar to the early gliders that preceded powered flight: they do fly, but because they have no internal power driving them, they crash. Even designs that tried to replicate actual birds by including flapping wings failed to fly, and this should serve as a lesson to all those who hope that engineering a brain by directly copying an existing one is the way to achieve AGI.

The gliders could fly, but there was no internal power. Deep learning helps us build tools that speak, recognize images, etc., but there is no internal power, and no matter how sophisticatedly we tailor the wing design of a glider, it will never take flight on its own.

My conception of AGI is not like the Terminator; rather, it looks more like Peano arithmetic or, to point to something more practical to observe, like the evolution of class 4 cellular automata. Just as the authors of Principia Mathematica sought the smallest set of axioms that could generate all of mathematics, my goal is to find those primitive rules that could generate the complex thing we call the human mind.
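To make the cellular-automaton analogy concrete, here is a minimal sketch of Rule 110, a classic class 4 elementary cellular automaton (the function names and parameters are my own illustration, not from the post). A tiny local rule over three neighboring cells, applied repeatedly, produces globally complex behavior: the kind of "small rules, rich structure" relationship described above.

```python
# Rule 110: a class 4 elementary cellular automaton.
# Each cell's next state depends only on itself and its two neighbors,
# yet the global pattern that emerges is famously complex.

def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((rule >> idx) & 1)              # look up that bit in the rule table
    return out

def run(width=31, steps=10, rule=110):
    """Evolve from a single live cell and return the full history of rows."""
    cells = [0] * width
    cells[width // 2] = 1  # one live cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printing the history as `#`/`.` rows shows the characteristic irregular, non-repeating triangles of Rule 110 growing out of a single cell, from nothing more than an 8-entry lookup table.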
