What is Intelligence: Why the public misunderstands AI


In the early days of AI research, researchers were trying to replicate human cognition as it is. Fear was understandable, because people had already seen the overwhelming power of computers at tasks that were formidably difficult for humans.

Although the public in those early days had not yet been directly exposed to computation the way we are in this era of mobile computing, they had seen how simple electronic calculators ("electronic brains") were capable of far more computation than they were. Couple that with the claims some researchers were making that computers would outsmart humans within a decade (this was said in the 60s), and you can see why people were scared of artificial intelligence. Back then they were scared for real, and that fear seemed completely justified.

The computer was a programmable device: if you could program a calculator to outsmart any human at calculation, why wouldn't you then be able to program a computer to be smarter than a human at any task? This was the root of the fear in the public, and even among AI researchers. But they couldn't tell the future; they couldn't see that an AI winter would ensue because, lacking techniques like deep learning, their systems would find it amazingly hard to tell a cat from a dog when presented with a picture of one.

They felt that with the amazing power of programming, which stems from its generality, they could express any kind of program. If they could program a spaceship to land on the moon, why not program a human mind here on earth? They didn't realize they were working from the wrong premise: building a system that could learn a representation of its input data had to be the first step toward anything that closely resembles human intelligence.

Years later, we have deep learning, which works thanks to backpropagation, and we are now at the same point we were before the AI winters: overenthusiasm about a technology whose limits we have not properly explored. And without a Gödel to prove whether our current AI machinery will lead us to our expected goal of AGI, we have to wander in the dark again, bumping into walls and entering dead ends.

Modern AI has beaten humans at object recognition, and from then onward, marketers and scientists alike who think this is the end of humanity have taken the stage and are screaming doom and gloom for humanity.

Even very smart people undershoot or overshoot a target; it's not their fault, for we are not prophets, just humans. "If AI beats humans at object recognition, then soon it will beat humans at every other cognitive task!" That is the mantra of the singularity crowd and of all those who think that AI in its current implementation will soon beat humans at every cognitive task.

The problem is that we have not yet identified what intelligence really is, and that is why I am writing this book.

When regular people think of AI, the first picture that comes to mind is probably the Terminator. You cannot blame them for conjuring up such images; they know no better. It is no different from what the public experienced in the 50s and 60s, with all the news announcements about what AI had achieved. No one told them these were toy problems that didn't scale well in practice.

But now our fears are even more justified, because we are no longer dealing with toy problems but with algorithms that are beating humans at specific tasks. And the very smart, well-informed people now have reasons for fearing AI quite different from those of the uninformed public.

The informed are actually scared of their imaginary "what if?" scenarios, not of any of the deep learning algorithms that are breaking benchmarks here and there. When you hear of AlphaGo beating a human, you feel the same fear the world felt when Deep Blue beat Garry Kasparov. Very smart people with a lot of imagination take that result and extrapolate beyond measure; because of the deductive skills that make them very smart in the first place, they are able to deduce all kinds of possibilities from the results.

This is not actually bad until it starts making one paranoid about AI research. You find that you are not scared of the real results being achieved in day-to-day research, but of your own imagined possibilities.

It is this kind of thinking that makes very smart people lose tons of money in financial markets: not ignorance, but overenthusiasm and sometimes excessive faith in their own power to deduce outcomes.

Newton lost a fortune in the South Sea Bubble, so do not put your faith in someone's ability to predict outcomes just because they are smart. Digital Equipment Corp. founder Ken Olsen said in 1977: "There is no reason for any individual to have a computer in his home." And Thomas Watson, chairman of IBM, said in 1943: "I think there is a world market for maybe five computers." So the next time you find yourself believing a prediction just because the predictor is a smart person, remember these quotes.

The truth is that if you go beyond the news and ask a sincere AI researcher what we are able to do with AI now, the answer would be pattern recognition and some very modest generative tasks.

At the core of deep learning is the ability to learn representations from data, and that is a very powerful addition to our software arsenal, because before, we engineered features by hand. But the aforesaid ability of modern AI to eventually overtake humans is so absurd that I don't even read articles making that claim.
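The contrast between hand-engineered features and learned representations can be sketched in a few lines of Python. This is a minimal illustrative toy, not anything from a real vision system: the functions, the four-pixel "images", and the brightness-prediction task are all my own assumptions. A hand-engineered feature is a rule a human writes down; a learned representation is a set of weights that gradient descent discovers from data.

```python
import numpy as np

# Hand-engineered features: a human decides in advance what matters.
def hand_features(img):
    """img: 2D grayscale array -> [mean brightness, crude edge energy]."""
    brightness = img.mean()
    edges = np.abs(np.diff(img, axis=1)).mean()  # horizontal differences
    return np.array([brightness, edges])

# Learned representation: a single linear layer trained by gradient
# descent discovers its own weighting of the raw pixels.
def learn_representation(X, y, lr=0.1, steps=2000):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        pred = X @ w
        grad = X.T @ (pred - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Toy task: predict the mean brightness of 4-pixel "images".
X = np.random.default_rng(1).random((200, 4))
y = X.mean(axis=1)
w = learn_representation(X, y)
# The learned weights recover the hand-engineered rule (0.25 per pixel).
print(np.round(w, 2))  # prints [0.25 0.25 0.25 0.25]
```

The point of the toy: nobody told the second function that averaging pixels was the right feature; it recovered that rule from examples alone, which is the representation-learning shift in miniature.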

The base system is just feature extraction. But when one sees all kinds of networks achieving all kinds of announced results, one might get overwhelmed into thinking that something beyond human is actually happening; mix that with the AlphaGo results, the Dota 2 results, and every other success in game-playing AI, and even the sincerest of researchers cannot resist the temptation to make extraordinary claims.

"Those who forget history are bound to repeat it." This statement is becoming more real these days because of the severe AI hype we are currently experiencing. We are back in the days of outlandish claims about AI achievement, no different from the overblown extrapolations made in the 60s after computer programs solved toy problems.

The new toy problem is game-playing AI; cycles repeat themselves, and past things come back with new faces.

I am not denying the extraordinary results being achieved by applying AI techniques in many fields, like aiding diagnosis and language translation. As we find more fields that need superior pattern-recognition muscle, we will apply AI there and achieve outstanding results. This is the right view of AI: a tool to be applied to different fields, solving problems that were difficult for human programmers to solve with the rudimentary tools available in programming languages. AI is a new paradigm for solving data-intensive problems.

AI is sure to empower our economies and aid in the design of better solutions from which all of us will benefit, but we must maintain the right perspective toward these machines. We should permanently delete the image of the Terminator from our minds; that scenario is so far in the future that I can't even make a meaningful prediction about it.

The purpose of this book is to work toward a scenario where we not only have access to synthetic intelligence but can eventually incorporate it into something that can pass as human, yet might be more than what it means to be human.

I am not talking about an AI that has a personal agenda; I am talking about basic algorithms that we could set to the task of solving the many kinds of problems, beyond pattern recognition, that we currently solve with our minds.

I think we have nailed pattern recognition, and the next step forward in AI is to find ways to build robust generative systems. Deep RL provides a platform for agent design and is a powerful starting point, but as a goal we should imagine a program that could read a software project description, as it would be given to a human, and invent that software. Or one that could be told in natural language to design a better propulsion system for planes, and would go ahead and research all the available information from numerous sources, even solving basic physics problems along the way, and come up with a better propulsion system, possibly one that does not burn any fuel.

