How the next AI winter will happen

Judging by the excessive hype around artificial intelligence, it is obvious that we are in a bubble, and we all know what happens to bubbles: they eventually burst. In this post I will describe how I think the next AI winter will come about, and what you can do as a company or an individual to insulate yourself from the negative consequences.

By way of background, I am an AI researcher myself and have spent most of my life thinking about how to make computers intelligent. So don't take me for a doomsayer with no stake in the AI game. I am writing because I am concerned, and because I want to inject some rationality into the whole AI enterprise, so that we get a quiet transition into a new era dominated by some new kind of technology rather than a sharp depression in the curve that causes more devastation than necessary.

Before I go on, I will note that those who don't remember history are bound to repeat it, and that has been the case in the current AI landscape. Although it is easy to compare the current cycle of events in AI to the dot-com bubble that ended in great devastation in the early 2000s, this cycle is more like the very early days of AI research: that short-lived era in the 1960s, after the invention of the LISP language in the late 1950s, when a decade of enthusiasm had researchers writing hand-coded programs to give computers intelligence. That era is also known as the GOFAI (Good Old-Fashioned AI) era.

Deep learning has helped us solve problems that were difficult to solve with hand-coded programs. Tasks like image recognition were once very hard, and even though intense effort produced some great programs, those have all been surpassed by deep learning.

What I want to call attention to is that these problems have always existed, and people tried numerous ways of solving them with very little success, because deep learning did not exist and the general neural network paradigm was still undeveloped. Nowadays we are able to tackle some of those confounding problems using deep learning.

The problem is that we are overhyping deep learning, and that is what is going to lead to the next AI winter. But it will not happen the way most bubbles burst; it will happen in a very different fashion, which I will describe later.

When large electronic calculators like ENIAC were built and found to be faster than humans at calculation, the human imagination ran off with that little fact and we saw the first wave of "AI" hype. But it was not as intense as what we are experiencing now, because global communication was lacking.

The early hype masters were movie producers who made films showing how computers with "electronic brains" would soon take over the world. Science fiction authors also took advantage of this ambient technology to create some really good stories, which further fueled the hype for the general public. Eventually all of that died down, as the new source of doom for humans to fear became the reality of global nuclear war.

Some time later, as computers became faster and programming languages grew more sophisticated, humans were caught up in another "computers are going to take over the world" scenario. The new power of computers provided fuel for the human imagination, and with announcements about computers matching humans at several board games and demonstrations of "general intelligence", people were once again afraid of artificial intelligence, on top of their fear of total nuclear annihilation.

As the promises of AI researchers went unrealized, investors pulled out their money and the whole AI enterprise went quiet for a while. Even in the 70s, when the paper on backpropagation was published, no one seemed to pay attention.

In the 80s the AI fever was reborn with several attempts to build "expert systems". We all know how that ended, but the AI winter that ensued was soon overshadowed by the invention of the World Wide Web and everything that followed, ending in the dot-com crash.

We are now experiencing a new AI rage because researchers found a way to make backpropagation work. But we are overestimating this paradigm with our imagination. People are imagining the AI annihilation scenario once again, only this time they have access to fancy graphics programs to create stunning visuals of the Terminator.

AI has overpromised, and the low-hanging fruit of image recognition and classification in general, which is where deep learning has seen its greatest application, will soon be plucked.

With the famous demonstrations of AlphaGo beating humans at the game of Go using every kind of software and hardware muscle available, from deep learning to GOFAI techniques like Monte Carlo Tree Search, we are back to the exhibition stage, where the public is practically being begged to believe in artificial intelligence through these grand exhibitions.

It really looks like a magician on stage trying very hard to convince the audience that his tricks are real.

These grand exhibitions, while being misinterpreted as progress, are actually a sign that we are at the end of the road. Below I elaborate on how the end will come. I hope it is not sometime in 2020, because from my observation we are getting closer with every new "announcement".

1. It will be hard to generalize reinforcement learning to the real world: This is expected, because reinforcement learning has been tried before without much success. Its modern incarnation as deep RL is mainly aided by a neural network for recognizing the scene; the new policy methods are largely minor algorithmic twists on the original Bellman equations (see the sketch after this list).

2. Generative AI like recurrent neural networks, while excelling at language translation tasks, will be poor at producing meaningful content, and despite the current touting of the success of OpenAI's text summarization software, we will not get far beyond that point.
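To make the point about the Bellman equations concrete, here is a minimal sketch, entirely my own illustration rather than code from any particular deep-RL system (the function names and hyperparameters are invented for the example). It shows that the classic tabular Q-learning update and the target a DQN-style network regresses toward are the same one-step Bellman backup.

```python
import numpy as np

# Tabular Q-learning: the classic Bellman-style update,
#   Q[s, a] <- Q[s, a] + alpha * (r + gamma * max_a' Q[s', a'] - Q[s, a])
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    td_target = r + gamma * np.max(Q[s_next])   # one-step Bellman backup
    Q[s, a] += alpha * (td_target - Q[s, a])    # nudge the estimate toward the target
    return Q

# DQN-style target: the same Bellman backup, except the table is replaced
# by a function approximator q_net(state) that returns one value per action.
def dqn_td_target(q_net, r, s_next, gamma=0.99):
    return r + gamma * np.max(q_net(s_next))    # the network stands in for the Q-table

# Tiny usage example with a 5-state, 2-action table.
Q = np.zeros((5, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])                                  # 0.1 after one update

# Using the table itself as a stand-in "network" for the DQN-style target.
target = dqn_td_target(lambda s: Q[s], r=1.0, s_next=2)
print(target)                                   # 1.0, since Q[2] is still all zeros
```

The learning signal is the same one-step Bellman backup in both cases; what deep RL changes is the function approximator, not the underlying recursion.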

Generally, there will be a slowdown in the outpouring of fundamental results, and the number of news articles about AI will keep shrinking as interest in the field declines for lack of exciting announcements.

Rather than a global AI crash, we will see a gradual loss of interest in AI, because the Terminator and superintelligence will not seem to be arriving anytime soon; then, boom, hardware will become the new thing.

The next boom will be in new forms of computer hardware, and by this I don't mean expensive, ultra-cold quantum computers. I mean fundamentally new technologies that will let us create highly energy-efficient processors with close to 1000 times the speed and power of current systems.

The computer industry will shift towards hardware and away from software for a while. Most current datacenter hardware will then seem like old-school mainframe computers as these new chips come to revolutionize everything.

This will occupy us for the next decade. Around 2030, software will come back into dominance, but AI as we think of it now will have receded into the background, because more fundamental announcements in hardware will divert the media's attention away from publishing articles on the arrival of superintelligence.

So how do you insulate yourself from this coming downturn?

Whether you are a company or an individual researcher, the simple advice is to diversify!

If you're a company, do not over-invest in AI-sounding things. Make sure you don't have an army of AI engineers with no solid software engineering and computer science skills. Don't hire too many data scientists, because while data science is currently being hyped as the career of the future, most of it will soon be automated away, leaving you with a pool of data scientists with nothing to do.

Make sure your tech workforce has fundamental skills, and start assembling your arsenal of hardware engineers, the kind of people who study "computer engineering", not just computer science, so that you can take advantage of the coming hardware boom.

As an individual tech person, equip yourself with fundamental skills in computer science, algorithms, mathematics, and hardware. Mastering the fundamentals will enable you to weather whatever storm is ahead. Technology is not going anywhere anytime soon; in fact, we live in a kind of technology called nature. Specific fields of technology will come and go in cycles, but the fundamental skills will stay strong. That is why a book like The Art of Computer Programming by Donald Knuth will remain relevant for centuries.
