What is Intelligence: Computational Equivalence


CellularAutomaton[<|"Dimension" -> 2,
"GrowthCases" -> {3, 6}|>, {{Table[1, 5]}, 0}, {{{200}}}]
Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication (Wolfram 2002, pp. 5 and 716-717).

As we talk of building intelligent machines, one might pause to ask whether it is even possible to build intelligent machines in the first place. There is a story from the history of mathematics; I will not elaborate it in detail, but only call to mind its salient points and the lessons we can learn from it as we set out to build intelligent machines.

It is known that Alfred North Whitehead, in collaboration with Bertrand Russell, undertook an enormous work. The work was titled Principia Mathematica, and it was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven.

When they set out to do this, they were sure from the start that their task was achievable, because the very consistency of mathematical thought implied that it should be possible to build a coherent system from which all of mathematics could be derived.

But in 1931, Kurt Gödel, with his incompleteness theorems, proved that the goal sought in the development of Principia Mathematica, or in any other attempt like it, was bound to fail.

This means that if Gödel had met the authors of PM (Principia Mathematica) before they published it, that is, before 1910, and they had discussed what they planned to do, he could have told them from the start not even to attempt it, because his theorem shows that their effort was bound to fail.

This applies to the field of intelligence research and to the big question: will it ever be possible to create human-level intelligence in a machine, such that it is practically impossible for an observer to differentiate human intelligence from that of the machine?

The big question here is: what is intelligence in the first place? In a chapter on human intelligence, I will discuss why words like intelligence can be difficult to pin down explicitly, due to the mushiness of human language.

You might think I am contradicting myself, because this work is titled What is Intelligence? and therefore I must have “defined” intelligence in some part of it, especially where I talked about AGI. But I have not given any specific definition of what intelligence is. I am only using the word intelligence here as a tag to refer to that broad swath of activities that we denote as intelligent.

When I said that AGI was to be found in understanding the fundamental network structures of the brain, and that studying generalized networks in their most abstract sense is the foundation for building general intelligence, I was referring to a mode of intelligence: that which, for all practical purposes, a human observer can point to as intelligent activity, because it is recognizable and fits our mental conceptions. At the lowest level nothing really looks intelligent; from casual observation it might seem like mindless action. Even the atoms that constitute the neurons of the brain are just existing, doing what atoms do. It is when they conglomerate as neurons, and when these neurons become a brain, that we can say hey! that is intelligence!!

But humans are not the only entities that possess brains, nor do we possess the biggest possible brain, so mere possession of a brain, or even of a big one, does not by itself indicate intelligence.

We are intelligent because we possess a certain kind of mental software. By the word intelligent here I am referring to the human cognitive activities that we think of as demonstrating intelligence, just to maintain a meaningful conversation with the humans who are going to read this work. But nothing stops an atom from being intelligent just because we cannot understand what it is doing and thus cannot ascribe the tag intelligent to it.

The principle of computational equivalence, as stated in the first paragraph of this chapter, tells us that, unlike the unfortunate authors of PM, whose hopes were shattered by Gödel’s theorem, we will be able to create machines that are intelligent, because the computation going on in the brain and the computation going on in a computer are equivalent; thus if a brain is capable of demonstrating intelligence, a computer can be designed to demonstrate it.

Why would I make such a bold claim? Well, another principle comes to my rescue: the principle of universality. Universality is what makes it possible for us to build general computers and general programming languages. A universal system is one that is capable of representing all kinds of possible programs, and the principle of computational equivalence states that once a system is beyond a certain, fairly low threshold of simplicity, it is equivalent to any other system of similar sophistication. So since the human brain is performing activities that are not obviously simple, it is as sophisticated as any other system we can find in nature as it is, or in nature as augmented by humans in the design of digital computers.
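The idea of universality can be sketched concretely. Below is a minimal, hypothetical illustration in Python (the instruction names and the `run` function are my own invention, not from any source): one fixed stack-machine interpreter whose behavior is determined entirely by the program it is fed as data. This "program as data" property is the essence of a universal system.

```python
# One fixed "machine" that can run any program handed to it as data.
# The machine itself never changes; only the program does.

def run(program, stack=None):
    """Execute a list of instructions against a stack; return the stack."""
    stack = list(stack or [])
    for op in program:
        if isinstance(op, int):   # literal: push the number onto the stack
            stack.append(op)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":         # duplicate the top of the stack
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack

# The same machine computes different functions given different programs:
square = ["dup", "mul"]
print(run([2, 3, "add"]))   # addition program
print(run(square, [7]))     # squaring program
```

A general-purpose computer is this idea carried to its limit: one physical machine, infinitely many behaviors, selected by software.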

The thing to inquire about is this: is the human brain really computing? If you define computing very narrowly, as arithmetic and logic, then you might not accept that the human brain is computing. Indeed, many people say that the human brain is not a computer. But a computer doesn’t have to be digital, or a von Neumann machine, to be called a computer; computation is a concept beyond any particular implementation on any substrate. Computation is any process that transforms a set of inputs to outputs via a rule. The brain transforms inputs to outputs, and therefore it is computing; and even though the human brain and the digital computer are composed of completely different architectural units, the fact that they are performing computations of equivalent sophistication means that they are computationally equivalent.
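That definition, inputs transformed to outputs via a rule, needs nothing more than a lookup. As a minimal sketch (in Python rather than the Wolfram Language of the opening snippet), here is one step of elementary cellular automaton Rule 110, a system Wolfram showed to be computationally universal despite its trivial update rule:

```python
# Computation as rule-based transformation: each cell's next value is
# looked up from the bits of the rule number, indexed by its 3-cell
# neighborhood. Rule 110 is known to be computationally universal.

RULE = 110  # the rule number encodes outputs for all 8 neighborhoods

def step(cells):
    """Apply Rule 110 once to a list of 0/1 cells (wrapping at edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # 0..7
        out.append((RULE >> neighborhood) & 1)              # look up the bit
    return out

row = [0, 0, 0, 0, 0, 0, 0, 1]  # start from a single live cell
for _ in range(3):
    row = step(row)
```

Nothing in this rule looks intelligent, yet by the principle of computational equivalence it is already above the threshold where a system can, in principle, emulate any other.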

Therefore it will be possible for us to build human-like intelligence on machines. To achieve this, it is not necessary that we focus on trying to replicate human intelligence, although that would seem to be the avenue where we can gain the most in the shortest period of time. What we should be looking at, if we ever hope to create an agent that evolves its own intelligence rather than something patched together with unsustainable trickery, is to build our research from a fundamental level, like the one offered by networks as the ultimate form of representation.

In my opinion, AGI is a much more general phenomenon than human cognition. Human cognition is just an instance of generalized intelligence, and at a certain level, even human neurons are just another specialized instance of intelligent architecture.

The fact that modern AI research has veered significantly from the typical operations of the original perceptron, which was like the glider stage of human flight development, is a sign that intelligence might be a much more generalized structure than we have thought so far, and that intelligence might be much more alien than any specific biological instance, like the human brain, that we have been dealing with.

At the beginning of AI research, we were fascinated with the abilities of the human mind; we saw that we were capable of much innovation, like building planes, nuclear reactors, etc. But the early era of AI was working on very high-level problems of cognition. As I said previously, they were trying to copy human mind software into computer software. It looked quite possible, because we had general computer languages, and we did not have a Gödel who could look into the future and tell us that those early attempts would fail, even though they worked on toy problems.

The kinds of stuff we are doing in the modern AI era of deep learning don't even look like intelligence when viewed from the vantage point of cognition. Backpropagation of error and stochastic gradient descent do not look like they are performing any intelligent actions, and yet they help us build image classifiers that rival humans in performance. People who are relics of the earlier AI days are still working on cognitive architectures, trying to replicate in computers the kind of operating system that runs in human minds. Their hope is that with the right architecture AGI will erupt; this is a weak approach.

Modern AI researchers have cleverly remodelled the AI problem as one of perception/response, and given computers some of the basic abilities that we humans possess, like the ability to identify a cat or a dog in an image shown to them.

If you were to give the software code of the most impressive image recognition system to an early AI researcher working in the 60s, they would dismiss it as very basic and unintelligent, because it would be filled with algorithms that don't directly answer any question of cognition. I also think that if you demonstrated the image classification abilities of these systems, they would dismiss them as mere trickery, since they couldn't inspect the code and find any modules that do feature extraction, say by using a library of hand-engineered features.

Modern AI researchers are building the nuts and bolts, working with the foundations. Thanks to neural networks (networks of interconnected functions, “neurons”, which transform their inputs into outputs following certain rules), we are able to work at the lowest levels of human cognition. From there we can build all kinds of cognitive software for achieving specific human-oriented goals, as the early AI people tried to do at first. They had the machines, good programming languages, and the smarts, but they didn’t have our platform of neural networks with backpropagation, which is responsible for most of our current progress.
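To make the "nothing here looks intelligent" point concrete, here is a toy sketch, entirely my own construction, of a single sigmoid neuron trained with the chain rule (backpropagation reduced to one unit) and stochastic gradient descent, learning the OR function. The loop is only repeated nudging of weights down the error gradient:

```python
import math
import random

random.seed(0)                       # fixed seed so the run is repeatable
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table

def forward(x):
    """Sigmoid neuron: squash the weighted sum of inputs into (0, 1)."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

lr = 1.0
for _ in range(5000):
    x, y = random.choice(data)       # "stochastic": one random sample per step
    out = forward(x)
    # Chain rule for squared error through the sigmoid:
    # d(error)/d(pre-activation) = (out - y) * out * (1 - out)
    grad = (out - y) * out * (1 - out)
    w[0] -= lr * grad * x[0]         # descend the gradient
    w[1] -= lr * grad * x[1]
    b -= lr * grad

predictions = [round(forward(x)) for x, _ in data]
```

Mindless arithmetic, yet the weights end up encoding the OR function; scaled up to millions of such units, the same mechanism yields image classifiers that rival humans.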

Humans and computers are computationally equivalent because they are performing computations that are above the required threshold of simplicity. Because of this equivalence, and regardless of the fact that humans have a different architecture for achieving these computational goals, it will be possible to build intelligent machines that first approximate human intelligence and eventually transcend it.

