What is Intelligence: Cellular Intelligence

CELLULAR INTELLIGENCE

Hebbian learning has eaten the world! Many people don’t want to know what is going on in the cell body of the neuron. We have all narrowed our thinking to focus on connection strengths alone. Although modern deep-learning systems look nothing like human brains, the root idea behind neural-network-based systems was that the human brain learns by adjusting connection strengths between brain cells. This core idea has been abstracted away to what we have now, but it is still the founding idea behind this whole race of neural networks. In general, it is called the connectionist view of intelligence.
Another point of view, which some people hold, is that there are other intelligent processes within the neuron itself, beyond those arising from neural connections alone. The most famous of these alternative paradigms is the microtubular theory of consciousness, which posits that the microtubules forming the cytoskeletal framework of the neuron are mediators of intelligent processes. There is ongoing research into building intelligent systems based on this microtubule paradigm.

I am of a slightly different bent. To me, looking at one brain structure after another is all well and good, but the thing of primary importance is that intelligence is a meta-level activity and cannot be found in any single cellular structure, be it synapses, dendrites or microtubules. These components all come together to make it possible to run the software of the mind on the hardware of the brain. Of course, we still want to know how this software is represented, and that is a noble pursuit.

Rather than thinking of the cell as serving purely metabolic functions, with intelligence residing in the connection strengths between neurons, I ask: what is the foundation of these connection strengths? Of course, they rest on metabolic components within the cell, including proteins, neurotransmitters, ions and so on. Viewing everything purely structurally, as in “neurons that fire together wire together, therefore this is all there is to learning,” is half-baked. Metabolism is everywhere. I bet that if you were a molecule rolling around in the brain in response to electrical stimulus from some sense organ, you would not know that you were participating in some meta-level process that humans call thinking. To you, you would just be a molecule on a mission to bind to some receptor.

Stuart Hameroff, of microtubule-consciousness fame, has some really cool ideas about how intelligence could be running in the brain. Rather than focusing on connection strengths alone, he sees each neuron as a whole computer!

Microtubule model of consciousness

Recall the earlier example where, given some computer hardware, we could connect probes to start obtaining information and then display it to gain some idea of the electrical activity on the hardware. Meanwhile, a user connected to the same hardware through a desktop environment could be interacting with a word processor, and we, with our probes, would never be able, at least within the lifetime of the universe, to recreate the desktop environment the user was facing, because it is based on a different interpretation of the data, and we can never have access to that “interpretation”.

Similarly, if we were to dead-stop a neuron and obtain all of its metabolic information in some kind of electrical format, we would still not know the interpretation that the brain is using to transform the electrical impulses coming from the senses into the internal representation it maintains as metabolic variables within the neuron.

Just as a memory cell of a digital computer holds either an electrical charge (about 5 volts) represented by 1 or no charge (near 0 volts) represented by 0, the neuron acts like a memory cell, but it is not a simple matter of holding a charge or not. It holds much more complex information than that.

A gang of neurons connects together to hold a state, and “computation” might actually involve something like what goes on in a deep-learning neural network, except that this time the neurons are not passing simple floating-point or binary numbers but much richer data structures.
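To make that concrete, here is a toy Python sketch, pure speculation rather than biology: it contrasts a classic connectionist unit that passes a single number with a hypothetical unit that passes a structured message. Field names like firing_rate and neurotransmitters are invented purely for illustration.

```python
from dataclasses import dataclass, field

def scalar_unit(inputs, weights):
    """Classic connectionist unit: a weighted sum squashed into one number."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, total)  # ReLU-style output: just a single float

@dataclass
class RichMessage:
    """A made-up 'richer data structure' a neuron might pass along."""
    firing_rate: float
    neurotransmitters: dict = field(default_factory=dict)  # e.g. {"glutamate": 0.7}

def rich_unit(messages):
    """Combine structured messages instead of bare numbers."""
    rate = sum(m.firing_rate for m in messages) / len(messages)
    chems = {}
    for m in messages:
        for name, amount in m.neurotransmitters.items():
            chems[name] = chems.get(name, 0.0) + amount
    return RichMessage(firing_rate=rate, neurotransmitters=chems)

print(scalar_unit([0.2, 0.9], [0.5, 0.5]))   # roughly 0.55
print(rich_unit([RichMessage(0.8, {"glutamate": 0.7}),
                 RichMessage(0.4, {"dopamine": 0.2})]))
```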

Just as the input layer of a deep neural network deals with the raw representation of the information, be it a picture, sound or something else, the hidden layers deal with much more abstract representations. It is my opinion that the raw sensory information held in a single neuron is abstracted by neurons sitting in a kind of hidden layer. The prefrontal cortex, which performs the highest executive functions of the brain, could be dealing with information at the most abstract level of representation, while the neurons that do direct sensory processing in other areas of the brain deal with much more direct representations.
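As a rough illustration of that layered abstraction, here is a minimal forward pass through a stack of layers with random, untrained weights. It only shows the data flow from raw input to progressively smaller re-encodings; whether the brain does anything like this is, of course, the open question.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 4]   # raw input -> hidden -> hidden -> most abstract

x = rng.random(layer_sizes[0])  # stand-in for raw sensory data (e.g. pixel values)
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)  # random, untrained weights
    x = np.maximum(0.0, W @ x)  # each layer: a linear mix plus a nonlinearity
    print(f"layer of size {n_out}: a smaller, more abstract re-encoding of the input")
```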

I don’t imagine that computations like multiplication or addition literally go on in the circuitry of the brain. What passes for computation might really be simple transformations of data from one structure to another.
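Here is a toy illustration of computation as structural transformation: no arithmetic at all, just a fixed lookup rule rewriting one data structure into another. The mapping below is made up for the example.

```python
# Rewrite incoming structured "signals" into outgoing ones by pure lookup.
rewrite_rule = {
    ("touch", "hot"):  ("motor", "withdraw"),
    ("touch", "warm"): ("motor", "rest"),
    ("sound", "loud"): ("attention", "orient"),
}

def transform(signal):
    """Turn one structure into another with no arithmetic, only table lookup."""
    return rewrite_rule.get(signal, ("attention", "ignore"))

print(transform(("touch", "hot")))    # -> ('motor', 'withdraw')
print(transform(("sound", "quiet")))  # -> ('attention', 'ignore')
```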

What is the bit of the brain?

The computer scientist wants to know what the single unit of information storage in the brain is. This is like asking: what is the most fundamental particle in nature? There are so many particles coming out of particle accelerators these days that we can only hope to deal with things at a particular level of abstraction, without seeking any one fundamental particle.

We can deal with molecules, or we can deal with atoms or quarks, and so on. This doesn’t make either molecules or atoms fundamental; it is just about what level we are currently interested in and capable of dealing with. So if we look at the organizational hierarchy of matter that constitutes the brain, at what level can we identify a lumped abstraction consistent enough to be called a bit? Should we take the entire neuron to represent the bit? Should a tubulin monomer be considered the bit? Or the microtubule itself, or some other molecular or structural entity?

If you have a cluster of computer chips and one chip seeks to communicate with another, it is going to do so in an all-or-nothing manner. That is, the chip will either send information to the other chip of interest or it will not. If we knew nothing about computing and could only act as outside observers, and we observed that a microprocessor either “fires” a request to another processor or does not, we would be mistaken to say that all the complexity going on inside the microprocessor is irrelevant and that only the “firing” of requests for cooperation to other processors is needed to understand the computer cluster.

Following this line of thought, we could come up with something that looks like the early perceptrons to idealize all the activity going on in this cluster of microprocessors. But a lot is going on inside the individual processors that constitute the cluster, and when one processor needs to coordinate with others it sends data to them in an all-or-nothing fashion, just like neurons.
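That perceptron-style idealization is easy to sketch: each incoming signal is treated as all-or-nothing, mixed through weights and thresholded into another all-or-nothing output, with everything happening inside the individual processors abstracted away. The weights and threshold here are arbitrary.

```python
def perceptron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of incoming 0/1 signals crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Three observed "firings" from other processors, idealized as 0/1 signals.
incoming = [1, 0, 1]
print(perceptron(incoming, weights=[0.6, 0.3, 0.5], threshold=1.0))  # -> 1
```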

Modelling a system usually involves a loss of information, because we are usually interested in the simplest possible representation of the system so that we can gain understanding. In the process, we throw away a lot of details that we consider "unnecessary" at the moment. If we were to monitor a typical computer cluster, having never been exposed to clustered computing but being well trained in the art of modelling systems by abstraction, we would throw away the complicated details of the individual processors themselves and focus only on the communication signals passed from one processor to another; after all, that is what is readily understandable, because it is simple for our minds to comprehend.

Those who say that the neuron is merely metabolic support for synaptic connections, and that Hebbian learning is therefore the only way humans learn and gain intelligence, are wrong, despite the success of connectionism. The molecular activities we observe in the neuron and tag as “metabolic support” could actually be performing computation in a format completely different from what we understand as computation.

The microtubules in the neuron that provide “structural support” for the cell could also be computational structures. There are researchers looking into these kinds of systems; my advice to them is to expand their thinking to include the possibility of a larger “symbol system” and not limit themselves to binarizing, that is, trying to reduce everything to a state of 1 or 0. I don’t think nature cares about limiting itself to binary activities. It could assume an arbitrary number of states.

We design binary electronic circuitry because it is easy to design! Not because binary is the only symbol system in nature that can be used for computation. Binary is simple because we only have to deal with two states, on or off! We can squeeze all possible computation into a symbol system that requires only two symbols.

In reality, if we wanted to, we could design circuitry that handles any number of states! Just as there is a universal elementary cellular automaton like Rule 110 that requires only two states (two colours) and, in theory, can be used to perform any possible computation, we could have a 19-colour (state/symbol/alphabet) universal cellular automaton that can simulate any possible cellular automaton, including Rule 110, and is thus a universal cellular automaton and a universal computer.
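For the curious, here is a small Python sketch of Rule 110, the two-state automaton mentioned above. The rule is written as a lookup table from neighbourhoods to new states, so the same machinery would work unchanged for an alphabet of any size; you would only need a bigger table.

```python
def make_rule110():
    """Map each 3-cell binary neighbourhood to its Rule 110 output (110 = 01101110)."""
    outputs = [0, 1, 1, 0, 1, 1, 1, 0]   # for neighbourhoods 111, 110, ..., 000
    return {tuple(int(b) for b in f"{7 - i:03b}"): outputs[i] for i in range(8)}

def step(cells, rule):
    """One synchronous update with wrap-around boundaries."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

rule = make_rule110()
cells = [0] * 31 + [1]                   # a single "on" cell at the right edge
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule)
```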


The brain could be using a 22-symbol system, or it could be using 144,000; we might never know for sure, but the take-home is that it might not be binary. Consider individuals with synesthesia, who see pictures with colours representing numbers in their heads; some of them have a single image representing a number like 10,000 as an individual “symbol”, just as the symbol 9 represents nine units of something. The brain could be operating with an arbitrarily large alphabet!

The microtubular layer may not even be the primary level at which computation-like structural transformation of data takes place! Wherever there is an agglomeration of matter and some kind of energy exchange, computation-like activity can occur. It could be happening in the neurofibrils of the brain, or even in the “ordered” water surrounding structures like microtubules.

The tubulin monomers that make up the microtubules could be the units or bits of computation at a particular “level”, and even the locomotion of materials along the microtubules could be indicative of some kind of computation. We are so focused on the synapses that we forget that nature doesn’t really care about what we want things to be like; it goes ahead and does its “stuff”. Our job is to hack nature! To find pattern stabilities that we can take advantage of for our own purposes.

Some researchers are even considering dendritic computation, and I am happy about all these approaches to finding the solution to the same problem.

Imagine we could take one of our own microchips, zoom it up to be really large and slow down time so that we can actually view things going on in slow motion. Use your imagination to also see that the processor we are observing is connected to some motherboard that somebody is using to, say, browse Facebook or search Google. Now let’s light up this circuit board so that we can see the trace of activity going on. What do you think you will see?

If light were to represent the electric current, you would see spots of light that stay static for a while; these would be the registers of the CPU, holding data until some other process requests it. In other areas of the circuit, you would see lines of light zipping and zapping around and stopping at different spots, passing through circuits that constitute logic gates and either switching on or off. That is all you would see on the lighted, time-slowed, blown-up circuit board. You just see interaction! But we know that this so-called interaction is actually computation!
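A tiny sketch of that point: each gate only switches on or off as signals pass through it, yet composing those switchings is computation. Here a half adder, which produces the sum and carry of two bits, is built from nothing but AND and XOR gates.

```python
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers using nothing but gate switchings."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={c}")
```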

Computation in the human brain could even start at the atomic level, or the quark level, and spiral out to the molecular level of structures like microtubules, right up to much larger structures like synapses and dendrites. Even the glial cells, especially the astrocytes, could be doing some kind of computation, as could the soup of neurotransmitters around the cells.

Some of the most enlightening lectures I have watched are the MIT 6.002 lectures on circuits. In that class there are huge demonstrations where logic gates are built as big as a coffee mug, with the wires laid out on a kind of massive breadboard. The state of each gate is shown by a real light-emitting diode or some kind of light bulb that actually lights up. It is so cool to see how manually varying the input by flipping some large switch causes the circuit to light up in one way or another. This shows that computation can take place at any scale, large or small; it is scale-indifferent.

Atomic computers will eventually become a reality, but I doubt that the current form factor of quantum computers like D-Wave’s will be the form these atomic computers finally take. These room-sized quantum computers remind me of the days of ENIAC and UNIVAC! One day some crazy fellow, or fellows, will find a way to build atomic (quantum) computers without the need to super-cool atoms! I suggest we really look into the ideas Stephen Wolfram described in his book A New Kind of Science; those ideas could really help in thinking about atomic computers.
