
What is Intelligence: Brain and Mind


The brain is the hardware and the mind is the software! Many people ask what programming language the mind is written in, but I don't think that question is as important as it sounds. If we think of the brain as all there is, we will fall into the trap of believing that by replicating it we will have something that thinks. The brain is only physical hardware, albeit very advanced hardware.

Imagine we were given some exposed computer hardware, one without all the plastic covering, while someone used this computer through peripheral devices connected at a distance. Say you had a circuit board that is a computer in your hands, with wires running away from it into a wall; you don't know where the wires go. All you have to deal with is the board, along with probes and all sorts of circuit analysis tools.

Now the person in the other room has a screen and a keyboard and can't see any of the messy wires and circuits that you are exposed to; they just see the nice screen, with some wires going into the wall to who knows where, alongside a keyboard and a mouse.

The person using the computer in the room with the screen/keyboard/mouse interface interacts with the computer through “software”: they may have a word processor, a browser, spreadsheets, etc., all within a beautiful GUI system. They type characters with the keyboard and click items with the mouse. They understand “software” as the graphical things they click on that enable them to do their tasks.

The person in the other room with the circuits has no conception of what software is; all they have is a bunch of electrical probes to collect data and analyze it. They measure the electricity on the board using different tools and obtain charts of electrical patterns! They perform some statistics on this electrical data stream and start gaining insights by observing the charts.

Sometimes there is no data! This indicates that the system is off. At other times the waveform has a particular character depending on what applications are running, and the person at the backend analyzing the circuits starts sorting the different activity types into classes. When the person at the frontend is typing documents, the electrical activity on the circuits is of a different type than when they are watching movies, listening to music, or doing spreadsheet work. The backend guy does not know that each of these pieces of software exists, but he can clearly identify which “class” of activity is going on by observing the patterns of activity on the circuit board.
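The backend guy's workflow can be sketched in code. This is a hypothetical toy, not real circuit analysis: the signals are synthetic noisy sine waves standing in for electrical activity, the class names are made up, and the "statistics" are just two crude features (mean power and zero-crossing rate) with a nearest-centroid classifier.

```python
import math
import random

def features(signal):
    # Two crude statistics on the electrical stream:
    # mean power and rate of zero crossings.
    power = sum(x * x for x in signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (power, crossings / len(signal))

def make_signal(freq, amp, n=500):
    # Synthetic "electrical activity": a noisy sine wave.
    return [amp * math.sin(2 * math.pi * freq * t / n) + random.gauss(0, 0.1)
            for t in range(n)]

random.seed(0)
# Build a centroid per hypothetical activity class from labeled samples.
classes = {
    "class 1 (say, typing)": [make_signal(5, 1.0) for _ in range(10)],
    "class 2 (say, video)": [make_signal(40, 2.0) for _ in range(10)],
}
centroids = {
    label: tuple(sum(f[i] for f in map(features, samples)) / len(samples)
                 for i in range(2))
    for label, samples in classes.items()
}

def classify(signal):
    # Assign an unseen waveform to the nearest class centroid.
    f = features(signal)
    return min(centroids,
               key=lambda c: sum((f[i] - centroids[c][i]) ** 2 for i in range(2)))

print(classify(make_signal(40, 2.0)))  # expected to land in class 2
```

He never sees a spreadsheet or a movie; he only sees that one cloud of feature values is separable from another.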

But the truth is that the person at the backend holding the circuitry in their hands is actually holding everything about the computer, software and all. The person viewing the screen and interacting with the computer is really just an observer, looking at what is going on in the raw circuitry through their beautiful interface.

All the software, memory, graphics and everything else is really just a bunch of electrical patterns on the circuit board; the interface is just a way of peeking into this electrical world in a form that is easily digestible by a human. The interactions of the person at the interface with software, using the peripheral tools, cause changes to occur at the backend, but the guy at the backend cannot see any of the fancy things going on on the screen and only sees signals registered on his electrical analysis tools.

This is the same thing going on with our brain and mind. When we are thinking and seeing mental pictures, hearing mental sounds and concocting thoughts and emotions we are actually interfacing with our brain using our minds. Our mind is the screen, keyboard and mouse of our mental computer while the physical brain itself is the hardware.

The brain is an electrical computer, and just as you can analyze electronic circuitry with probes, you can analyze electrical activity in the brain. This activity will reflect the kind of “thoughts” going on in the mind of the individual.

Just as it is difficult to work out the design of the hardware merely by observing what software does at the interface level, because software is magic, a mountain of illusions, it is also difficult to work out what the software at the interface level is doing even when we can see the exact electrical patterns on the hardware.

If we draw a further analogy from the computer system abstraction hierarchy, we know that there is a mountain of abstraction below the final presentation of software that is ready for interaction on a visual display. Whether the brain possesses this kind of hierarchy we do not know, but we do know that there is some language by which patterns are encoded and combined with other patterns to produce the definite images and sounds we see in our heads.

Since it will be difficult to understand the human brain/mind by probing and analyzing the brain, because all we can see is the effect of the mind on the brain and we can never arrive at any kind of static source code to study, it will be better to use our creative imagination and abstract thinking skills to come up with models and algorithms for building synthetic intelligence.

Just like the apple seed within the fruit of the apple is capable of producing an apple tree that bears fruit, the seed of intelligence is within the fruit of intelligence, which is the creative imagination and abstract thinking. With the fruit of the creative imagination and abstract thinking, we can access the seed of intelligence and create a synthetic intelligence that grows to achieve creative imagination and abstract thinking.

Although we borrowed from Hebbian learning research to build the first neural networks, the newer neural networks with backpropagation as their core algorithm are fundamentally different from any real activity going on in the brain/mind. It is the ingenuity of creative imagination and abstract thinking, feeding on patterns from a wide variety of fields and synthesizing the backpropagation algorithm, that has enabled progress in all the pattern recognition work powering the so-called modern AI revolution.
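To see how unlike brain activity this is, consider backpropagation at its most stripped-down: compute an error on the forward pass, push its gradient backward through the computation, and nudge the parameters downhill. Here is a minimal sketch in that spirit (my own toy numbers, not anyone's production setup), where a single weight learns the mapping x → 2x:

```python
# Minimal gradient-descent sketch: one weight w learning y = 2x.
# Forward pass computes a prediction, the "backward pass" is just the
# chain rule giving d(error)/dw, and the update steps w downhill.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
lr = 0.05
for _ in range(200):
    for x, y in data:
        pred = w * x               # forward pass
        grad = 2 * (pred - y) * x  # backward pass: gradient of squared error
        w -= lr * grad             # gradient descent update
print(round(w, 3))  # converges to 2.0
```

Nothing here resembles neurons firing; it is calculus applied to an error signal, an abstraction synthesized from mathematics rather than copied from biology.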

The brain is an interesting organ to study and can yield a lot of insight into brain diseases. It can also serve as inspiration for patterns we can extract and apply in our own systems, but just as it was an error to try to mimic the biological flight of birds rather than do aerodynamics research, it is an error to try to replicate the brain in order to come up with an intelligent entity.

If you possessed the circuitry of an active, running computer and were trying to come up with a visual understanding of the activities going on within it, and assuming you possessed the ingenuity to actually invent and manufacture a visual display to observe the data, there is very little chance that you would be able to organize the patterns of activity into the form that the person in the other room with the monitor and keyboard observes.

You won't be able to see a desktop environment with all the files and folders because you have no idea of the amount of transformation that has gone on from the most primitive machine language program all the way up to a high-level programming language that enables you to create GUIs. This is all while possessing the full stack of data available from observing the patterns of electrical activity going on in the circuit. The meaning you will derive from this data will be completely different from the meaning being displayed to the person viewing the GUI.

You might end up with some kind of line graphs, and you can point to peaks or troughs in a graph and say that this is when a program of class 1 is running, but you can't form a mental picture of what a spreadsheet GUI looks like, even assuming the spreadsheet is what the class 1 program represents.

Because of this, it will be hideously difficult to go from the electrical patterns of activity in a brain to simulating consciousness in the manner it was done in the movie Transcendence. We could gather all the electrical data by connecting probes to the brain, but going from that data to what a mind looks like will be difficult.

Which part of that data will we call software? Which will we call raw data? In a typical Von Neumann architecture computer, program and data are mixed up in memory. It is only when we fetch something from memory that we decode it as either program or data; in essence, it is all bits, 1s and 0s.

