
Understanding understanding

This post is about understanding understanding, specifically as it applies to engineering artificially intelligent systems that display the human quality of understanding. If you have a better title for this post, feel free to suggest it; I couldn't think of one that better captures the difficulty imposed on us by the words of natural language we must use in our daily communication.


In my book, What is Intelligence?, I spoke at some length about the limitations of natural language in expressing ideas, and about how programming could form the basis for engineering a more direct, less ambiguous language, one that expresses many of our ideas in a form that can be directly understood without confusion.

In this post I will speak briefly about some ideas I have on understanding the process of human understanding, in a way that could inspire someone battling with the problem of building systems that understand the world.

The good news is that there is a very simple description of the process of understanding, one I find most helpful because it is neat and avoids a lot of complication.

Each human being has an internal model of their own. This model contains all the facts they hold about the world and the connections between those facts. We must use the word "facts" carefully here, because determining what is really factual requires a thorough analysis of accuracy. What I mean by a fact in this sense is any piece of information that a human being believes to be true beyond much doubt. These facts might not even be verifiable as true, but the human holds on to this knowledge independent of external reality.

I am using "fact" in this sense to refer to a definite piece of knowledge, not to the scientific accuracy of that piece of knowledge.

As the mind-brain makes connections between these individual pieces of knowledge, the human mind builds a single software image of its internal reality: the sum total of all the knowledge it has received and accepted as fact from the outside world. Understanding, in this sense, can be seen as the total coherency of one's internal database of information.

When presented with a new piece of knowledge, the human tries to understand it, which requires incorporating this new external knowledge into what is already known, that is, incorporating new data into the software image of their entire life experience.

If the newly received information is very novel and diverges greatly from the structure of what is already known, most people will simply ignore or reject it, because the effort needed to incorporate this new piece of information into their already consistent internal software image might be too expensive.
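This acceptance-or-rejection process can be sketched in code. The sketch below is my own toy model, not anything from the post: the internal software image as a graph of facts, where a new fact is accepted only when linking it to what is already known is cheap enough.

```python
# Toy model of the internal software image (all names illustrative):
# a new fact is incorporated only if it connects cheaply to known facts.

class SoftwareImage:
    def __init__(self):
        self.edges = {}  # fact -> set of connected facts

    def add_fact(self, fact, related):
        """Unconditionally store a fact and its connections."""
        self.edges.setdefault(fact, set())
        for r in related:
            self.edges.setdefault(r, set())
            self.edges[fact].add(r)
            self.edges[r].add(fact)

    def incorporation_cost(self, related):
        """A fact touching few known facts is 'expensive' to incorporate."""
        known = [r for r in related if r in self.edges]
        return float("inf") if not known else 1.0 / len(known)

    def try_incorporate(self, fact, related, budget=1.0):
        if self.incorporation_cost(related) > budget:
            return False  # too novel: ignored or rejected
        self.add_fact(fact, related)
        return True

image = SoftwareImage()
image.add_fact("addition", [])
image.add_fact("multiplication", ["addition"])

# Connects to known facts -> cheap -> accepted.
print(image.try_incorporate("factoring", ["multiplication"]))  # True
# Connects to nothing known -> infinitely expensive -> rejected.
print(image.try_incorporate("gauge theory", ["symmetry groups"]))  # False
```

The budget here stands in for the effort a person is willing to spend restructuring; lowering it models the conservatism described above.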

As children, humans tend to have a very small software image going on inside their heads, so it is easy to incorporate new knowledge into it. In fact, the images of most children are so malleable that they can be presented with information that massively contradicts their view of the world and will quickly alter their internal image to incorporate this radical new information, building a new internal software image that gives them new views of life.

As we get older and the software image expands, it becomes very difficult to change this internal image, and we start preferring only those facts that can be quickly incorporated into our base software image. Anything too outlandish is discarded, because too much effort is required to incorporate it.

In some people, even though they have not spent much time filling their knowledge banks with new knowledge, the mere fact of ageing makes them less inclined to receive information they have not encountered before, even though their internal software image is very small due to minimal input over the course of their lives.

Even though I have talked about facts and knowledge being stored, one must understand that it is not the raw data content of the knowledge that matters most. It is the internal structure of each piece of knowledge that gets stored in the mental software image, because it is that structure that will be useful later in an applicative sense, even though storing the raw data and facts is also important.

As a quick example, when learning to solve quadratic equations your brain is actually trying to extract the core structure behind quadratic equations, which is what you incorporate into your base internal software image, an image that already contains things like the structure behind addition, subtraction, multiplication, factoring, etc. Successfully learning how to solve quadratic equations implies that you have been able to incorporate the structure behind quadratic equations into your internal software image.
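To make the quadratic example concrete (the code is my own illustration, not from the post): the extracted "structure" is the quadratic formula itself, not any single solved problem, and once stored it applies to every new instance.

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, or None if there are none.

    This function is the stored *structure*: one general pattern
    covering infinitely many concrete quadratic problems.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# The same structure, applied to situations never seen before:
print(solve_quadratic(1, -3, 2))  # (1.0, 2.0)
print(solve_quadratic(1, 0, 1))   # None: no real roots
```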

With your internal software image updated, when you encounter a new situation that requires some knowledge of quadratic equations, you simply apply your knowledge to the new situation. But when you encounter a new system like linear algebra, you have to extract new structures and incorporate them into your existing infrastructure. That is how you understand things.

But some new structures are so far removed from our internal software image that we cannot easily see structure in them, because the structures we see in things often originate from our internal software image: using what we already know, we link up with new structures that we do not yet possess.

Some knowledge possesses a structure so far removed from any structure we have previously encountered that it is very difficult to incorporate into our existing structure. This is where very intelligent people outshine average individuals. They have another mechanism, call it "getting used to it", which is just another way of saying repeat until familiar via memorization.

When a new structure we encounter in some knowledge-containing system is far from anything in our internal software image, the only way forward is to create a new subsystem with this new structure at the root. This involves raw allocation of new memory in which we simply brute-force store the new facts and structure about this new system; from there we can start building another tree of knowledge as more and more structures are forced into this new image, which we can maintain independently of our primary knowledge base.

This is the hardest phase of understanding, because sometimes we encounter new knowledge structures that can overturn everything we have already stored in our internal image, things we might hold dear. Very intelligent humans do not necessarily force new knowledge into their preexisting images all the time, which becomes very difficult as one's knowledge base expands.

The sheer number of nodes one would have to disconnect and reconnect to rebuild one's internal software image might be too great, so intelligent people simply create a new memory area where the fundamentally new fact, greatly at variance with all they have known, is stored; they get used to it and build another tree from there. Later in life they might discover some massively unifying insight, an edge connecting a bunch of previously isolated knowledge networks to their main knowledge network, and with some efficient edge relaxations they are able to incorporate the new knowledge network into the old, solid software image rooted in childhood experiences.
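The unifying-insight idea above can be sketched with a standard disjoint-set (union-find) structure. All names in this sketch are mine, chosen for illustration: isolated knowledge networks are separate sets, and a single connecting edge, the insight, merges them into one image.

```python
# Hedged sketch: isolated knowledge networks as disjoint sets,
# merged the moment a single connecting edge (the insight) appears.

class KnowledgeNetworks:
    def __init__(self, facts):
        self.parent = {f: f for f in facts}  # each fact starts alone

    def find(self, f):
        """Follow parents to the root of f's network, compressing paths."""
        while self.parent[f] != f:
            self.parent[f] = self.parent[self.parent[f]]
            f = self.parent[f]
        return f

    def connect(self, a, b):
        """One edge between two facts merges their entire networks."""
        self.parent[self.find(a)] = self.find(b)

    def unified(self, a, b):
        return self.find(a) == self.find(b)

mind = KnowledgeNetworks(["algebra", "geometry", "coordinates"])
mind.connect("algebra", "coordinates")

print(mind.unified("algebra", "geometry"))  # False: still isolated networks
mind.connect("coordinates", "geometry")     # the unifying insight
print(mind.unified("algebra", "geometry"))  # True: one software image
```

One well-placed edge unifies whole subtrees at once, which is why a single insight can feel like it reorganizes everything.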

Not everyone has the luxury of seeing all the disparate knowledge networks in their minds unified, but it is the ability to quickly get used to things that might seem incoherent in the light of previous experience that matters most for rapid understanding.
