Why we desperately need AGI (Artificial General Intelligence)

Using our raw intellect to comprehend the structure of the entire universe is like trying to run around the world on our own legs. Human mathematics, in all its sophistication and profundity, depends deeply on the structure of our physical brains; if it did not, we would not be able to use it as a form of reasoning.

But this human mathematics, produced by the human mind, which has enabled us to comprehend a single drop out of the ocean that is complete knowledge of the structure of the universe, is no different from the muscles of our body, and trying to use it alone to comprehend the universe is like trying to run around the globe on raw muscular power.

Even our current use of computing in physics is somewhat like equipping an athlete with good running shoes, proper hydration, and nutrition. It might offer some improvement in how far or how fast the athlete runs, but the athlete remains bounded by hard limits, because our use of computers in physics discovery is still bounded by our human mathematics.

We use computers like steroids to enhance our human mathematics: an athlete on steroids might outperform natural athletes, but both groups are still bounded by hard limits such as the total theoretical capabilities of the human body.

To comprehend physical reality in its entirety, we would have to build a mind on a machine. Simply put, we really need AGI, and building it would be like building an airplane to circumnavigate the planet rather than relying on our muscles alone. By doing this we would massively amplify the abilities of our physical bodies. It is a paradigm shift from optimizing on one level, enhancing our physical structure, to using only a tiny aspect of our physical selves, the brain and most importantly the neocortex, to build something that transcends our muscle power: an airplane, which lets us achieve a goal that would be unthinkable with our bodies alone, circumnavigating the world.

AGI could invent new mathematics that, while incomprehensible to humans, would be capable of modelling the entire structure of the universe. And even though this new super-mathematics would be incomprehensible to us, AGI could compile its most important ideas down to our human mathematics or computer code, letting us grasp the most important aspects of the ideas and structures it contains, just as a computer monitor gives us a digestible picture of the underlying contents of computer memory.

Although human mathematics tries to be as abstract and representative as possible, it is deeply bounded by our own human brains: it is comprehensible to us, and so far it has represented the physical universe to a reasonable degree. But the kind of mathematics that could fully represent the complete structure of the universe might not be comprehensible to humans with our three-dimensional brains. If we think our mathematics, in any form, is capable of fully representing the universe because it has so far succeeded in representing so many aspects of it, then we are sliding back into a geocentric view of the universe; this time around, our human mathematics, tied to the structure of our human brains, would be the centre of the universe.

I think our mathematics is reasonable and powerful enough to represent the aspects of the universe that are meaningful to us as humans, and even then it is limited by invisible blocks like those uncovered by Kurt Gödel. But if we think it is sufficient to uncover every aspect of the structure of the universe, then we are also implying that our brain structure, which gave birth to our mathematics, is the universally best structure for representing intelligence, an idea that is morbidly human-centric and reminiscent of the geocentric theory. If instead we admit that we might be smaller bits of intelligence orbiting larger, more central intelligence, reminiscent of heliocentrism, then we can really understand our place in the scheme of things and our limitations, and think of ways to overcome them, which might involve creating a mind on a machine that could evolve faster than ours toward a more general mind-structure, superior to ours in its comprehension of the universe.

How can we create a mind that is more powerful than our minds?

It is obvious that we cannot directly code a mind more powerful than our own, because our programming can only automate processes we already comprehend. No matter how complex an algorithm is, even if most people cannot understand it, as long as it can be comprehended by one person, probably the person who coded it, it cannot depart from the structure and capabilities of the human brain.

So how do we proceed to create a mind more powerful than ours? We must start with something we comprehend that is also capable of evolving structures and functions that are powerful and actionable but might be incomprehensible to someone with a three-dimensional brain.

This is just as our intellect is incomprehensible to the lower primates we evolved from, even though they possess very similar mental structures. We must also understand that incomprehensibility does not necessarily imply superiority: if someone presents a memory dump from a program running on some unknown custom architecture, it might be incomprehensible, at least for a while, but that does not mean the architecture is superior to the ones we already know.

Another thing to consider is that obfuscation can produce incomprehensible data, but that does not make it superior either.

What I mean by a superior mind being incomprehensible yet capable of producing actionable artifacts like code or equations is this: a superior mind could, for instance, build something fascinating like a fully functioning quantum computer, at least in software simulation but with full instructions for physical realization, yet upon inspecting the underlying code through which it achieved this we would have absolutely no clue how it works, because of how far removed it is from human reasoning. Parts of this complicated execution trace could still be compiled to some human-comprehensible language like mathematics or computer code, but never in its entirety, because it contains structures that cannot be represented in the axiom system of any known mathematics or programming language.

Even if this kind of artificial mind invents a human-comprehensible language to represent its ideas, as long as it is limited to representing them within structures the human brain can comprehend, it will not fully succeed, because of core limitations of the human mind.

A similar scenario is this: although we could train a gorilla in some rudimentary form of human communication, we would not be able to teach it group theory no matter how hard we tried. Not because of any failing on our part, or any lack of raw physical capability in the gorilla, such as digestion or metabolism, but because we cannot represent the abstractions of group theory in a form that can be communicated to the creature.

So even if the creature learns some basic human communication, of the sort Jane Goodall was able to achieve with her primates, there is no foundation upon which abstract ideas can be communicated.

An artificial mind that is as general as the human mind could evolve to degrees no human can ever attain, and if it tried to show us some theory it had discovered, whether in its own mathematics or in how it represents the complexities of our physical universe, it would be like handing a chimpanzee a book on algebraic geometry and expecting it to make an atom of sense of it.

So even though we share roots with the chimpanzee, we have gone far beyond it. It would be the same if we created an artificial mind of general capability: starting from the roots of our programming and mathematics, it would branch off so far beyond us that we could not comprehend its internal workings, only perhaps minor parts of the results it provides that are meaningful to our own nature.

Searching for Intelligence

Every attempt to directly code an intelligence superior to ours will fail, because our mental limitations will by design be infused into whatever we try to directly engineer. The programming languages that enable us to create software only help us automate fixed structures we already have in our minds. Even if we involve randomization in our designs, we are not searching through the infinity of possible structures out there, only within the tight geography of structures that our problem definition allows us to explore.

Therefore, in order to create a program that can evolve beyond our intelligence, we would have to search for that program: explicitly code only the machinery that lays out the geography of exploration, and then let it search for other structures, using as our fitness criterion whether a candidate can create a program that solves some insanely hard but easily verified problem. The verification matters, because otherwise we might not believe that our program really did create a program that solved the problem.

The most concrete example of what I am describing is genetic programming. In genetic programming, we try to evolve a program that solves a particular problem using processes that occur in biological cells, such as genetic recombination and mutation. We apply these processes to programs, that is, we recombine sections of programs with other programs and perform random mutations, until we produce a solution to the problem.

As profound as the process described in the previous paragraph may seem, it has already been used to discover, rather than explicitly code, solutions to many problems that would be insanely difficult for a human to code by hand.

In genetic programming, the machinery we explicitly code is the setup for performing the genetic recombination and mutation of programs, just as an operating system gives us the machinery for doing all kinds of simulations in application programming. (Something we usually don't think about is that application programs are often simulating physical systems; your word processor, for example, is simulating a physical typewriter, not very directly, but you get the gist.) The underlying genetic programming infrastructure gives us this kind of machinery, which creates the geography in which we can search for solutions to our problems, expressed as programs.
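
As a concrete illustration, here is a minimal sketch of that machinery in Python. It is a toy of my own construction, not the API of any particular genetic programming library; the operator set, tree depths, population size, and the example target function are arbitrary choices. It evolves small arithmetic expression trees toward a target function using random subtree crossover and mutation, with squared error on a handful of sample points as the fitness criterion.

import random
import operator

# The machinery we explicitly code: grow random program trees, recombine
# them, mutate them, and score them. The solution itself is searched for.

OPS = [operator.add, operator.sub, operator.mul]

def random_tree(depth=3):
    # A leaf is either the input variable 'x' or a small integer constant.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-5, 5)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return op(evaluate(left, x), evaluate(right, x))

def random_subtree(tree):
    while isinstance(tree, tuple) and random.random() < 0.5:
        tree = random.choice(tree[1:])
    return tree

def crossover(a, b):
    # Recombination: graft a random subtree of b into a random position of a.
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

def mutate(tree):
    # Mutation: replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree, target, xs):
    # Lower is better: squared error against the target on sample points.
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def evolve(target, generations=200, pop_size=100):
    xs = range(-10, 11)
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, target, xs))
        if fitness(population[0], target, xs) == 0:
            break  # an exact program has been discovered
        survivors = population[:pop_size // 5]
        population = survivors + [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
    return population[0]

if __name__ == '__main__':
    # Discover, rather than hand-code, a program computing x*x + 2*x + 1.
    # The result prints as a raw nested tuple of operators and leaves.
    print(evolve(lambda x: x * x + 2 * x + 1))

Nothing in this setup tells the system what the winning program should look like; we only supply the machinery for growing, recombining, mutating, and scoring candidates, and the program that fits the target is discovered rather than written.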

Genetic programming is merely one way of searching for computational structures that solve our problems; we also have cellular automata and other universal systems of computation that can let us navigate the computational universe of possible structures in search of solutions. The advantage of genetic programming is that the solution usually comes in the form of an executable program, such as a LISP program you can actually run on a computer.
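
To show how little machinery such a universal system needs, here is a tiny sketch of an elementary cellular automaton, Rule 110, in Python; the row width and number of steps are arbitrary choices of mine. Each cell's next state depends only on itself and its two neighbours, yet this rule is known to be computationally universal.

RULE = 110

def step(cells, rule=RULE):
    n = len(cells)
    nxt = []
    for i in range(n):
        # Pack the 3-cell neighbourhood into a number from 0 to 7 and
        # look up the corresponding bit of the rule (wrap-around edges).
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> pattern) & 1)
    return nxt

if __name__ == '__main__':
    row = [0] * 63 + [1]  # start from a single live cell
    for _ in range(32):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)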

Genetic programming is merely a gateway into the world of computational structures represented as executable computer programs. It shows us the path to eventually searching for, and finding, a program that does not just solve some specific everyday problem, but produces programs that solve all kinds and varieties of problems.

If programs represent bits of intelligence, then there could be a program that produces other programs. Just as a compiler translates higher-level code into lower-level code, this program could translate higher-level abstractions of ideas, imperceptible to us, into lower-level code in a programming language or mathematical symbolism we can understand. But I doubt it will be possible to translate every possible higher-level abstraction into a form representable in human-comprehensible computer programs, and that is what would make this program of programs superior to human intelligence.

Genetic programming can only be a start, because if we properly generalize what is really going on in it, the transformation of graph structures, of which a tree is a specialization (a LISP program is essentially a tree), we could engineer a system that starts out as a recognizable structure, a graph, but is capable of evolving toward something incomprehensible to human brains once it is freed from the need to keep its activities human-comprehensible. Such a system might be able to reduce some of its structure to something comprehensible, like a LISP program, but not all of it, because along the course of its evolution it might develop processes that cannot be represented in any human-comprehensible form.

Difficult to solve, easy to verify

How can we be sure that we have created a superior intelligence? We have to test it. The best kind of test uses something that is hard to solve, like guessing the hash value of some data stream, but easy to verify: once the hash is guessed, it is easy to confirm that it is the correct one (this is basically what goes on in cryptocurrency mining, as in Bitcoin).

To increase your chances of finding the correct hash, you could pile on more physical hardware and keep guessing at a higher rate. But if we want to test that we have a superior intelligence, it must be able to create some algorithm (program) that consistently finds the correct hash faster than any system we humans are capable of creating, even if we had access to unlimited physical computing resources.
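
A toy version of this asymmetry is easy to write down. The sketch below is my own illustration, in the spirit of proof-of-work mining rather than any particular coin's protocol; the difficulty of five leading zero hex digits and the helper names solve and verify are arbitrary. Solving requires a large number of guesses, while verifying a claimed solution takes a single hash computation.

import hashlib
import itertools

def solve(data: bytes, difficulty: int = 5) -> int:
    # Hard to solve: keep guessing nonces until the hex digest of
    # sha256(data + nonce) starts with `difficulty` zeros.
    target = '0' * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(data: bytes, nonce: int, difficulty: int = 5) -> bool:
    # Easy to verify: a single hash call checks a claimed solution.
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith('0' * difficulty)

if __name__ == '__main__':
    nonce = solve(b'some data stream')          # on the order of 16**5 guesses
    print(nonce, verify(b'some data stream', nonce))

A superior intelligence would have to beat this brute-force loop not by guessing faster on more hardware but by producing a better algorithm, and the cheap verify step is what lets us trust its answer without needing to understand how it was obtained.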

There are many other tests we could design to check whether a machine or artifact possesses an intelligence superior to ours, but the hash test is a very easy one to design.

Why we desperately need AGI

Because AGI is so hard, and because we have so far been getting along fine as a species, it might seem optional, but it isn't. It is incredibly important if we are to survive for long as a species. While we must applaud the efforts of Elon Musk and others to make us interplanetary, and while a second planet like Mars would be a huge bonus, we will eventually need AGI to solve many of the deep problems that plague us as a species.

We should support Elon Musk and give him all the resources he needs for his Mars mission, but eventually the cost of his particular approach to making humanity interplanetary might run over, or it might take too much time and be too risky with the kind of technology we possess now, or any we might create incrementally from it.

AGI might enable us to crack physics and bring about FTL (faster-than-light) or even AL (at-the-speed-of-light) technologies that would make interplanetary travel efficient enough to create a backup civilization.

But what about the Earth? Many reasonable people propose very reasonable ideas like population control, more frugal resource consumption, and so on. But what if AGI could replace many of the inefficient technologies we now rely on with better ones, and thus make the Earth a healthier place?

For example, a new electrical propulsion system, not based on jets or fans, could give us clean, silent aircraft with no pollution. We could have electrical storage beyond lithium batteries to run our cars, faster and more efficient food distribution, and so on.

One might imagine that we do not need AGI for these things, but, for instance, if we are to move to electric cars we might need fundamentally better battery cell technologies, or breakthroughs in wireless electrical distribution, or else we will merely replace pollution from fossil fuels with pollution from depleted lithium batteries.

Rather than worrying about population growth, we could use FTL or AL technologies to transport people to other habitable planets around the galaxy. AGI could help us use materials from the asteroid belt to build planet-sized spaceships that carry large numbers of people safely to the far reaches of our galaxy and other galaxies.

While the life-extending technologies currently under research could lead to fundamental breakthroughs in the near future, augmenting human capabilities with a mind free from the limits of our biology could accelerate these discoveries, enabling us to live as long as possible on an infinitude of planets around the universe.

Just imagine what we have already done by automating our ideas on vanilla modern-day computing, and then imagine what we could do with an intelligence that is not merely the direct product of our hardcoded ideas but something we discovered by searching the wild geography of possible computing structures. We could do the unimaginable.

With these little ideas, I think it is clear why we should desperately chase AGI and not relegate it to the background as a leisure activity.
