
We will ingest computers and run code in our brain

Today I decided to take a look into the future of computing and found some very interesting ideas. With the ongoing miniaturization of computers, there is little doubt that we will soon be able to build atomic-scale devices, devices in which a single component is the size of an atom. When we can engineer at these scales, we will be able to make very powerful computers no larger than a typical biological cell, such as a human cell.
When this happens, we will be able to ingest our computers. What you do on a laptop computer today, you will be able to do internally, with the option to connect to extra capacity in the cloud.

We will use our eyes to scan code and run it in our brains, with the results projected into our own personal augmented reality. Artificial contact lenses could serve as displays, replacing the bulky augmented reality glasses of today. One such contact lens might be all a person needs to watch movies, browse the internet or do any of the computational tasks we currently handle with desktops, laptops and mobile phones. The processor and memory will have been ingested and will travel to our brains, cleverly bypassing the blood-brain barrier to reside in our cortices.

So imagine this scenario: you see a person sitting silently in a lobby, smiling as she watches her stock price rise in a personal display projected directly onto her retina from her lens. You will not see a computer anywhere, yet she is fully interacting with one. Or picture a kid relaxing in his bed, playing a highly immersive video game that fills his entire room, with characters controlled by his thoughts.

You could see new versions of open source software, whose code has been written by artificial intelligences or by humans, advertised on billboards. But unlike the sequential, line-by-line programming languages we are used to, the board might carry some kind of complicated graphical image that looks like artwork but is actually an optimized encoding of a program, one that the ingested computers in our brains, interfacing through our eyes, can run by scanning the image and interpreting its pixels as data and instructions. Come to think of it, the machine instructions of a running program are just 1s and 0s, and if you view them with a function like ArrayPlot[] in the Wolfram Language, provided they are arranged as a 2-dimensional array, they will look like a black-and-white graphic. So imagine that program instructions could be converted into this kind of graphic, scanned by some computer and executed as code.
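The round trip described above is easy to sketch today. Here is a minimal Python illustration (function names are my own invention) that unpacks raw instruction bytes into a 2-D array of 0s and 1s, the kind of binary matrix that ArrayPlot[] would render as black-and-white pixels, and then scans the pixels back into the original bytes:

```python
import numpy as np


def code_to_image(machine_code: bytes, width: int = 8) -> np.ndarray:
    """Unpack raw instruction bytes into a 2-D array of 0s and 1s.

    Each bit becomes one black-or-white "pixel", the way ArrayPlot[]
    in the Wolfram Language renders a binary matrix.
    """
    bits = np.unpackbits(np.frombuffer(machine_code, dtype=np.uint8))
    # Pad with zeros so the bits fill a proper rectangle.
    pad = (-len(bits)) % width
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    return bits.reshape(-1, width)


def image_to_code(image: np.ndarray, length: int) -> bytes:
    """Invert the transform: scan the pixels back into executable bytes."""
    return np.packbits(image.ravel()).tobytes()[:length]


program = bytes([0x48, 0x89, 0xE5, 0xC3])  # arbitrary example bytes
img = code_to_image(program)               # 4 bytes -> 4 rows of 8 pixels
assert image_to_code(img, len(program)) == program
```

The lossless round trip is the whole point: any image produced this way carries exactly the bytes it was built from, so a scanner that can read the pixels can recover and run the program.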

Another way to program such computers would be to design a written natural language, just like English but highly optimized to remove redundancies and make definitions more specific and rigorous. This optimized English could be used to write instructions for such a computer: write the code on a tablet PC, a phone, a monitor, a wall or even in sand, and the inbuilt computer will scan it and execute the instructions. We could even interact with our own thoughts or download programs internally from the cloud. Imagine how cool it would be to read an email directly from our internal computers, either as the thought of reading it, through an audio system that uses the cranial bones to send sound silently to the ears, or visually on our lens-retina screens.
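To make the idea concrete, here is a toy sketch in Python of such an "optimized English". The grammar below is entirely invented for illustration: each sentence has one rigid verb-object form, so a scanning computer can parse it deterministically, with none of the ambiguity of ordinary English:

```python
def run_optimized_english(program: str) -> dict:
    """Execute a tiny, rigidly defined English-like language.

    Supported sentence forms (a hypothetical grammar, for illustration):
        set <name> to <number>
        add <number> to <name>
        show <name>
    """
    memory, output = {}, []
    for line in program.strip().splitlines():
        words = line.lower().split()
        if words[0] == "set" and words[2] == "to":
            memory[words[1]] = int(words[3])          # define a value
        elif words[0] == "add" and words[2] == "to":
            memory[words[3]] += int(words[1])         # update a value
        elif words[0] == "show":
            output.append(f"{words[1]} = {memory[words[1]]}")
        else:
            # Anything outside the rigid grammar is rejected outright.
            raise SyntaxError(f"ambiguous sentence: {line!r}")
    return {"memory": memory, "output": output}


result = run_optimized_english("""
set price to 100
add 25 to price
show price
""")
# result["output"] -> ["price = 125"]
```

Every sentence here reads as plain English, yet each has exactly one meaning, which is the trade the proposed language would make: less expressive freedom in exchange for machine-executable rigor.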

The possibilities seem endless, but with this new system we will face enormous security risks, with hackers writing and displaying malicious graphics in public places, on boards or just around the street corner. New classes of security software will try to catch such malicious code, and some places may become no-go areas because the density of malicious code-graphics there makes it better to stay away. There could also be a market for shades that completely block and filter any graphic from the environment so that one can use one's lenses safely; the implications go on and on.
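One plausible shape for that security software is a signing check: the ingested computer refuses to execute any scanned code-graphic unless the decoded bytes carry a valid tag from a trusted publisher. This is a hypothetical sketch using a standard HMAC, with the shared key and all names invented for illustration:

```python
import hashlib
import hmac

# Assumed to be pre-provisioned between the publisher and the
# ingested computer; purely hypothetical.
TRUSTED_KEY = b"publisher-shared-secret"


def sign_code(code: bytes, key: bytes = TRUSTED_KEY) -> bytes:
    """Publisher side: tag the code bytes with an HMAC-SHA256 signature."""
    return hmac.new(key, code, hashlib.sha256).digest()


def safe_execute(code: bytes, tag: bytes, key: bytes = TRUSTED_KEY) -> str:
    """Run scanned bytes only if the tag verifies; otherwise reject."""
    if not hmac.compare_digest(sign_code(code, key), tag):
        return "rejected: untrusted code-graphic"
    return "executed"  # placeholder for handing the bytes to the runtime


scanned = bytes([0x48, 0x89])                       # bytes read off a billboard
assert safe_execute(scanned, sign_code(scanned)) == "executed"
assert safe_execute(scanned, b"\x00" * 32).startswith("rejected")
```

A malicious graphic painted on a street corner would decode to bytes without a valid tag, so the check above drops it before execution, which is exactly the filtering role the text assigns to this new class of security software.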

It is hard to argue that the trend towards greater miniaturization will not continue for some time. Even though I have been hearing news of big chip manufacturers abandoning their smaller-scale device projects, and the industry is currently focused on massively multicore systems, I expect a manufacturing breakthrough in the near future to bring smaller-nanometer systems back to the drawing board, and we will use the experience garnered from massively multicore systems to build extremely powerful computers with ever smaller components. In the next 20 years, we will be able to put the computational and memory power of a 2016 datacenter in someone's head or body.
