
Amorphous Computing

[Image: cellular solids. Source: edx.com]
If there is a major defining feature of our current microprocessor engineering capabilities, it is that building these chips requires a great deal of transformation of matter: from a rough amorphous state full of impurities to a crystalline state of extraordinary purity (electronic-grade silicon is refined to better than 99.9999% pure).


Building computer chips is a difficult and expensive endeavour. It is also as delicate as brain surgery, because slight imperfections in the final product can lead to devastating losses. Purifying the raw material is a major investment, because impurities cannot be tolerated in the final product.

Look into the future at the enormous computing needs that lie ahead: instantly sequencing the human genome on handheld devices, simulating the cellular biochemistry of every cell in the human body in full detail, and the general need to do more computation with less and less matter, driven by new computational demands in energy research, particle physics, space travel, and so on. We realize that we not only need to build more devices at lower cost, but that we must build them faster and deploy them to very hostile environments that are not friendly to our pristine, crystalline engineering methods.

The need to deploy computation everywhere, on everything, with the highest degree of miniaturization possible, will mean that we deviate from what I call our crystalline engineering practices towards more amorphous systems that have a high tolerance for errors and impurities.
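The post does not prescribe a mechanism for this error tolerance, but one classic way to compute reliably on unreliable parts is redundancy with majority voting (triple modular redundancy generalized to many copies). The sketch below is purely illustrative: the gate, error rate, and copy count are all invented for the example.

```python
import random

def faulty_gate(x, y, error_rate, rng):
    """An AND gate that flips its output with probability error_rate."""
    out = x & y
    if rng.random() < error_rate:
        out ^= 1
    return out

def redundant_and(x, y, copies, error_rate, rng):
    """Run many unreliable copies of the gate and take a majority vote."""
    votes = sum(faulty_gate(x, y, error_rate, rng) for _ in range(copies))
    return 1 if votes * 2 > copies else 0

rng = random.Random(0)
trials = 10_000
single_errors = sum(faulty_gate(1, 1, 0.1, rng) != 1 for _ in range(trials))
voted_errors = sum(redundant_and(1, 1, 25, 0.1, rng) != 1 for _ in range(trials))
print(single_errors / trials)  # roughly the raw 10% error rate
print(voted_errors / trials)   # far smaller after majority voting
```

The point of the sketch is the asymmetry: a 10% error rate per gate becomes a vanishingly small error rate for the voted ensemble, which is the kind of trade an impurity-tolerant substrate can afford to make.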

The vision is that by 2100 we will have computers everywhere, doing things we cannot even contemplate at present, just as in 1940 nobody knew that cloud computing was going to be a thing.

But why do we need to pull that future closer? If we are going to have nanoscale amorphous computing dust everywhere in the future, why do we need to start working on such technologies now? I would say it is for the sake of our environment.

The sheer amount of energy consumed in processor manufacturing, the low yield of finished product, and the large amount of waste produced by rejecting processors with even slight imperfections are not a sustainable way to build the future society of ubiquitous computing.

Even if we start building multistate systems that go beyond the binary paradigm of computing, which will help reduce the volume of matter we need for computing, the delicate, expensive process of making chips will still be a major source of energy wastage. We need less fragile systems if we are going to meet the demands of the future.
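Why would multistate elements reduce the volume of matter? Because the number of elements needed to distinguish N states falls with the number of states each element can hold: it is the ceiling of the base-b logarithm of N. A small sketch (the function name and values are just for illustration):

```python
import math

def digits_needed(n_values, base):
    """Elements needed to distinguish n_values states, at `base` states per element."""
    return math.ceil(math.log(n_values, base))

# Storing one of a million distinct values:
print(digits_needed(1_000_000, 2))  # 20 binary cells
print(digits_needed(1_000_000, 3))  # 13 ternary cells
```

Moving from two states to three per element already cuts the element count by about a third, which is the "less matter" argument in miniature.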

Amorphous computing, the ability to build computing substrates that are more tolerant of noise and impurities, will become a reality in the future. At its high point it will enable us to use existing systems as they are for computation rather than engineering systems ourselves. Any system in nature that is amenable to definite alteration of its structure could be a computational substrate: the ocean, the biology of plants, and so on.
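A flavour of how computation can emerge from such a substrate comes from the amorphous computing literature (Abelson and colleagues): scatter many cheap, unreliable particles at random, let each talk only to nearby neighbours, and have simple local rules produce a global result. The toy simulation below computes a hop-count gradient from one source particle; every name, constant, and the 10% failure rate are assumptions of the sketch, not anything from the post.

```python
import random

# Scatter unreliable "particles" at random positions in a unit square;
# each can only talk to neighbours within a short radius, and some are dead.
rng = random.Random(42)
particles = [(rng.random(), rng.random()) for _ in range(400)]
alive = [rng.random() > 0.1 for _ in particles]  # ~10% faulty, simply ignored
RADIUS = 0.12

def neighbours(i):
    xi, yi = particles[i]
    for j, (xj, yj) in enumerate(particles):
        if j != i and alive[j] and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIUS ** 2:
            yield j

# Hop-count gradient: the source holds 0, everyone else relays (min neighbour + 1).
INF = float("inf")
hops = [INF] * len(particles)
source = next(i for i in range(len(particles)) if alive[i])
hops[source] = 0
changed = True
while changed:  # repeat purely local updates until the gradient settles
    changed = False
    for i in range(len(particles)):
        if not alive[i]:
            continue
        best = min((hops[j] + 1 for j in neighbours(i)), default=INF)
        if best < hops[i]:
            hops[i] = best
            changed = True

reached = sum(1 for i, h in enumerate(hops) if alive[i] and h < INF)
print(reached)  # most live particles learn their distance to the source
```

No particle knows the global layout, a tenth of them are broken, and yet the ensemble still computes a useful global quantity; that robustness-from-locality is the core bet of amorphous computing.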

In the future we might not have many physical devices engineered by humans; instead, your house plant could be a server. This might look like science fiction, but it will become a reality as our environment fills up with computers and waste, and as we seek to keep expanding our computational capabilities as a society while reducing the number of energy-guzzling physical devices we need.

Before we scale up to fully amorphous systems, i.e. using matter as we find it in nature, or human artifacts, as computational substrates without having to over-engineer them, we will go through a stage where we will be able to build computational devices incrementally rather than reductively, as is currently done.

In clearer language: sometime around 2040 we will grow our computational devices from single atoms rather than take a piece of material and hammer it into shape, as is currently done in silicon fabs. Once we are able to grow computational substrates, we will start exploring dynamic computational substrates, that is, the FPGA of growable computing.

The first computational substrates we grow will be fixed and specialized, but as we gain more understanding we will start building substrates that can change as they are used rather than stay fixed. This will move computation from hardware toward wetware, because wet stuff is more amenable to variability.

The early wetware will follow the purity requirements we have come to know and obey when building computational substrates, but as we progress we will start building systems that are more tolerant of noise and impurities. This will set the stage for the far future, where we will be able to use anything we can find in nature as a computational substrate.
