Software of the future

From the appearance of the first programmable computer, software has evolved in power and complexity. We went from manually toggling electronic switches to loading punch cards and eventually entering C code on the early PDP computers. Video screens arrived and greatly empowered the programmer: rather than waiting for printouts to debug our programs, we could manipulate code directly on the screen.

Every new age in software development has further empowered programmers to create programs that let us simulate all kinds of systems and do all sorts of practical things. As screens became better and computers increased in power, we worked hard on improving user interfaces, which have been the dominant theme of the current era.

The kinds of programs we write have also changed significantly with improvements in computer technology. Before the GUI days we could only write programs that worked from a command-line terminal. In the GUI days, apart from writing code that solves our problems, the so-called business logic, we also had to write code that presents some kind of interface to the user interacting with the program.

But the goal of software has always stayed the same, even as we added layers of complexity that eased our expression of programs. Apart from writing software to solve problems from the outside world, we also see software written for the sole purpose of solving software-centric problems: debuggers, profilers, unit-testing tools, etc.

In the next decade, post-2020, we will see a mild revolution in software brought about by the appearance of a new class of computer hardware. We will have to rewrite almost all the useful software of the past decade to be compatible with this new kind of hardware.

The job of the software engineer in the next decade will be one of porting old software to the new platform. This has occurred in the past, when we had to port most of the programs written for mainframe computers of widely varying designs onto the dominant architecture of the decade, the x86-64 platform.

With the rise of RISC-based computers, we have had to port some of the x86-64 code to RISC-based architectures. But the porting we will do in the coming decade will be of a fundamentally different kind, because the foundations of computing will be shaken by the arrival of very different modes of expressing computation in matter.

Apart from fundamental changes in the way we design hardware at the level of physical components, such as multistate logic and resistive memories, computer architecture will see dramatic changes that go beyond mere iterative improvements. Paradigm changes in both architecture and physical components will give us better and more efficient ways of using matter to compute.

Current quantum computers are just a sign of the changes ahead. While the supercooled approaches to quantum computing led by Google and IBM will continue along the iterative path of improvement by adding more qubits, the kinds of hardware that are coming might not even require using individual atoms. And while I am not ruling out room-temperature atomic (single-atom gate) computers, what is coming is about rethinking the fundamentals of computing, and such ideas will lead to vastly more efficient ways of using matter to compute. These new ideas could be implemented at sub-14 nm scales or even at the current 22 nm node, yet still be faster and more energy efficient than current approaches.

One way to view the coming innovations is that we will find ways of computing with matter that do not require imposing too many rigid structures on it, instead exploiting amorphous states that are not regular. Silicon might fall out of favour as engineered metamaterials without high-purity requirements are used to build cheap devices using completely new methods and new understanding. All of this will force a fundamental rethink of software, and we will have to port a large share of the useful software systems we have created in this decade.

Initially these new systems will be niche, used only in private research concerns, but their performance advantage over regular systems will be so great that it will lead to mass adoption, and eventually the great porting will start.

I am not talking about FPGAs, ASICs like TPUs, and the like. I am talking about a new understanding of fundamental solid-state physics that is going to give birth to a whole new class of devices with superior computing power and memory capacity; we must also bear in mind that architectural paradigm shifts will occur as well, so fundamental as to warrant the complete transformation of the hardware and software landscape. The best way to picture this coming transformation is to remember the end of mainframes and the birth of personal computing. Eventually, personal computers were stacked into a modern iteration of mainframe computing called warehouse-scale computers, the kind that enable Amazon, Google and the rest to provide cloud computing services.

Most of these computers will be made redundant by this new paradigm of hardware and computing.

Beyond 2030, miniaturization will make computers ubiquitous, and an even more fundamental change will come to software, more fundamental than the porting that will take place in the 2020s. The evolution of software has been one of representing the real world as an abstraction on a machine, and we have learnt how to create better and better software, each iteration more resilient than the last.

This iterative development of software, together with the current blossoming of machine learning, will enable us to create robust software capable of doing things we couldn't do with hand coding alone. But all these developments are just a test bed for a time when we will write software that interacts more directly with nature than the on-chip simulations we currently run.

Inspired by DNA from the natural world, we will be creating software that compiles into a DNA-like sequence in the years post-2030. The whole process of programming will be similar to what we have always done: we will write code aided by huge stacks of synthetic intelligence, but the goal of creating software will change dramatically. Of course, there will still be stacks of traditional software systems lying around.

While we currently create software mostly to control systems and provide functionality to users, in that time we will create software that can sequence atoms into artifacts of a complexity beyond anything we can design with current engineering. We will program entire buildings into being, along with organic plants and animals, new kinds of devices and all kinds of artifacts. This progress will be powered by our newfound ability to stably manipulate and control matter at the atomic level.

The programmer of the future won't spend much time maintaining software systems like web browsers and operating systems. With the birth of meta-programming, synthetic intelligence will do software engineering better than humans; it will just be a matter of specifying the goals we seek to achieve, and the ambient stack of technology will make such software systems possible.

The programmer of the future will spend more time contemplating goals than worrying about how to achieve them. And much of this will not be about how to control virtual worlds but how to control the physical world in a very direct manner. The programmer of the future will be rewarded for their ability to build systems with the longest-range advantages for humanity or for the particular organization paying them. It will no longer be about how to efficiently read and write large files or control systems; it will be much more about setting up programs that control and create artifacts that produce the best outcomes over the longest possible time frames.

In a sense, one can see all the current advancement in software as a kind of preparation for the time when we will use software to manipulate nature directly through minimal buffers.

Does this mean we will never simulate virtual things again and will instead express everything physically? No! Of course, we will still have to simulate things virtually for many other reasons. The performance of the software will be taken as a given, just as when coding in Python you don't really think about all the layers of code generated to make your program run on a computer; you just focus on writing it. We will focus more on defining our goals, that is, what we really want to achieve, and take the means of achieving them for granted. Virtual simulation of our results will exist only to streamline our decision making.
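As a loose illustration of that Python analogy (a small sketch of my own, not a claim about these future systems): Python's standard dis module can show the bytecode the interpreter generates for a simple function, one of the many hidden layers we happily ignore while stating our intent at a high level. The function and values here are made up purely for illustration.

    import dis

    def total_price(prices, tax_rate=0.08):
        # High-level intent only: sum the prices and apply tax.
        # We never think about the bytecode, interpreter internals,
        # or machine code that actually make this run.
        return sum(prices) * (1 + tax_rate)

    print(total_price([10.0, 24.5, 3.25]))  # ordinary use: just state the goal
    dis.dis(total_price)                    # peek at one layer we normally ignore

The point is not the bytecode itself but the habit it demonstrates: the lower layers are trusted and invisible, which is how the means of achieving goals may come to feel in the scenario described above.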

The simulations will be more about the possible outcomes of a given set of software goals. Bugs will no longer be cases of poor programming but goals that were not broad enough to capture complex transformations of the field of deployment over a long period of time.

Programming will remain an intense intellectual activity, even more intense than the rather mechanical kind we have now. We will have solid synthetic intelligence and very robust hardware, but we will not have a cut-and-dried way of solving human problems, which will shift in priority from the merely material to the social.

A programmer will not just write software to control computers but to shape societal outcomes. It's hard to picture in detail what this future scenario will be like. Yes, we will write code that compiles to new kinds of plants that can grow into food we can eat. We will write code that compiles to medical procedures for handling ailments in a very customized manner, but there will also be larger-scale software that determines outcomes in settings far broader than individual artifacts. Nations will run on operating systems, and the kinds of programs these nations write will determine their futures in very direct ways.
