Brooklyn Bridge Styled by deepart.io
THE AI ARTIST
When AI researchers try to train a model to create art, for example visual art in the form of painting, they typically train the model on all available images of artworks in order to produce new art. Art produced in this manner is of very low quality and, despite the hype, appears meaningless to many true artists. Does this mean that AI will never be able to do art? Not at all. AI will eventually be able to create true works of art; we just have to understand more of what goes on in the mind of the typical human being, not just artists, and then we will be able to build systems that create art.
Artwork like painting is not just a meaningless slapdash of brushstrokes intended to be purely visually appealing or stimulating. It is more of a symbolic system of communication in which the artist attempts to convey to the audience a particular internal state, one that is the product of the artist's total life experience. Style transfer is cool and visually appealing, but what is being communicated?
If this communication is successful, the artist rejoices. But most of the time, especially when the artist is not alive to "explain" their work, the deepest meaning of the piece remains unfathomable and, despite all the curious explanations of admirers, will remain locked away for all time.
This doesn't just apply to visual art but to all kinds of art, from music to writing to dancing. Art is a form of communication; it is intended to communicate that which is not obvious or easily expressed in traditional, standardized forms of symbology.
Using all the great works of art to train an AI system to produce great art is the wrong way to go about things. Art is much more subtle than that. We must keep in mind that when we train an AI system, we are passing in information so that we can extract some kind of representation from it. If we are dealing with a generative system, then the system will only produce art based on what it has learnt.
The visual artworks we load into AI systems only show the AI how to paint things that look like art to humans; they teach it nothing about the underlying representations that humans are trying to express as art.
First of all, a human artist is trying to create something based on everything they have learnt. Every single detail of the human experience is represented in their art. What makes a work look special, even when it deals with some specific aspect of the artist's point of view, is that the human draws on their entire knowledge base as the deepest level of the underlying expression and gradually narrows it down to the current scenario.
So trying to copy human art by making AI systems view art and then create art only copies the mechanistic aspect of human artistic expression, and such art will contain nothing of the genuine emotion felt when observing a true work of art.
When a human observes some work of art, something is going on inside them all the time. They are, of course, trying to obtain an internal representation of what they are seeing, and the deeper the networks used internally to abstract the visuals into the core essence of the art's meaning, the higher the interpretation of the art.
Do not take my use of the phrase "depth of the networks" to imply that the human brain uses layers like those in a typical artificial neural network; this is not the case. The hierarchical ordering of information has nothing to do with the physical structure of the brain. Rather, the human mind, as the software of the brain, uses the basically flat architecture of the neocortex (or wherever) to represent a hierarchical structure of representations, from the basic level of raw information input up to the most abstract idea possible.
The height of this hierarchy that a human is conscious of varies from individual to individual, and so our depth of understanding also varies. To one person a flower is just a flower, but to another a flower contains the entire universe in a single image.
This ability to observe and understand at different levels affects our appreciation of art, as do our environmental influences. While a scientific formula is a fact no matter how it is represented, a piece of art will be understood differently by different individuals, because the entire history of a person's knowledge comes into play when they try to get at the meaning of the artwork they are observing.
If someone has been trained in art appreciation, then that individual will see things in a work that an untrained individual will not, so there is no such thing as universal communication in art.
To some, the sight of a beach evokes the highest form of artistic appreciation, while for others only the most abstract scribble of colour on canvas evokes anything. Creating the AI artist is therefore a daunting task because there is so much divergence of interpretation to deal with, but it will be possible for AI to create genuine art if we train the AI agent on more than what we consider art. We should make the AI gain some kind of self-centred experience of the world so that it can start forming its own idiosyncratic interpretations. It is from these idiosyncrasies that truly individual expression can come about, and thus we would have a system capable of expressing some captured internal state to the world; that is true artistic expression.
We have to understand that the human being represents all its input in a single unified format. You might ask how I know that it is unified. Well, the architecture of the brain is the same from human to human. Certain variables change, from how the neurons are connected to the health of the particular brain in question, but the fact that we all have neurons is testament to the fact that at a basic level we are all running the same hardware.
The software, which is the mind, might differ. It is clear that a pure mathematician and a beach bum are running radically different software even though the underlying hardware architecture is the same, with some aspects more developed in the mathematician than in the bum. The bum has their own special developments too.
But the beach bum and the pure mathematician are both using neurons, synapses, dendrites and neurotransmitters, so at a certain level they are equivalent systems.
Even though the core software programs running in the two individuals might be radically different in certain areas, some aspects are fundamental and equivalent in both of them: core aspects like the basic pattern-recognition/action-generation system.
The mathematician might have a powerful ability to see patterns in mathematical equations thanks to long training and a huge database of mathematical representations. The beach bum might have a greater ability to socialize on the beach and be more adept at beach life. Neither is greater than the other; it is just a matter of perspective.
In any human, whether beach bum, mathematician, classical pianist, etc., all the sensory input systems represent their data in a unified format, enabling an individual to translate any kind of sense-data representation into any other and thus express concepts that do not have their source explicitly in a particular sense.
Let me explain this further. When we hear some music, we extract information from it and store an internal representation of its features. We do this unconsciously all the time; when we are consciously learning we are only paying more attention to the information being observed, but we are always absorbing information and extracting features.
Once we have an internal representation of this music, there are certain things we can do with it. If we have no musical inclination, we could just try to sing our favourite tunes whenever we feel like it. This is usually from memory; we try to reproduce the information we observed in the musical track as it is.
But when we want to be creative and compose a new musical track, we are, to some extent, combining all the internal representations we have extracted from all the sounds we have ever listened to. This is how we currently think of the AI artist: we think that to create an AI artist we first have to train it on everything we call music, and then it will create music. But this is not where it ends for human creators.
Humans, however, can transfer even non-musical representations into music, because at the base level all the information we receive is represented in the same format, as pure pattern structures. So we can literally sonify the information obtained from a day at the beach, not by directly turning pixel data into sound waves, but by obtaining some deep, general level of representation and then transferring it into our audio-generation system.
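To make this cross-modal idea concrete, here is a minimal toy sketch of the shape such a system might take: features from one modality (vision) are projected into a shared, modality-neutral "pattern" space, and that shared representation is then decoded into another modality (a pitch sequence). Everything here, the weights, the dimensions, the pitch range, is an illustrative stand-in; in a real system the encoder and decoder would be learned, not random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in weights for what would, in a real system, be learned mappings.
W_visual = rng.standard_normal((32, 16))  # vision encoder -> shared space
W_audio = rng.standard_normal((16, 8))    # shared space -> audio decoder

def sonify(visual_features):
    """Map a 32-dim visual feature vector to 8 MIDI-like pitches."""
    shared = np.tanh(visual_features @ W_visual)  # modality-neutral pattern
    raw = shared @ W_audio                        # decode into audio space
    # Squash into a playable MIDI pitch range (48..84, i.e. C3..C6).
    return 48 + np.round(36 * (np.tanh(raw) + 1) / 2).astype(int)

scene = rng.standard_normal(32)  # stand-in for "a day at the beach"
print(sonify(scene))             # eight pitches derived from the scene
```

The point of the sketch is only the topology: nothing about the audio output was learned from audio data; it is entirely driven by the shared representation extracted from the visual input.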
Something like this can be approximated using transfer learning in neural networks. What we store after a network has converged is just its weights: numbers representing the features of the data the network was trained on. In transfer learning, we take that trained network, cut off some of its layers, and stitch on a new set of layers to produce a different output.
As an example, a neural network trained on 10,000 classes of images, which probably do not include geographic sites, is taken and stitched to an output system capable of producing probabilities over geographic sites. Then we pass in an image such as a picture of the Great Pyramid, and with specific logic for identifying places by GPS, our system pinpoints the location by providing its coordinates. These and many other tasks are now possible with transfer learning.
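The cut-and-stitch procedure described above can be sketched in a few lines. This is a deliberately tiny numpy illustration, not a real vision model: the "pretrained base" is a single frozen random layer standing in for a convolutional feature extractor, and the new head is a small softmax layer for a hypothetical five-class geographic-site task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" base: stand-in for the feature-extracting layers of a
# network trained on a large image dataset (e.g. 10,000 classes).
W_base = rng.standard_normal((64, 128))

def extract_features(x):
    """Frozen base: raw input vector -> 128-dim feature vector (ReLU)."""
    return np.maximum(0.0, x @ W_base)

# New task-specific head, stitched on and trained from scratch for the
# new task (here: 5 hypothetical geographic-site classes).
W_head = rng.standard_normal((128, 5)) * 0.01

def predict(x):
    """Stitched model: frozen base + new softmax head -> class probabilities."""
    logits = extract_features(x) @ W_head
    exp = np.exp(logits - logits.max())  # shift for numerical stability
    return exp / exp.sum()

x = rng.standard_normal(64)  # one stand-in input image, flattened
probs = predict(x)
print(probs.shape)  # (5,) -- probabilities over the new task's classes
```

During fine-tuning, only `W_head` would receive gradient updates; `W_base` stays frozen, which is exactly why the features learned on the original dataset transfer to the new task.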
The kind of transfer learning a human is capable of is far more advanced than this simple image-to-image system. A human can take representations learnt from images, sounds, even tastes and other senses, and combine them in such a way that they can be expressed through the mechanical movements of the body in a dance.
As a first step towards AI producing good works of art that evoke some kind of deep response in a human being, the AI has to go beyond training on the exact kind of art it wants to create. If it is to create visual art, it has to go beyond being trained on pictures alone to being trained on all kinds of data, have the different representations it obtains combined in a unified form, and then express them in a particular medium.
But I doubt this method will produce art with the kind of emotional quality that humans are able to produce. It is a step beyond the current approach but far from ideal. Not until we have AI autonomously navigating our environments, both physical and virtual, and until those AIs can actually have a "personal" experience, can we hope to have AI communicate through truly artistic expression.