Most current Artificial Intelligence efforts are geared toward replicating aspects of intelligence that we humans share with animals.
Most AI tasks, like image and speech recognition or even robot planning, are aspects of intelligence we hold in common with other animals. Animals can recognize images and hear sounds just as we do, so building machines that can see and hear is replicating aspects of intelligence we share.
Source: https://neuwritesd.org/2015/10/22/deep-neural-networks-help-us-read-your-mind/
With language translation we are stepping a level higher, but in fact animals can understand aspects of human language too. Dogs can be trained to follow spoken commands, and with some effort other animals like bears and chimps can learn aspects of human language as well.
What about games? Much of the current attention on game-playing AI comes from the idea that games exercise the most sophisticated aspects of intelligence, and thus that if we can create AI that plays games we hold some kind of key to more general intelligence.
Animals can be trained to play simple games much as humans can, and even though animals might not be very sophisticated at playing human-defined games, the animal world contains some complicated games, like stalking prey, with enough variability that they would also be difficult for a human to play or for anyone to program a robot to play.
If language is the epitome of human intelligence, then we must consider the fact that dolphins possess a very sophisticated system of communicating among themselves.
What about creativity? While certain works of art are strictly within the human domain, if we take creativity to mean the ability to create things, then the beaver is an excellent creator.
The aspect of intelligence that might be strictly human is our ability to reason, a term I use for want of a better word. Sophisticated behaviour like stalking prey might look as if it involves reasoning, but it actually doesn't; it can be likened to any kind of game that can be solved with existing optimization techniques, including neural networks.
Reasoning, in my view, is an abstract faculty that enables humans to model systems at varying levels of abstraction without ever having to justify those models with physical realizations.
In stalking prey, which I consider a very sophisticated game-like activity carried out by non-human animals, the animal plays the role of an agent in a typical reinforcement learning (RL) scenario. Because of the variability of the environment, the stalking game cannot be modelled successfully with the kind of software engines that play Go, such as AlphaGo, but it remains within the domain of RL, albeit one that would be very expensive to model; a rough sketch of this agent-environment framing follows below.
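To make the framing concrete, here is a minimal sketch of the agent-environment loop in Python. StalkingEnv and random_policy are hypothetical stand-ins invented purely for illustration, not a real predator model; the point is only that the stalking game fits the standard observe-act-reward structure of RL.

```python
import random

class StalkingEnv:
    """Toy stand-in for the highly variable stalking environment (hypothetical)."""

    def reset(self):
        # Initial observation the agent (the stalking animal) receives.
        return {"prey_distance": 10.0, "cover": 1.0}

    def step(self, action):
        # A real model would simulate prey movement, terrain, wind, and so on.
        observation = {"prey_distance": 9.0, "cover": 0.8}
        reward = 1.0 if action == "pounce" else -0.01  # sparse payoff, small time cost
        done = action == "pounce"
        return observation, reward, done

def random_policy(observation):
    # Placeholder policy; learning would replace this with behaviour shaped by reward.
    return random.choice(["creep", "freeze", "pounce"])

env = StalkingEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = random_policy(obs)            # the agent chooses an action
    obs, reward, done = env.step(action)   # the environment responds
    total_reward += reward                 # RL maximizes this cumulative reward
print("episode return:", total_reward)
```

The expense I mention above comes from how rich the observation and transition model would have to be to capture a real hunt, not from the loop itself, which stays this simple.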
Reasoning is too free-form to be captured easily within many existing optimization models, but I don't think it is beyond modelling. We just need to capture the right primitives, and then we could pull out a network-based model that does abstract reasoning the way humans do.
In the beginning, the systems we create might reason foolishly, but with time they will become more sophisticated as some RL-like reward system is used to ascribe scores to good and bad performance.
In humans, reasoning takes place in the neocortex and underlies our decision-making system, which pools data from many sensory processors and creates an abstract plan that carries its agenda forward. The next phase of AI would be a kind of rejuvenation of the lost GOFAI (good old-fashioned AI) art of building reasoning systems.
The beauty of it all is that the aspects of intelligence we share with animals and those that are strictly human are implemented on the same kind of hardware: neurons and their connections to one another. If vision is implemented with neurons in the brain, then maybe reasoning is also implemented with neurons.
In the GOFAI days, people tried to build reasoning systems without the inspiration that comes from neural networks, and for the most part they failed to get those systems to reason. But the success of neural networks on so many aspects of intelligence gives us a new direction to chase. The only advice I have is that we should stay open to newer kinds of network architectures that might differ from today's deep neural networks.
A wonderful analogy is CNNs (Convolutional Neural Networks), which perform better than vanilla FCNs (Fully Connected Networks) at image recognition tasks. This shows that there could be better network paradigms still to be found, ones that will enable us to build reasoning machines based on neural networks; a small sketch of the contrast follows below.
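As a rough illustration of how much the architectural prior matters, here is a small sketch comparing a fully connected network and a convolutional network on a 28x28 grayscale input (an MNIST-sized image chosen only for illustration). It assumes PyTorch is available; the layer sizes are arbitrary, and the point is the difference in structure and parameter count, not accuracy.

```python
import torch.nn as nn

# Fully connected network: every pixel connects to every hidden unit, no spatial prior.
fcn = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Convolutional network: small, weight-shared filters slid across the image.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),
)

def count_params(model):
    return sum(p.numel() for p in model.parameters())

print("FCN parameters:", count_params(fcn))  # roughly 200k independent weights
print("CNN parameters:", count_params(cnn))  # far fewer, reused across the image
```

The convolutional prior simply encodes the fact that nearby pixels are related; a reasoning architecture would presumably need a different structural prior, which is exactly the kind of openness to new network designs argued for above.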
I talk about the need for more open-mindedness in AI research in my book, What is Intelligence, available on Amazon. This is a blog post where I discuss the book.