
What is Intelligence: Patterns


What one person calls a pattern can be very different from what another person does. Can we arrive at some general idea of what a pattern is? In this section, I attempt to do so in plain words.

When we say we see a pattern, we are most likely talking about the macro-world of everyday activity. There is lots of stuff around, and when we pay attention to one thing we might say: I saw that thing the other day, and now I have seen it two more times, so I guess it's a "pattern". In this sense, we are describing a pattern as a kind of repetition of sensation. For most people, it takes about three observations to decide that they have seen a pattern and can now "predict" that a fourth observation is likely to follow soon.

Now, how are we able to differentiate one event from another? We do this with the facility we call consciousness, which at its root is just the basic ability to differentiate between impulses hitting the nervous system. It is from this simple capacity to detect gradients that we derive higher systems of thought. If we were looking for a low-level primitive mental operation, akin to the simple logic going on in the ALU of a computer, this would be it: differentiating among finer and finer gradients of impulses.

At a macro scale, we have full-blown "pattern" recognition, which can differentiate an apple from an orange or a cat. The kind of pattern recognition I described earlier, which depends on identifying repetition, like seeing the same cat in different places, is what we usually mean when we say we see a pattern. Nobody pays special attention to most of the patterns we recognize in the environment unless there is some novelty; then we pay attention. But for computers, merely identifying objects is a big deal, so when computer scientists and other technical people talk about pattern recognition, they mean actually identifying the existence of certain objects, an ability we take for granted as humans.

For the financial analyst, a "pattern" might mean something else. Looking at trading charts and spotting patterns of price movements is usually about detecting situations that have come up in the past, after which certain highly predictable things occurred. This is the human grade of pattern recognition: the analyst is trying to identify the repetition of a pattern already recognized in the past, just as seeing some strange cat multiple times alerts your attention.

For a biologist, a pattern might be something different again: perhaps the structure of a cell that has been identified and memorized and is now used to infer what some new observation might be. A physicist probing the visualization, or listening to the sonification, of data from a particle accelerator is looking for patterns, most of which she has identified before, or which look anomalous and are thus worthy of attention.

Pattern recognition can also search for novelty or anomalies in the environment; this is how new stuff gets identified out of the sea of already-known stuff. The physicist checking her accelerator data sees patterns of particle formation that she knows of, but most importantly she is looking out for anomalies! These anomalies are what I call novelty. This is the most defining feature of human pattern recognition.

Pattern vs. Noise

In the data-centric world, we are trying to identify patterns in data. I call this data-centric world the microworld, not because of visual scale but because it is a dimension of pattern recognition that is not found in "nature". It is a world of artificial pattern recognition created solely out of human activity. The human brain can identify useful objects, sounds, tastes, etc., but when it comes to numerical data, the human mind has no direct way to perceive patterns except for naive things like the repetition of the same number symbol. This is not always easy, and the entire field of data science and analysis is dedicated to finding systems that extract some kind of useful information from numerical data, regarding the rest as noise.

So what can we really say is a pattern in the numerical data we deal with? When faced with data, we can choose to do some statistics; this is essentially producing summaries of the data. Statistics produces certain numbers out of some underlying data that give us an idea of what is going on with the data as a whole. It is very useful and is widely applied across the sciences to help researchers make sense of the enormous mountains of data they face in their work.

Statistics is not really an explicit form of extracting patterns from data; it is a kind of blind algorithm that just munches the data and tells us certain things about it. We pass the data through some equational filters and, boom, out come answers and thus insight.
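A minimal sketch of this "blind munching" in Python, using the standard library's `statistics` module (the data values here are made up purely for illustration):

```python
import statistics

# A small, made-up numerical data set: repeated measurements of some quantity.
data = [2.1, 2.4, 1.9, 2.2, 8.7, 2.0, 2.3]

# The "equational filters": each summary statistic reduces the whole
# data set to a single descriptive number.
mean = statistics.mean(data)
median = statistics.median(data)
stdev = statistics.stdev(data)

print(mean, median, stdev)
```

Notice that the median barely moves despite the outlier 8.7, while the mean is dragged upward; that gap between the two summaries is itself a small insight the raw list of numbers does not show at a glance.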

Beyond feeding in numbers and getting out numbers like the standard deviation or variance, sometimes it makes sense to visualize the data. As Stephen Wolfram says in the chapter on Processes of Perception and Analysis in his book A New Kind of Science, sometimes the best way to spot patterns in data is just to look at some visualization of it. So with statistics we can get lots of numbers out of numbers, but we can also make some plots and see what is going on in a visual sense.
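Even a crude text "plot" illustrates the point. This toy sketch (the data is again invented) turns each value into a bar of characters, so the eye immediately picks up a rise-and-fall shape that the bare list of numbers hides:

```python
# A tiny text-based bar chart: each row's length encodes one value.
data = [1, 3, 7, 4, 2, 8, 5, 2]
bars = ["#" * value for value in data]
print("\n".join(bars))
```

The same numbers passed through `statistics.stdev` would tell you the spread, but only the picture shows you the shape.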

For the most part, humans are visual animals. Even the shapes of the numerals are visual patterns. While sophisticated mathematics might be going on when raw numerical data is transformed into more comprehensible numerical data, the human mind is basically watching some shapes (numbers) being transformed by some process into other shapes (numbers)!

Plotting systems are another kind of machine: they take input in the form of numerical sequences and turn it into a visual presentation of lines, points, or whatever shapes help us make sense of the underlying data.


One would assume that randomness is the total absence of patterns, but this is false, because if we can identify that something is random, then we have identified some kind of pattern in our environment: a random pattern. To certify something as random, it has to pass what are called statistical tests for randomness.

If you had some stream of data, numerical in nature of course, then to certify that the data was random you would have to pass it through some kind of statistical test. Naturally, we humans will say that something is random if, on cursory examination, we cannot find any pattern.
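To make "statistical test for randomness" concrete, here is a deliberately crude sketch of the simplest such test, a frequency (monobit) check on a bit stream: in a random stream, 0s and 1s should appear in roughly equal proportion. Real test batteries (such as the NIST suite) apply many tests like this, since a stream like 0,1,0,1,... passes this one while being obviously non-random:

```python
from collections import Counter

def monobit_test(bits, tolerance=0.05):
    """Crude frequency test: pass if the proportion of 1s
    is within `tolerance` of 0.5."""
    counts = Counter(bits)
    proportion_ones = counts[1] / len(bits)
    return abs(proportion_ones - 0.5) <= tolerance

# An obviously non-random stream fails; a balanced one passes.
print(monobit_test([1, 1, 1, 1, 1, 1, 1, 1]))  # False
print(monobit_test([0, 1, 1, 0, 1, 0, 0, 1]))  # True
```

The function name and tolerance are my own choices for illustration; the point is only that "random" is certified by passing a battery of such checks, not by any single one.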

In the field of computational systems, we have a system known as the Rule 30 cellular automaton. Sampling something like the bits of the central column of Rule 30 produces a numerical sequence that passes all common statistical tests for randomness, but if we look at the 2-dimensional view of its 1-dimensional evolution (with time running from top to bottom), we will see "patterns" of consistency, like triangles.

[Figure: 2-dimensional "view" of the 1-dimensional in-time evolution of Rule 30, showing triangular and other pattern consistencies.]
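For the curious, here is a small sketch of sampling that central column. Rule 30 maps each neighbourhood (left, centre, right) to left XOR (centre OR right); this version uses a wrapped finite row wide enough that boundary effects never reach the centre within the requested number of steps:

```python
def rule30_central_column(steps):
    """Evolve Rule 30 from a single black cell and collect
    the centre cell at each step."""
    width = 2 * steps + 1          # wide enough that edges stay quiet
    row = [0] * width
    row[steps] = 1                 # single black cell in the middle
    column = [row[steps]]
    for _ in range(steps - 1):
        # new cell = left XOR (centre OR right), wrapping at the edges
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
        column.append(row[steps])
    return column

print(rule30_central_column(16))   # starts 1, 1, 0, 1, 1, 1, 0, 0, ...
```

Plotting the full rows instead of just the centre cell is what produces the triangle-filled 2-dimensional picture described above.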

The beauty of the human perceptive system is that if we can identify even something like noise, we have identified a pattern. For there to be no pattern at all would mean that we are not perceiving anything. If by "no pattern" you mean that you cannot see any repetition that would enable you to predict what will happen next, then you are using a very narrow, human-centred understanding of pattern recognition. In reality, everything we can differentiate from another thing is a pattern, and it is by grouping related patterns into bigger patterns, through the process of abstraction, that we arrive, after many levels, at the kind of pattern recognition that enables us humans to understand the world.

Systems like deep learning are enabling us to tear apart visual patterns, extracting "features" and putting things into classes. This is how far we have been able to go as humans for now, but following the algorithmist's mantra, can we do better? I will say yes, we can! If something as hard as multiplication could be made faster, and if to this day we are still finding faster ways to sort data, then with deep learning we are at the cusp of discovering more robust algorithms for automatically extracting patterns or features from data.

In my opinion, the success of deep learning over other methods of pattern recognition, sometimes even trumping humans at image recognition, has put the research community into what I call deep learning paralysis. Lots of people are optimizing away at deep learning hoping to achieve something mystical with it, as if adding more layers and more transformation tricks were going to lead us to something like an AGI. This is a local minimum we have all fallen into, and the earlier we shake ourselves out of it with the question "can we do better?", the better for us.

Given lots of data, expensive GPUs, and lots of time, deep learning works well, sometimes even better than humans, but it is not robust! Basically, I think we should take deep learning as a door on a pathway to discovering new classes of algorithms that work the way it does.

Rather than tinkering deep learning to oblivion, which is what is currently being done in deep learning "research", we should try to understand the system in the most meta way possible, looking philosophically at what it does and trying to engineer other algorithmic paradigms that surpass it.

So much attention has been paid to the metric of comparing recognition accuracy against humans, and that has been a good guideline for development, but now we have to look for other systems, maybe based on deep learning if some PhD magician can make it faster, or elsewhere in the computational universe, for programs that do pattern recognition with less data and do it faster.

The argument that we should wait for faster hardware, or that we cannot do without lots of data, is not a very intelligent one. We should think of better algorithms, maybe entirely new systems. I am still amazed that simple bit shifting can do multiplication! It gives me hope that there are so very many ways to achieve a goal; we just have to explore the solution space. Deep learning has opened our eyes to another dimension of things, and we should not stop here as if it were the end of the universe; rather, we should explore other modalities, strange as they may appear at first, until we arrive at better and better algorithms.
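That bit-shifting trick for multiplication is worth seeing once. The classic shift-and-add algorithm, roughly what simple hardware multipliers do, reduces multiplication of non-negative integers to shifts and additions:

```python
def shift_multiply(a, b):
    """Multiply two non-negative integers using only shifts and adds."""
    result = 0
    while b:
        if b & 1:            # if the lowest bit of b is set,
            result += a      # add the current shifted copy of a
        a <<= 1              # double a  (a * 2)
        b >>= 1              # halve b   (b // 2)
    return result

print(shift_multiply(37, 13))  # 481
```

Nothing about ordinary grade-school multiplication hints that this reformulation exists, which is exactly the point: the solution space for any goal is bigger than the first method we learn.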

Pattern recognition is fundamental to human intelligence. When the imagination synthesizes some new image from the enormous store of all the features we have extracted, not unlike how Google's DeepDream paints all those funny pictures, but more sophisticated, our pattern recognition systems go to work extracting new features from the newly concocted images, sounds, textures, etc. This is going on automatically in our minds all the time, and from time to time we peek in with our conscious cognition and, voilà, we have a new idea!



