
The Biological Evolution of Artificial Intelligence (AI)

Updated: Mar 4, 2019



By Océane Rime


Ever since I started working at Intel a few years ago, I have become very interested in IoT technology and its vast possibilities. Along the way, I came to realize that IoT is closely linked to many other disruptive technologies. Of these, I consider the most important to be Artificial Intelligence.

Artificial Intelligence is the greatest revolution of our century. It aims at replicating human intelligence: the natural way we do things, make decisions, act and react; but it also tries to go beyond our own intelligence. It is enormously valuable, as it can make our lives easier by automating processes, reducing physical work and even helping us with our daily decisions. One key enabler of this fascinating development, along with computing power, is data.

We have recently entered the age of data, mainly thanks to an increase in accessibility and connectivity. Data now governs our world; it is all around us, and one of the greatest sources of added value lies in the ability to make sense of it. It is what we can call the ‘fuel’ of algorithm development. As you might have noticed, the development of algorithms is skyrocketing. We have all heard about “smart” algorithms such as Machine Learning or Deep Learning algorithms, for instance.

Going back to the roots, the word algorithm dates back as far as the year 1240 but has only been used with its modern definition since the late 19th century: a set of rules that precisely defines a sequence of operations. So the word and its meaning are far from new; however, the extension and capabilities brought forward today are quite spectacular.
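To make that definition tangible, here is the textbook example of "a set of rules that precisely defines a sequence of operations": Euclid's algorithm for the greatest common divisor, sketched in Python purely as an illustration.

```python
# Euclid's algorithm: a precisely defined sequence of operations that
# computes the greatest common divisor of two integers.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, remainder) until b is 0
    return a

print(gcd(48, 36))  # 12
```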

This is what I realized during a talk on neuromorphic computing while attending a conference on HPC (high-performance computing) in the south of France. I realized the power, the potential and the outstanding advances we have made in algorithms and computing, and thought that this should be shared more broadly.

We have gone far beyond the conventional approach of programming, which is basically telling the computer exactly what to do. Indeed, we are now developing self-learning algorithms; with the decreasing cost of computing power and the increase in data flow and storage capacity, we can feed algorithms an incredible amount of data so they can learn from a massive dataset and make predictions. In this case, we no longer tell the algorithm what to do: it learns from observing data and figures out solutions to a posed problem on its own. Some of these algorithms are based on what we call neural networks, which tend to replicate how neurons process information and how they interact with each other (through electrical spikes). It is a very promising type of algorithm, as it can solve a large set of problems such as speech recognition, computer vision and natural language processing (real-time translation).
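To give a feel for this shift, here is a minimal sketch of a tiny neural network (pure Python/NumPy; the architecture, learning rate and number of steps are illustrative choices, not taken from any particular system). We never tell it the rule for XOR; it adjusts its weights from four labelled examples until its predictions match.

```python
# A minimal "self-learning" program: a tiny neural network that learns XOR
# from examples instead of being programmed with the rule.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two input bits and their XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialised weights.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge the weights to reduce the prediction error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ grad_out
    W1 -= 1.0 * X.T @ grad_h

print(out.round(2))  # usually close to [[0], [1], [1], [0]] after training
```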

These types of algorithms fall within the frame of Artificial Intelligence. AI appears to have an ever-changing definition, constantly pushed by its technological capacities. To put it simply, it refers to a machine that can show signs of intelligence, which from a machine perspective means mimicking cognitive functions such as learning and problem-solving. Learning, for an algorithm, is the method of processing data and classifying it according to new situations. A decision tree is an example of the learning process: an input follows different variables to reach an outcome, as in the sketch below.
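Here is what "an input follows different variables to reach an outcome" can look like in code. The tree below is hand-written and entirely invented (the loan variables and thresholds are made up for illustration); a real decision tree would be learned from data.

```python
# A hand-written decision tree: each input follows the branches on its
# variables until it reaches a leaf (the outcome). Thresholds are invented.
def approve_loan(income: float, debt: float, employed: bool) -> bool:
    if not employed:
        return False                  # leaf: reject
    if income > 50_000:
        return debt < income * 0.4    # leaf: allow a higher debt ratio
    return debt < income * 0.2        # leaf: stricter rule for lower incomes

print(approve_loan(income=60_000, debt=10_000, employed=True))  # True
print(approve_loan(income=30_000, debt=12_000, employed=True))  # False
```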


The two main categories of learning algorithms are Machine Learning and Deep Learning; both encompass several other categories of specific learning algorithms, which makes the definition of AI very broad and complex.

Machine Learning is already part of our daily lives. It focuses on computer programs or algorithms that can change when exposed to new data. It does not necessarily follow a static and strict pattern and can learn from other algorithms. Going back to the example of the decision tree, when the tree overfits the data, the algorithm can transform the tree into a set of rules. Machine Learning algorithms are used in your spam filter, the autocomplete suggestions when you type in the Google search bar, the prediction that if you buy sugar and flour you might buy eggs as well, your movie suggestions on Netflix, or your bank's decision on whether you are entitled to a loan.
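To make the spam-filter example concrete, here is a toy sketch using scikit-learn. The handful of messages is invented purely for illustration; a real filter would learn from millions of labelled emails, but the principle is the same: the model picks up word patterns from examples rather than following hand-written rules.

```python
# A toy spam filter: learn word patterns from a few labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap pills limited offer",
    "claim your free reward", "meeting moved to 3pm",
    "lunch tomorrow?", "project report attached",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Count the words in each message, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize offer"]))        # likely ['spam']
print(model.predict(["report for the meeting"]))  # likely ['ham']
```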

Deep Learning, as you might have guessed, goes a little deeper. This kind of algorithm is considered a subset of Machine Learning and tends to replicate neural systems composed of multiple layers of neurons, which means multiple processing layers for the algorithm. These algorithms can learn from unstructured data, though for now they can only focus on a particular set of data. Deep Learning algorithms are really efficient for image recognition, classification and speech recognition, for instance.
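The "multiple processing layers" are easy to see in code. Below is a minimal Keras sketch of a small network for classifying 28x28 grayscale images into 10 classes (MNIST-style digits); the layer sizes are illustrative choices, and training is only indicated in the comments.

```python
# A minimal "deep" network: several stacked processing layers.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),    # turn the image into a vector
    layers.Dense(128, activation="relu"),    # first hidden layer
    layers.Dense(64, activation="relu"),     # second hidden layer
    layers.Dense(10, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training is then a single call on labelled images, for example the MNIST
# digits shipped with Keras:
# (x_train, y_train), _ = keras.datasets.mnist.load_data()
# model.fit(x_train / 255.0, y_train, epochs=5)
```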


Artificial Intelligence is much closer to biology than you think

Thanks to the maturation of the computing industry and increased interest in and research on how the brain operates, we stand at a time when biological neural networks inspire the most advanced developments in algorithms. And not only that: they also inspire the development of the most advanced hardware, especially chips. We call them neuromorphic chips.

Neuromorphic computing refers to a system that mimics the neurobiological architecture present in the nervous system. This kind of architecture tends to model the massively parallel way the brain processes information, as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli.
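A rough feel for the behaviour this hardware models can be had from a few lines of Python: the sketch below simulates a single leaky integrate-and-fire neuron, a standard simplified neuron model, which accumulates input current and emits a "spike" when its potential crosses a threshold. All constants are illustrative and are not taken from any particular chip.

```python
# One leaky integrate-and-fire neuron: integrate input, leak, spike, reset.
dt, steps = 1.0, 100            # 1 ms time step, 100 ms of simulation
tau, threshold, reset = 20.0, 1.0, 0.0
input_current = 0.06            # constant input drive (arbitrary units)
v = 0.0                         # membrane potential
spikes = []

for t in range(steps):
    # Leaky integration: the potential decays toward rest while the input
    # pushes it upward.
    v += dt * (-v / tau + input_current)
    if v >= threshold:          # fire a spike and reset, like a real neuron
        spikes.append(t)
        v = reset

print(f"Spike times (ms): {spikes}")
```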

It was first developed in the late 80s but has really taken off in the last 10-20 years, with a sharp increase in research, publications and patents. Funding was a major issue in the development of this technology, as its outcomes could not easily be monetized, but large-scale projects are now in place, with two of the most advanced programs running under the Human Brain Project, which allocates more than $1 billion per decade to research. These projects aim to develop brain simulation and brain-inspired computing. As quoted in The Economist,

“Computers will help people to understand brains better. And understanding brains will help people to build better computers.”

To build better computers, neuromorphic engineering focuses on characteristics that the brain has but computers do not. One of them is power consumption: researchers have estimated the brain's power draw at around 20 watts, whereas supercomputers usually consume megawatts. Then comes the absence of explicit programming (brains learn and change spontaneously as they interact with the environment instead of following the fixed pattern of a predetermined algorithm). And finally, fault tolerance: while brains lose neurons all the time and keep working, losing a single transistor can break an entire microprocessor.


We are not there yet, but this type of computing is truly stunning. Neuromorphic chips consume far less energy and have enough computing power to handle Artificial Intelligence algorithms locally, without offloading the processing to the cloud or a server as a conventional CPU does. The usual way of making chips is evolving toward new models of computing that are more flexible and more powerful, including FPGAs, quantum computing and neuromorphic computing.

The neuromorphic evolution is expanding rapidly; it sounds promising but also challenging for humanity as a whole. Thanks to incredible advances in science and biology, we are developing the most advanced technologies of all time. But as with most scientific and technological inventions, we do not know the extent of their use until they are fully embedded in our society. The uses of an artificial brain could be endless. These discoveries might redefine our relationships with each other, our sense of self-esteem and the deepest traits of humankind.

What we should understand is that the time of Artificial Intelligence is now. Three main ingredients are needed to create any sort of AI: the hardware (compute and memory), the software (algorithms) and the data. All of these we are coming to master with the development of neuromorphic chips, deep learning algorithms and massive amounts of readily available data.

In the last six years, some of the biggest brands (Google, Apple, Intel) have acquired a large number of innovative tech companies working on AI, most of them developing visual, speech and language recognition: technologies that could be applied to real-world problems, starting with autonomous driving, improvements in cybersecurity and much more.

Computing development is moving to another level, with systems intrinsically linked to our biology. Some of those systems aim at augmenting human capacities such as vision, concentration, memory, learning and language. The US Army is testing a concentration helmet that helps soldiers stay focused on a particular target, for instance. The health industry is using technology to treat our disabilities and improve our capabilities, with glasses for blind people, electronic prostheses and real-time connected chip trackers for improving our immune system. We are still at the start of such integration, but the line between biology and technology is blurring. In the meantime, Machine Learning and Deep Learning algorithms are taking over our world, while the likelihood of creating biased algorithms is real.

With such developments, it is critical to bring an ethical and moral sense into the creation process. We stand before great changes, but whether those changes will impact humanity positively or negatively depends largely on our current choices: we are the creators, and the code is up to us.

First published on LinkedIn on April 28, 2017

By Océane Rime - Everything IoT Digital Project Manager

