Belgian researchers have found ways of mimicking the human brain to improve sensors and the way they pass data to central computers
By Pat Brans, Pat Brans Associates / Grenoble Ecole de Management
Published: 06 Apr 2023
The human brain is much more efficient than the world’s most powerful computers. With an average volume of about 1,260cm³, it consumes only about 12W of power.
Using this biological marvel, the average person learns a very large number of faces in very little time and can then recognise any of those faces right away, regardless of expression. People can also glance at a picture and recognise objects from a seemingly infinite number of categories.
Compare that to the most powerful supercomputer in the world, Frontier, which runs at Oak Ridge National Laboratory, spanning 372m² and consuming 40 million watts of power at peak. Frontier processes massive amounts of data to train artificial intelligence (AI) models to recognise a large number of human faces, as long as the faces aren’t showing unusual expressions.
But the training process consumes a lot of energy – and while the resulting models run on smaller computers, they still use a lot of energy. Moreover, the models generated by Frontier can only recognise objects from a few hundred categories – for example, person, dog, car, and so on.
Scientists know some things about how the brain works. They know, for example, that neurons communicate with each other using spikes – brief electrical pulses fired when a neuron’s accumulated potential crosses a threshold. Scientists have used brain probes to look deeply into the human cortex and register neuronal activity. Those measurements show that a typical neuron spikes only a few times per second, which is very sparse activation. On a very high level, this and other basic principles are clear. But the way neurons compute, the way they participate in learning, and the way connections are made and remade to form memories is still a mystery.
Nevertheless, many of the principles researchers are working on today are likely to be part of a new generation of chips that replace central processing units (CPUs) and graphics processing units (GPUs) 10 or more years from now. Computer designs are also likely to change, moving away from what is called the von Neumann architecture, where processing and data sit in different locations and share a bus to transfer information.
New architectures will, for example, collocate processing and storage, as in the brain. Researchers are borrowing this concept and other features of the human brain to make computers faster and more power efficient. This field of study is known as neuromorphic computing, and a lot of the work is being done at the Interuniversity Microelectronics Centre (Imec) in Belgium.
“We tend to think that spiking behaviour is the fundamental level of computation within biological neurons. But there are much deeper-lying computations going on that we don’t understand – probably down to the quantum level,” says Ilja Ocket, programme manager for Neuromorphic Computing at Imec.
“Even between quantum effects and the high-level behavioural model of a neuron, there are other intermediate functions, such as ion channels and dendritic calculations. The brain is much more complicated than we know. But we’ve already found some aspects we can mimic with today’s technology – and we are already getting a very big payback.”
There is a spectrum of techniques and optimisations that are partially neuromorphic and have already been industrialised. For example, GPU designers are already implementing some of what has been learned from the human brain; and computer designers are already reducing bottlenecks by using multilayer memory stacks. Massive parallelism is another bio-inspired principle used in computers – for example, in deep learning.
Nevertheless, it is very hard for researchers in neuromorphic computing to make inroads because there is already so much momentum behind traditional architectures. So rather than try to cause disruption in the computer world, Imec has turned its attention to sensors. Researchers at Imec are looking for ways to “sparsify” data and to exploit that sparsity to accelerate processing in sensors and reduce energy consumption at the same time.
“We focus on sensors that are temporal in nature,” says Ocket. “This includes audio, radar and lidar. It also includes event-based vision, which is a new type of vision sensor that isn’t based on frames but works instead on the principle of your retina. Every pixel independently sends a signal if it senses a significant change in the amount of light it receives.
“We borrowed these ideas and developed new algorithms and new hardware to support these spiking neural networks. Our work now is to demonstrate how low power and low latency this can be when integrated onto a sensor.”
Spiking neural networks on a chip
A neuron accumulates input from all the other neurons it is connected to. When the membrane potential reaches a certain threshold, the axon – the connection coming out of the neuron – emits a spike. This is one of the ways your brain performs computation. And this is what Imec now does on a chip, using spiking neural networks.
“We use digital circuits to emulate the leaky integrate-and-fire behaviour of biological spiking neurons,” says Ocket. “They are leaky in the sense that while they integrate, they also lose a bit of voltage on their membrane; they are integrating because they accumulate spikes coming in; and they are firing because the output fires when the membrane potential reaches a certain threshold. We mimic that behaviour.”
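The leaky integrate-and-fire behaviour Ocket describes can be sketched in a few lines of Python. This is an illustrative toy model, not Imec’s implementation; the decay factor, threshold and weights are made-up values.

```python
# Toy leaky integrate-and-fire (LIF) neuron. All constants are illustrative.

def lif_step(v, spikes_in, weights, decay=0.9, threshold=1.0):
    """Advance one LIF neuron by one time step.

    v         : current membrane potential
    spikes_in : 0/1 spikes from presynaptic neurons
    weights   : synaptic weight per input
    Returns (new_potential, output_spike).
    """
    v = v * decay                                    # leak: potential decays
    v += sum(w * s for w, s in zip(weights, spikes_in))  # integrate inputs
    if v >= threshold:                               # fire: spike and reset
        return 0.0, 1
    return v, 0

# Drive the neuron with a short burst of input spikes.
v = 0.0
out = []
for t in range(6):
    spikes = [1, 1] if t < 3 else [0, 0]
    v, s = lif_step(v, spikes, weights=[0.3, 0.3])
    out.append(s)
# Six time steps of input produce a single output spike: sparse activation.
```

Note how the neuron stays silent until enough weighted input accumulates, then fires once and resets; when the input stops, no output spikes are produced at all.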
The benefit of that mode of operation is that until data changes, no events are generated, and no computations are done in the neural network. Consequently, no energy is used. The sparsity of the spikes within the neural network intrinsically offers low power consumption because computing does not occur constantly.
A spiking neural network is said to be recurrent when it has memory. A spike is not just computed once. Instead, it reverberates through the network, creating a form of memory that allows the network to recognise temporal patterns, much as the brain does.
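The idea of a spike reverberating through the network can be illustrated with a toy recurrent spiking layer, in which each neuron’s own output spikes are fed back as inputs on the next time step. The weights and sizes below are invented for illustration; a single input spike at the first step keeps echoing between the two neurons.

```python
# Toy recurrent spiking layer: output spikes are fed back on the next step,
# so past activity "reverberates". Weights and sizes are illustrative only.

def recurrent_step(v, spikes_prev, x, w_in, w_rec, decay=0.9, thresh=1.0):
    """One step of a tiny recurrent spiking layer.

    v           : membrane potentials, one per neuron
    spikes_prev : this layer's own spikes from the previous step
    x           : external input spikes
    """
    spikes_out = []
    for i in range(len(v)):
        v[i] = v[i] * decay                           # leak
        v[i] += sum(w_in[i][j] * x[j] for j in range(len(x)))
        v[i] += sum(w_rec[i][j] * spikes_prev[j]      # feedback from own spikes
                    for j in range(len(spikes_prev)))
        if v[i] >= thresh:                            # fire and reset
            spikes_out.append(1)
            v[i] = 0.0
        else:
            spikes_out.append(0)
    return v, spikes_out

# One external spike at t=0; the recurrent weights keep it bouncing between
# the two neurons on every following step.
v = [0.0, 0.0]
spikes = [0, 0]
w_in = [[1.2, 0.0], [0.0, 1.2]]
w_rec = [[0.0, 1.1], [1.1, 0.0]]
history = []
for t in range(4):
    x = [1, 0] if t == 0 else [0, 0]
    v, spikes = recurrent_step(v, spikes, x, w_in, w_rec)
    history.append(spikes)
```

Even after the external input stops, the layer keeps spiking: the feedback connections act as a short-term memory of the earlier event.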
Using spiking neural network technology, a sensor transmits tuples that include the X coordinate and the Y coordinate of the pixel that’s spiking, the polarity (whether it’s spiking upward or downward) and the time it spikes. When nothing happens, nothing is transmitted. On the other hand, if things change in a lot of places at once, the sensor creates a lot of events, which becomes a problem because of the size of the tuples.
To minimise this surge in transmission, the sensor does some filtering, deciding how much bandwidth to output based on the dynamics of the scene. In the case of an event-based camera, if everything in the scene changes at once, the camera generates too much data; a frame-based system handles that situation better because it has a constant data rate. To overcome this problem, designers put a lot of intelligence on the sensor to filter data – one more way of mimicking human biology.
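The event tuples and the on-sensor filtering described above can be sketched as follows. The `Event` structure and the rate-limiting strategy are hypothetical illustrations of the idea, not Imec’s or any vendor’s actual format.

```python
from collections import namedtuple

# Hypothetical event tuple matching the description above: pixel coordinates,
# polarity (+1 brighter, -1 darker) and a timestamp in microseconds.
Event = namedtuple("Event", ["x", "y", "polarity", "t_us"])

def rate_limit(events, max_events_per_window, window_us=1000):
    """Naive on-sensor filter: cap the number of events per time window.

    When the scene is quiet, everything passes; when too much changes at
    once, surplus events in that window are dropped.
    """
    out, window_start, count = [], None, 0
    for e in sorted(events, key=lambda e: e.t_us):
        if window_start is None or e.t_us - window_start >= window_us:
            window_start, count = e.t_us, 0            # open a new window
        if count < max_events_per_window:
            out.append(e)
            count += 1
    return out

# A quiet stream with one burst: five events in the same millisecond window,
# then a single later event.
evs = [Event(x=0, y=0, polarity=1, t_us=t) for t in range(5)]
evs.append(Event(x=1, y=1, polarity=-1, t_us=2000))
kept = rate_limit(evs, max_events_per_window=3)
```

A real sensor would use far smarter, scene-aware filtering, but the sketch shows the key trade: bounded output bandwidth at the cost of dropping events during bursts.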
“The retina has 100 million receptors, which is like having 100 million pixels in your eye,” says Ocket. “But the optic nerve that carries its output to your brain has only a million channels. So the retina performs 100-times compression – and this is real computation. Certain features are detected, like motion from left to right, from top to bottom, or little circles. We are trying to mimic the filtering algorithm that goes on in the retina in these event-based sensors, which operate at the edge and feed data back to a central computer. You might think of the computation going on in the retina as a form of edge AI.”
People have been mimicking spiking neurons in silicon since the 1980s. But the main obstacle preventing this technology from reaching a market or any kind of real application was training spiking neural networks as efficiently and conveniently as deep neural networks are trained. “Once you establish good mathematical understanding and good techniques to train spiking neural networks, the hardware implementation is almost trivial,” says Ocket.
In the past, people would build spiking into their network chips and then do a lot of fine-tuning to get the neural networks to do something useful. Imec took another approach, developing algorithms in software that showed that a given configuration of spiking neurons with a given set of connections would perform to a certain level. Then they built the hardware.
This kind of breakthrough in software and algorithms is unconventional for Imec, where progress is usually in the form of hardware innovation. Something else that was unconventional for Imec was that they did all this work in standard CMOS, which means their technology can be quickly industrialised.
The future impact of neuromorphic computing
“The next direction we’re taking is towards sensor fusion, which is a hot topic in automotive, robotics, drones and other domains,” says Ocket. “A good way of achieving very high-fidelity 3D perception is to combine multiple sensory modalities. Spiking neural networks will allow us to do that with low power and low latency. Our target is to develop a chip specifically for sensor fusion in 2023.
“We aim to fuse multiple sensor streams into a coherent and complete 3D representation of the world. Like the brain, we don’t want to have to think about what comes from the camera versus what comes from the radar. We are going for an intrinsically fused representation.
“We’re hoping to show some very relevant demos for the automotive industry – and for robotics and drones across industries – where the performance and the low latency of our technology really shines,” says Ocket. “First we’re looking for breakthroughs in solving certain corner cases in automotive perception or robotics perception that aren’t possible today because the latency is too high, or the power consumption is too high.”
Two other things Imec expects to happen in the market are the use of event-based cameras and sensor fusion. Event-based cameras have a very high dynamic range and a very high temporal resolution. Sensor fusion might take the form of a single module with cameras in the middle, some radar antennas around it, maybe a lidar, and data is fused on the sensor itself, using spiking neural networks.
But even when the market takes up spiking neural networks in sensors, the larger public may not be aware of the underlying technology. That will probably change when the first event-based camera gets integrated into a smartphone.
“Let’s say you want to use a camera to recognise your hand gestures as a form of human-machine interface,” explains Ocket. “If that were done with a regular camera, it would constantly look at each pixel in each frame. It would snap a frame, and then decide what’s happening in the frame. But with an event-based camera, if nothing is happening in its field of view, no processing is carried out. It has an intrinsic wake-up mechanism that you can exploit to only start computing when there’s sufficient activity coming off your sensor.”
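The intrinsic wake-up mechanism Ocket describes amounts to gating expensive computation on sensor activity. A minimal sketch, with made-up thresholds and window sizes:

```python
# Illustrative wake-up gating for an event-based camera: the host processor
# stays idle until enough events arrive in a short window. The threshold and
# per-window counts are invented values, not from Imec.

def process_stream(event_counts_per_window, wake_threshold=50):
    """Given per-window event counts from the sensor, decide per window
    whether to run the (expensive) gesture-recognition pipeline."""
    decisions = []
    for count in event_counts_per_window:
        if count >= wake_threshold:
            decisions.append("compute")   # enough activity: wake and process
        else:
            decisions.append("idle")      # scene is static: skip computation
    return decisions

# A mostly static scene with one burst of hand movement in the middle.
decisions = process_stream([0, 2, 0, 120, 95, 3, 0])
```

With a frame-based camera, every window would be a “compute”; here, the recognition pipeline runs only during the burst of activity, which is where the power saving comes from.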
Human-machine interfaces could suddenly become a lot more natural, all thanks to neuromorphic sensing.