Brain-Inspired AI Learns Like Humans

Summary: Today’s artificial intelligence can read, speak and analyze data, but it still has major limitations. NeuroAI researchers have designed a new model of artificial intelligence inspired by the efficiency of the human brain.

This model allows AI neurons to receive feedback and adapt in real time, improving learning and memory processes. The innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer together.

Key facts:

  1. Inspired by the brain: A new model of artificial intelligence is based on how human brains efficiently process and manipulate data.
  2. Real-time adaptation: AI neurons can receive feedback and adjust on the fly, increasing efficiency.
  3. Potential impact: This breakthrough could pioneer a new generation of artificial intelligence that learns like humans, boosting both the fields of artificial intelligence and neuroscience.

Source: CSHL

It reads. It speaks. It crunches mountains of data and recommends business decisions. Today’s AI might seem more human than ever. However, AI still has several major shortcomings.

“As impressive as ChatGPT and all these current AI technologies are, they are still very limited in terms of interacting with the physical world. Even in the things they do, like solving math problems and writing essays, they go through billions and billions of training examples before they get good at it,” explains Cold Spring Harbor Laboratory (CSHL) NeuroAI Scholar Kyle Daruwalla.

Daruwalla looked for new, unconventional ways to design AI that could overcome such computational hurdles. And maybe he just found one.

A new machine learning model provides evidence for a previously unproven theory that correlates working memory with learning and academic performance. Credit: Neuroscience News

The key was data movement. Today, most of the energy consumed by modern computers comes from shuttling data back and forth. In artificial neural networks, which are made up of billions of connections, that data can have a very long way to travel.

To find a solution, Daruwalla looked for inspiration in one of the most computationally powerful and energy-efficient machines in existence—the human brain.

Daruwalla proposed a new way for AI algorithms to move and process data much more efficiently based on how our brains take in new information. The design allows individual AI “neurons” to receive feedback and adapt on the fly, rather than waiting for the entire circuit to update simultaneously. Thus, the data does not have to travel as far and is processed in real time.
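To make the contrast concrete, here is a minimal sketch (not the authors’ implementation; the network sizes, activation, and feedback signal are placeholders) of a purely local update scheme, where each layer adjusts its weights from its own pre- and post-synaptic activity plus a single scalar feedback signal, so no error has to travel back through the whole circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy three-layer network; the sizes and activation are placeholders.
sizes = [8, 16, 16, 4]
weights = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Forward pass that keeps each layer's input and output for local updates."""
    acts = [x]
    for W in weights:
        x = np.tanh(W @ x)
        acts.append(x)
    return acts

def local_update(acts, feedback, lr=1e-2):
    """Each layer updates its own weights from locally available activity,
    scaled by a single scalar feedback signal; no gradients are passed
    back through other layers."""
    for i, W in enumerate(weights):
        pre, post = acts[i], acts[i + 1]
        W += lr * feedback * np.outer(post, pre)    # Hebbian-style outer product

x = rng.normal(size=sizes[0])
acts = forward(x)
feedback = 1.0    # stand-in for a reward- or error-derived modulatory signal
local_update(acts, feedback)
```

Because each layer touches only information it already holds, the update can happen as soon as the activity and the feedback arrive, which is the “on the fly” behavior described above.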

“In our brains, our connections are constantly changing and adapting,” says Daruwalla. “It’s not like you put everything on hold, adjust, and then continue to be you.”

A new machine learning model provides evidence for a previously unproven theory that correlates working memory with learning and academic performance. Working memory is a cognitive system that allows us to stay on task while recalling stored knowledge and experience.

“There are theories in neuroscience about how working memory circuits might help facilitate learning. But there hasn’t been anything as concrete as our rule that actually ties the two together.

“And that was one of the nice things we came across here. The theory led to a rule where adjusting each synapse individually required working memory to sit alongside it,” says Daruwalla.

Daruwalla’s design could help usher in a new generation of AI that learns like we do. That would not only make AI more efficient and accessible, it would also be a full-circle moment for neuroAI. Neuroscience was feeding AI valuable insight long before ChatGPT uttered its first digital syllable. It looks like AI may soon return the favor.

About this AI research report

Author: Sara Giarnieri
Source: CSHL
Contact: Sara Giarnieri – CSHL
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience


Abstract

Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates

Deep feedforward neural networks are effective models for a wide range of problems, but training and deploying such networks comes at a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed properly on neuromorphic computing hardware.

Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware remains an open research problem. The primary obstacle is that backpropagation, the procedure that makes such deep artificial networks trainable, is biologically implausible.

Neuroscientists aren’t sure how the brain would propagate the precise error signal back through the network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains elusive.
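For readers unfamiliar with the weight transport problem: in standard backpropagation, the backward pass reuses the transposed forward weights to carry the error, something a biological circuit has no obvious way to do. The toy sketch below (our notation, not the paper’s code) shows that step, along with feedback alignment, one well-known partial workaround that swaps in a fixed random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 3, 5
W = rng.normal(size=(n_out, n_in))    # forward weights of one layer
B = rng.normal(size=(n_in, n_out))    # fixed random feedback weights (feedback alignment)

x = rng.normal(size=n_in)
z = W @ x
h = np.tanh(z)

delta_h = rng.normal(size=n_out)      # error w.r.t. this layer's output, from the layer above
delta_z = delta_h * (1.0 - h**2)      # back through the tanh nonlinearity

# Exact backprop: the backward pass reuses the transposed forward weights,
# which is the biologically questionable "weight transport" step.
delta_x_backprop = W.T @ delta_z

# Feedback alignment, one proposed partial fix: carry the error through a
# fixed random matrix B instead of W's transpose.
delta_x_fa = B @ delta_z
```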

In contrast, newer learning rules based on the information bottleneck (IB) train each layer of the network independently, bypassing the need to propagate errors across layers. Instead, error propagation is implicit through the layers’ forward connectivity.
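For reference, the textbook information bottleneck objective trades off compressing the input against keeping information relevant to the target; applied layer by layer to a hidden representation T_l, it reads roughly as below (this is the standard form, not necessarily the exact objective used in the paper):

```latex
\min_{p(t_\ell \mid x)} \; I(X; T_\ell) \;-\; \beta \, I(T_\ell; Y)
```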

These rules take the form of a three-factor Hebbian update in which a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires multiple samples to be processed at once, whereas the brain sees only one sample at a time.
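Schematically (our notation, with a deliberately simplified stand-in for the global signal), a three-factor update multiplies local pre- and post-synaptic activity by a layer-wide modulator, and the catch is that this modulator is estimated from a whole batch of samples:

```python
import numpy as np

rng = np.random.default_rng(2)

def global_modulator(layer_acts_batch, targets_batch):
    """Toy stand-in for the batch-dependent global signal: the correlation
    between a layer's mean activity and the target, estimated across a
    whole batch. The key point is that it needs many samples at once."""
    a = layer_acts_batch.mean(axis=1)              # one summary value per sample
    t = targets_batch.astype(float)
    a = (a - a.mean()) / (a.std() + 1e-8)
    t = (t - t.mean()) / (t.std() + 1e-8)
    return float(np.mean(a * t))

def three_factor_step(W, pre, post, modulator, lr=1e-3):
    """Three-factor Hebbian update: local pre/post activity gated by the
    global modulator."""
    return W + lr * modulator * np.outer(post, pre)

batch_acts = rng.normal(size=(32, 16))             # 32 samples, 16 units in the layer
batch_targets = rng.integers(0, 2, size=32)
m = global_modulator(batch_acts, batch_targets)    # needs the full batch

W = rng.normal(scale=0.1, size=(16, 8))            # synapses from an 8-unit layer below
pre, post = rng.normal(size=8), rng.normal(size=16)
W = three_factor_step(W, pre, post, m)             # the update itself is purely local
```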

We propose a novel three-factor update rule in which the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network.
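The sketch below only gestures at the general idea, under the assumption that the auxiliary memory can be approximated by a leaky running summary of recent samples; the paper’s actual memory network, and the statistics it tracks, are more involved:

```python
import numpy as np

class LeakyMemory:
    """Hypothetical auxiliary memory: a leaky running summary of the
    statistics the global signal needs, updated one sample at a time."""
    def __init__(self, dim, decay=0.95):
        self.state = np.zeros(dim)
        self.decay = decay

    def update(self, sample_stats):
        self.state = self.decay * self.state + (1 - self.decay) * sample_stats
        return self.state

rng = np.random.default_rng(3)
memory = LeakyMemory(dim=2)

def online_modulator(layer_act, target):
    """Single-sample estimate of the global signal: the memory's running
    averages stand in for the batch statistics used previously."""
    a = layer_act.mean()                            # summary of this sample's activity
    running_a, running_t = memory.update(np.array([a, float(target)]))
    return (a - running_a) * (float(target) - running_t)

for _ in range(100):                                # a stream of single samples
    act = rng.normal(size=16)
    target = int(rng.integers(0, 2))
    m = online_modulator(act, target)               # immediately usable to gate a
                                                    # three-factor synaptic update
```

With a running summary in place of batch statistics, each sample can trigger its own modulated synaptic update, which is what makes the rule compatible with seeing one example at a time.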

We demonstrate performance comparable to baselines on image classification tasks. Interestingly, unlike backpropagation-like schemes, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit.

We explore the implications of this link in initial experiments examining the effect of memory capacity on learning performance. Going forward, this work suggests an alternative view of learning in which each layer balances memory-informed compression against task performance.

This view naturally includes several key aspects of neural computing, including memory, efficiency, and locality.
