Summary: Today's AI can read, speak, and analyze data, but it still has serious limitations. NeuroAI researchers have created a new AI model inspired by the efficiency of the human brain.
This model allows AI neurons to receive feedback and adjust in real-time, enhancing learning and memory processes. This innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer to each other.
Important facts:
- Inspired by the brain: The new AI model is based on how the human brain efficiently processes and adjusts to data.
- Real-time adjustment: AI neurons can receive feedback and adjust instantly, improving efficiency.
- Potential impact: This breakthrough could lead to a new generation of AI that learns like humans, advancing both AI and neuroscience.
Source: CSHL
It reads. It speaks. It collects tons of data and recommends business decisions. Today's artificial intelligence may seem more human than ever. However, AI still has several serious shortcomings.
“ChatGPT and all of these existing AI techniques are impressive, yet they are very limited when it comes to interacting with the physical world. Even the things they do, like solving math problems and writing essays, require billions and billions of training examples before they can do them well,” explains Cold Spring Harbor Laboratory (CSHL) NeuroAI Scholar Kyle Daruwalla.
Daruwalla is exploring new, unconventional ways to design AI that can overcome such computational hurdles. And he may have just found a way.
The key issue was data movement. Most of the energy consumed by modern computing goes into shuttling data around. In artificial neural networks, which are made up of billions of connections, that data may have to travel very long distances.
So, to find a solution, Daruwalla took inspiration from one of the most computationally powerful and energy-efficient machines – the human brain.
Daruwalla has designed a new way for AI algorithms to transfer and process data more efficiently, based on how our brain takes in new information. This design allows individual AI “neurons” to receive feedback and adjust immediately rather than waiting for the entire circuit to update at once. This way, data doesn't need to travel very far and gets processed in real time.
“In our brains, our connections are changing and adjusting all the time,” says Daruwalla. “It's not like you stop everything, adjust, and then you're back to who you are again.”
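To make the idea concrete, here is a minimal Python sketch of per-layer, immediate adjustment. It illustrates the general principle rather than the paper's actual rule: the layer sizes, the learning rate, and the scalar feedback signal m are assumptions made purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network. Under back-propagation, W1 cannot change until
# the error has traveled all the way back from the output. In a local
# scheme, each layer adjusts as soon as its own activity is available.
W1 = rng.normal(scale=0.1, size=(8, 4))
W2 = rng.normal(scale=0.1, size=(4, 2))
lr, m = 0.01, 1.0   # learning rate and a stand-in scalar feedback signal

x = rng.normal(size=8)
h = np.tanh(x @ W1)
W1 += lr * m * np.outer(x, h)   # layer 1 updates immediately...

y = np.tanh(h @ W2)
W2 += lr * m * np.outer(h, y)   # ...and layer 2 follows, independently
```

The key point is the ordering: each weight matrix changes the moment its layer's activity is computed, instead of waiting for a full forward and backward pass over the entire circuit.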
The new machine-learning model provides evidence for a yet unproven theory that correlates working memory with learning and academic performance. Working memory is the cognitive system that enables us to stay on task by recalling stored knowledge and experiences.
“There are theories in neuroscience about how working memory circuits might support learning. But there’s nothing as concrete as our rule that actually ties these two together.
“And so that was one of the cool things we found here. The theory led to a rule that each synapse had to be individually adjusted to have this working memory sitting with it,” Daruwalla says.
Daruwalla's design could help pioneer a new generation of AI that learns like we do. Not only would it make AI more efficient and accessible; it would also be something of a full-circle moment for neuroAI. Neuroscience was feeding valuable data to AI long before ChatGPT uttered its first digital syllable. It looks like AI may soon return the favor.
About this artificial intelligence research news
Author: Sarah Giarnieri
Source: CSHL
Contact: Sarah Giarnieri – CSHL
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Information bottleneck-based Hebbian learning rule naturally links working memory and synaptic updating” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience
Abstract
Information bottleneck-based Hebbian learning rule naturally links working memory and synaptic updating
Deep feedforward neural networks are effective models for a wide range of problems, but training and deploying such networks is energy-intensive. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential remedy when properly deployed on neuromorphic computing hardware.
Nevertheless, many applications train SNNs offline, and running network training directly on neuromorphic hardware remains an open research problem. The primary obstacle is that back-propagation, which makes it possible to train such deep artificial networks, is biologically implausible.
Neuroscientists are unsure how the brain could propagate an accurate error signal backwards through a network of neurons. Recent advances address parts of this question, such as the weight transport problem, but a complete solution remains elusive.
In contrast, novel learning rules based on the information bottleneck (IB) train each layer of the network independently, eliminating the need to propagate errors across layers. Instead, propagation is implicit due to the feedforward connectivity of the layers.
These rules take the form of three-factor Hebbian updates, in which a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires multiple samples to be processed simultaneously, whereas the brain sees only one sample at a time.
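As a rough sketch of what a three-factor update of this kind looks like, the example below gates a Hebbian pre-times-post term with a single global scalar computed over a batch. The statistic used for the global signal is a placeholder assumption, not the actual IB objective; it only illustrates that estimating the third factor requires many samples at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# One layer trained in isolation; shapes are illustrative.
W = rng.normal(scale=0.1, size=(16, 8))

def three_factor_update(W, pre, post, lr=0.01):
    # Factor 3: a global scalar for the layer. This batch statistic is a
    # placeholder for an IB-style objective, which likewise needs many
    # samples processed together to be estimated.
    g = float(np.mean(post**2))
    # Factors 1 and 2: correlation of pre- and postsynaptic activity.
    hebb = pre.T @ post / len(pre)
    return W + lr * g * hebb

X = rng.normal(size=(32, 16))   # a batch of 32 samples, needed for g
H = np.tanh(X @ W)
W = three_factor_update(W, X, H)
```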
We propose a new three-factor update rule in which the global signal accurately captures the information in the samples through an auxiliary memory network. The auxiliary network can be trained independently of the dataset being used with the primary network.
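A hedged sketch of the workaround: derive the global factor from a memory that is updated one sample at a time, so no batch is ever needed. The exponential trace below is a deliberate simplification standing in for the paper's trained auxiliary memory network.

```python
import numpy as np

rng = np.random.default_rng(2)

W = rng.normal(scale=0.1, size=(16, 8))

class MemoryTrace:
    """Stand-in for the auxiliary memory network: an exponential running
    trace of per-sample statistics. (The paper trains a separate memory
    network; this fixed trace is a simplification for illustration.)"""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.trace = 0.0

    def update(self, stat):
        self.trace = self.decay * self.trace + (1 - self.decay) * stat
        return self.trace

memory = MemoryTrace()

for _ in range(100):                         # a stream of single samples
    x = rng.normal(size=16)
    h = np.tanh(x @ W)
    g = memory.update(float(np.mean(h**2)))  # global factor from memory
    W += 0.01 * g * np.outer(x, h)           # same three-factor Hebbian form
```

Because the memory carries statistics across samples, each synaptic update still sees a global signal, yet the circuit never pauses for a batch, mirroring the link between working memory and learning described above.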
We demonstrate comparable performance to the baseline on image classification tasks. Interestingly, unlike schemes such as back-propagation, where there is no connection between learning and memory, our rule introduces a direct link between working memory and synaptic updating. To the best of our knowledge, this is the first rule to make this link explicit.
We explore these implications in preliminary experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternative view of learning where each layer balances memory-informed compression against task performance.
This approach naturally incorporates several key aspects of neural computing, including memory, efficiency, and locality.