Research team proposes a fix for an ongoing problem in AI learning
The team investigates a perplexing issue that has long been suspected in deep learning models but has received little attention: many deep learning agents engaged in continual learning gradually and substantially lose their capacity to learn.
“Our findings indicate that deep learning eventually gives up when continual adaptation is required. It becomes effectively impossible for it to learn anything more.”
He adds that, beyond losing the capacity to learn new information, the AI agent can no longer be retrained to recover previously acquired knowledge once it has been lost.
The phenomenon was named “loss of plasticity,” a term borrowed from neuroscience, where plasticity refers to the brain’s capacity to reorganize itself and form new neural connections.
The current state of deep learning
According to the researchers, loss of plasticity is a major barrier to developing artificial intelligence that is on par with human intelligence and can efficiently handle the complexity of the environment.
Many current models are not designed for lifelong learning. Sutton cites ChatGPT as an example: it is not a perpetual learner. Instead, its developers train the model for a fixed period; once training is complete, the model is deployed without any further learning.
Even with this approach, it can be difficult to fold both new and old data into a model’s memory. Often the more effective option is to discard what the model has learned and retrain it on everything from scratch, which can be an expensive and time-consuming process for large models like ChatGPT.
It also limits what such models can do. Sutton contends that dynamic, fast-moving settings such as financial markets demand continual learning.
Hidden in plain sight
The group concluded that the first step in addressing loss of plasticity was to demonstrate that it occurs and that it matters. The issue had been “hiding in plain sight”: while there were indications that loss of plasticity might be a common problem in deep learning, very little research had actually examined it.
Rahman says he was initially drawn to the topic because he kept seeing hints of it.
“I would go through a paper and notice a drop in performance in the appendices. And then another paper would report the same thing,” he said.
The research team ran a series of experiments looking for signs that deep learning systems were losing their plasticity.
They trained networks on a series of classification problems in supervised learning. In the first challenge, for instance, a network would learn to distinguish between cats and dogs; in the second, it would learn to distinguish between beavers and geese; and so on for several tasks.
They hypothesised that as the networks lost plasticity, their performance on each new task would deteriorate.
And that is exactly what happened. “We used several data sets to test and demonstrate that it might be common,” Sutton says, arguing that this shows the problem is not confined to a narrow subset of deep learning.
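To make the experimental setup concrete, here is a minimal sketch of how such a sequence of supervised tasks might be framed. It is an illustration only, not the team’s code: the task generator, network size, and training budget are all assumptions.

```python
# Illustrative sketch only (not the researchers' code): train one network on a
# sequence of binary classification tasks and record how well it learns each one.
import torch
import torch.nn as nn

def make_task(n=2000, dim=20, seed=0):
    """Hypothetical task generator: a random linear rule splits points into 2 classes."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    y = (x @ w > 0).long()          # class 0 vs class 1 (e.g. "cats" vs "dogs")
    return x, y

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

accuracies = []
for task_id in range(50):                      # a long sequence of tasks
    x, y = make_task(seed=task_id)
    for _ in range(200):                       # fixed training budget per task
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    acc = (net(x).argmax(dim=1) == y).float().mean().item()
    accuracies.append(acc)                     # loss of plasticity shows up as a
                                               # downward trend in this list
```

If the network retained its plasticity, the accuracy reached on task 50 would be roughly as high as on task 1; a steady decline across the sequence is the signature the experiments were looking for.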
Taking care of dead units
After identifying the issue, the researchers turned to whether it could be resolved. Was there a way to keep continually trained deep-learning networks learning, or was loss of plasticity an intrinsic problem?
They found promise in a technique that involves modifying backpropagation, one of the key algorithms underpinning neural networks.
Neural networks are loosely modeled on the human brain: like neurons, they are made up of units that form connections with other units. Individual units pass information from one layer to the next, and all of this activity contributes to the network’s overall output.
However, as backpropagation adjusts the network’s “weights,” or connection strengths, some units begin to compute outputs that no longer contribute to learning. These units also stop changing their outputs, so they no longer advance learning and become dead weight in the network.
Mahmood points out that up to 90% of a network’s units may die over the course of long-term continual learning. And once enough units stop contributing, the model loses its plasticity.
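One simplified way to observe this effect is to count hidden units that never activate. The sketch below is an assumption for illustration, not the team’s measurement code: it reports the fraction of ReLU units in a hidden layer that output zero for every example in a batch.

```python
# Illustrative sketch: estimate the fraction of "dead" ReLU units in a hidden layer,
# i.e. units that output zero for every example in a batch of inputs.
import torch
import torch.nn as nn

def dead_unit_fraction(hidden_layer: nn.Linear, activation: nn.ReLU, x: torch.Tensor) -> float:
    with torch.no_grad():
        h = activation(hidden_layer(x))        # activations, shape (batch, n_units)
        dead = (h.abs().sum(dim=0) == 0)       # True where a unit was zero for the whole batch
    return dead.float().mean().item()

# Example: a freshly initialized layer usually has few dead units; after long
# continual training, this fraction can grow dramatically.
layer, relu = nn.Linear(20, 64), nn.ReLU()
x = torch.randn(1000, 20)
print(dead_unit_fraction(layer, relu, x))
```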
The group named their modified method “continual backpropagation.”
According to Dohare, the main distinction is that standard backpropagation initializes the network’s units only once, at the start of training, while continual backpropagation reinitializes them throughout.
As training progresses, the method periodically selects some of the dead, low-utility units and reinitializes them with random weights. It turns out that with continual backpropagation, models can keep learning for a very long time, sometimes seemingly indefinitely.
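As a rough illustration of that idea, the sketch below reinitializes the incoming and outgoing weights of the least-active hidden units every so often during training. It is a simplification under stated assumptions, not the published algorithm, which uses a more careful utility measure and a maturity threshold for new units.

```python
# Illustrative simplification of the continual-backpropagation idea: every so
# often, reset the least-useful hidden units to fresh random weights.
import torch
import torch.nn as nn

def reinit_least_useful_units(fc_in: nn.Linear, fc_out: nn.Linear,
                              x: torch.Tensor, frac: float = 0.02) -> None:
    """Reset a small fraction of hidden units (rows of fc_in, columns of fc_out)."""
    with torch.no_grad():
        act = torch.relu(fc_in(x))                  # hidden activations, (batch, n_hidden)
        utility = act.abs().mean(dim=0)             # crude per-unit utility score
        k = max(1, int(frac * utility.numel()))
        idx = torch.topk(utility, k, largest=False).indices   # least-useful units
        fresh = torch.empty(k, fc_in.in_features)
        nn.init.kaiming_uniform_(fresh)             # fresh random incoming weights
        fc_in.weight[idx] = fresh
        fc_in.bias[idx] = 0.0
        fc_out.weight[:, idx] = 0.0                 # zero outgoing weights so reset
                                                    # units re-enter learning gently

# Usage inside a training loop (sketch): after ordinary backprop updates,
# occasionally call the reset.
# for step, (x, y) in enumerate(stream):
#     ... forward pass, loss.backward(), optimizer.step() ...
#     if step % 1000 == 0:
#         reinit_least_useful_units(fc_in, fc_out, x)
```

Zeroing the outgoing weights of a reset unit is one common design choice: the fresh unit can start learning again without abruptly disturbing the network’s current output.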
Sutton acknowledges that other researchers may come up with more practical solutions to loss of plasticity, but he says continual backpropagation shows the problem is manageable and not intrinsic to deep networks.
He hopes that by raising awareness of loss of plasticity, the team’s work will inspire other researchers to investigate the problem.
“We made this problem so obvious that it can hardly be ignored. Despite its successes, deep learning has fundamental issues that need to be resolved, and the field is growing more and more prepared to acknowledge this,” he said. “So, we’re hoping this will open up this question a little bit.”