
AI models have a huge flaw: they must be completely rebuilt each time they are updated.

25/8/2024

“A solution to continual learning is literally a billion-dollar question.”

A recent study suggests that AI models trained with deep learning gradually lose the ability to learn new information.

The study, carried out by a group of academics from the University of Alberta in Canada, was published this week in the journal Nature. It found that AI algorithms trained with deep learning, that is, AI models such as large language models built by mining enormous volumes of data for patterns, fail to work in “continual learning settings,” in which new concepts are added to a model’s existing training.

Put another way, the results suggest that if you want to teach an existing deep learning model something new, you will most likely need to retrain it from scratch. Otherwise, the artificial neurons in its figurative head will tend toward a value of zero, and the model loses its “plasticity,” or ability to learn anything at all.
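To make the “dead neuron” idea concrete, here is a minimal toy sketch using a single NumPy ReLU layer. The layer sizes, the collapsed-bias setup, and the all-zero-output test for deadness are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_dead_relu(W, b, X):
    """Fraction of hidden units whose ReLU output is zero for every input.

    A unit that outputs zero on all inputs passes no gradient back, so it
    can no longer learn -- a toy analogue of the study's "dead" neurons.
    """
    post = np.maximum(X @ W + b, 0.0)   # (n_samples, n_units) activations
    dead = np.all(post == 0.0, axis=0)  # True where a unit never fires
    return float(dead.mean())

# Healthy initialization: weights centered at zero, so almost every unit
# fires for at least one of the 256 random inputs.
X = rng.normal(size=(256, 8))
W = rng.normal(size=(8, 32))
b_healthy = np.zeros(32)

# Collapsed state: large negative biases push every pre-activation below
# zero, mimicking a network whose units have died after long training.
b_collapsed = np.full(32, -100.0)

print(fraction_dead_relu(W, b_healthy, X))    # near 0.0
print(fraction_dead_relu(W, b_collapsed, X))  # 1.0
```

Once every unit sits in the flat region of the ReLU, gradient descent has nothing to update, which is the mechanism behind the plasticity loss the authors describe.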

In an interview with New Scientist, primary study author and computer scientist Shibhansh Dohare of the University of Alberta stated, “If you think of it like your brain, then it’ll be like 90 percent of the neurons are dead.” “There’s just not enough left for you to learn.”

The researchers also note that training complex AI models is an expensive and time-consuming process, a serious financial problem for AI companies that already burn cash at exceptionally high rates.

“When the network is a large language model and the data are a substantial portion of the internet,” according to the study, “then each retraining may cost millions of dollars in computation.”

The phenomenon of plasticity loss also sits far apart from the envisioned “artificial general intelligence,” a hypothetical AI that would be regarded as broadly as intelligent as humans. In human terms, it would be like having to completely reset our brains each time we enrolled in a new college course, for fear of having wiped out the majority of our neurons.

So do AI companies have a promising future? Interestingly, the researchers made some progress toward tackling the plasticity problem by developing an algorithm that randomly revives damaged or “dead” artificial neurons.
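A rough sketch of what such a revival step might look like, continuing the toy NumPy ReLU layer: units that never fire get their incoming weights randomly reinitialized so they can learn again. The deadness test, layer sizes, and reinitialization scale are simplified assumptions for illustration, not the researchers' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def revive_dead_units(W, b, X, scale=0.1):
    """Randomly re-initialize units that output zero on every input.

    Loosely inspired by the study's idea of reviving dead neurons; the
    all-zero-output deadness test and the init scale are illustrative.
    Returns the number of units revived.
    """
    post = np.maximum(X @ W + b, 0.0)
    dead = np.all(post == 0.0, axis=0)          # boolean mask per unit
    n_dead = int(dead.sum())
    if n_dead:
        # Fresh small random incoming weights and a zero bias restore
        # gradient flow through the revived units.
        W[:, dead] = rng.normal(scale=scale, size=(W.shape[0], n_dead))
        b[dead] = 0.0
    return n_dead

X = rng.normal(size=(128, 8))
W = rng.normal(size=(8, 16))
b = np.full(16, -50.0)   # force every unit dead for the demo

print(revive_dead_units(W, b, X))  # all 16 units revived
```

After the revival step, the reinitialized units fire again for at least some inputs, so gradient descent can once more update them, which is the sense in which plasticity is restored.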

As it is, a workable solution remains out of reach.

In an interview with New Scientist, Dohare said that the problem of continual learning is “literally a billion-dollar question.” “A real, comprehensive solution that would allow you to continuously update a model would reduce the cost of training these models significantly.”
