This AI Model Never Stops Learning

News Room · Published 18 June 2025 (last updated 8:27 PM)

Modern large language models (LLMs) might write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience.

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user’s interests and preferences.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.

“The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL. Pari says the idea was to see if a model’s output could be used to train it.

Adam Zweiger, an MIT undergraduate researcher involved with building SEAL, adds that although newer models can “reason” their way to better solutions by performing more complex inference, the model itself does not benefit from this reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights, or parameters. Given a statement about the challenges faced by the Apollo space program, for instance, the model generated new passages attempting to describe the implications of that statement. The researchers compared this to the way a human student writes and reviews notes in order to aid their learning.

The system then updates the model using this data and tests how well the updated model can answer a set of questions. Finally, this provides a reinforcement learning signal that guides the model toward updates that improve its overall abilities and help it keep learning.
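To make that loop concrete, here is a minimal, hypothetical Python sketch of the process as described above. It is not the MIT team’s code: the helper names (generate_self_edit, finetune, evaluate) and the toy dictionary standing in for a model are assumptions for illustration only.

```python
# Hypothetical sketch of the loop described in the article, not SEAL's actual code.
# A toy dict stands in for a model; the helpers stand in for LLM generation,
# a weight update on the self-generated text, and a question-answering check.

import copy
import random

def generate_self_edit(model, passage):
    # Stand-in: the real system has the LLM write its own "notes" about new input.
    return f"notes about: {passage}"

def finetune(model, self_edit):
    # Stand-in: the real system updates the model's weights on the self-edit.
    model["memory"].append(self_edit)
    return model

def evaluate(model, questions):
    # Stand-in: score on held-out questions; this score acts as the reward signal.
    return sum(any(q in note for note in model["memory"]) for q in questions)

def seal_outer_loop(model, passages, questions, rounds=3):
    """Generate a self-edit, apply the update, and keep it only if
    downstream accuracy does not get worse."""
    for _ in range(rounds):
        passage = random.choice(passages)
        self_edit = generate_self_edit(model, passage)           # 1. write synthetic training data
        candidate = finetune(copy.deepcopy(model), self_edit)    # 2. update a copy of the model
        if evaluate(candidate, questions) >= evaluate(model, questions):
            model = candidate                                    # 3. reward updates that help
    return model

if __name__ == "__main__":
    toy_model = {"memory": []}
    passages = ["the Apollo program faced budget and safety challenges"]
    questions = ["Apollo"]
    print(seal_outer_loop(toy_model, passages, questions))
```

In the real system, the fine-tuning step changes the model’s own weights, and the evaluation score serves as the reinforcement learning reward that teaches the model to produce more useful self-edits over time.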

The researchers tested their approach on small and medium-sized versions of two open-source models, Meta’s Llama and Alibaba’s Qwen. They say the approach ought to work for much larger frontier models too.

The researchers tested the SEAL approach on text as well as a benchmark called ARC that gauges an AI model’s ability to solve abstract reasoning problems. In both cases they saw that SEAL allowed the models to continue learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT who oversaw the work, says that the SEAL project touches on important themes in AI, including how to get AI to figure out for itself what it should try to learn. He says it could well be used to help make AI models more personalized. “LLMs are powerful but we don’t want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve indefinitely. For one thing, as Agrawal notes, the LLMs tested suffer from what’s known as “catastrophic forgetting,” a troubling effect in which ingesting new information causes older knowledge to simply disappear. This may point to a fundamental difference between artificial neural networks and biological ones. Pari and Zweiger also note that SEAL is computationally intensive, and it isn’t yet clear how best to schedule new periods of learning. One fun idea, Zweiger mentions, is that, like humans, perhaps LLMs could experience periods of “sleep” where new information is consolidated.

Still, for all its limitations, SEAL is an exciting new path for further AI research—and it may well be something that finds its way into future frontier AI models.

What do you think about AI that is able to keep on learning? Send an email to [email protected] to let me know.
