World of Software
News

Together AI enhancements make AI fine-tuning faster and easier – News

News Room
Published 21 April 2025, last updated 5:51 AM

Together Computer Inc. today launched a major update to its Fine-Tuning Platform aimed at making it cheaper and easier for developers to adapt open-source large language models over time.

The startup, which does business as Together AI, operates a public cloud optimized for AI model development. The new features support fine-tuning from within a browser, bypassing the need to install a Python software development kit or make calls to an application programming interface.

The company also added support for direct preference optimization fine-tuning and the ability to start tuning jobs from the results of previous runs with a single command. It also adjusted pricing to lower training costs.

Together AI said the updates reflect its belief that AI models shouldn’t be static but should grow alongside the applications they serve. The browser-based interface allows developers to launch fine-tuning jobs without writing any code. Previously, such tasks required extra setup and technical know-how. Developers can upload datasets, define training parameters and track experiments, lowering barriers to continuous fine-tuning.

“While there’s no inherent quality improvement, since the underlying method is identical to fine-tuning via the API, the browser-based flow eliminates the need for scripting and streamlines the entire process into an intuitive, no-code experience,” said Anirudh Jain, fine-tuning product lead at Together AI. “This makes fine-tuning approachable to nontechnical users and saves around 50% of the time compared to the manual API approach.” The Python SDK and API are still available but not necessary, he said.

Preference-based training

Direct preference optimization is a method of training language models on preference data: the model is shown both a preferred and a less-desired response to the same prompt. Instead of mimicking a fixed answer, the model learns to favor responses based on human feedback using a contrastive loss function, which pulls preferred responses closer and pushes dispreferred ones further away in the model's representation space.
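As a minimal numeric sketch, the standard DPO objective (from the original DPO paper, not Together AI's implementation) scores each preference pair by how much more the tuned model prefers the chosen response than a frozen reference model does:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Arguments are summed log-probabilities of the chosen and rejected
    responses under the model being tuned (pi_*) and under a frozen
    reference model (ref_*); beta limits drift from the reference.
    """
    # How much more the tuned model favors each response than the reference does.
    chosen_margin = pi_chosen - ref_chosen
    rejected_margin = pi_rejected - ref_rejected
    # Logistic loss on the scaled gap: minimized when the model raises the
    # chosen response and lowers the rejected one.
    logits = beta * (chosen_margin - rejected_margin)
    return math.log(1.0 + math.exp(-logits))  # equals -log(sigmoid(logits))

# The wider the gap in favor of the chosen response, the smaller the loss.
good = dpo_loss(-5.0, -9.0, -6.0, -6.0)  # model prefers the chosen answer
bad = dpo_loss(-9.0, -5.0, -6.0, -6.0)   # model prefers the rejected answer
assert good < bad
```

Because the loss depends only on log-probability ratios, no separate reward model is needed, which is the simplification discussed below.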

“Supervised fine-tuning helps the model learn what to say, while DPO teaches it what not to say,” Jain said. SFT is preferred when labeled input/output pairs are available; DPO is preferred when the training data contains preferences from human raters or A/B tests.
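The distinction in Jain's comment maps to two different record shapes in the training data. The key names below are illustrative only, not a specific platform schema:

```python
# SFT record: a labeled input/output pair the model should imitate.
sft_record = {
    "prompt": "Summarize: the meeting moved to 3 PM.",
    "completion": "Meeting rescheduled to 3 PM.",
}

# DPO record: the same prompt with a preferred and a rejected response,
# e.g. gathered from human raters or an A/B test.
dpo_record = {
    "prompt": "Summarize: the meeting moved to 3 PM.",
    "chosen": "Meeting rescheduled to 3 PM.",
    "rejected": "There is a meeting at some point.",
}
```

Datasets for either method are typically stored as one such record per line (JSONL).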

Unlike traditional reinforcement learning techniques, DPO doesn’t require building a separate reward model, making it simpler, faster and more stable to implement. Developers can fine-tune models to align more closely with the way users interact with applications to improve accuracy and trustworthiness.

Continued training enables developers to resume fine-tuning from a previously trained model checkpoint. That feature is useful for refining models over time or running multi-stage training workflows that combine methods like instruction tuning and preference optimization. It’s invoked by referencing the job ID of an earlier training run and continuing to build from where the previous task left off.

“This is significantly more efficient and cost-effective, allowing for faster iteration and model improvement,” Jain said.

Another enhancement to its platform allows developers to assign different weights to messages in conversational data, essentially downplaying or ignoring certain responses without removing them from the training context entirely. A new cosine learning rate scheduler offers more flexibility and fine-grained control over training dynamics.
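The cosine schedule itself is a standard technique; a generic formulation is below (the exact parameters Together AI exposes may differ):

```python
import math

def cosine_lr(step, total_steps, max_lr=1e-4, min_lr=1e-5):
    """Cosine learning-rate schedule: decays smoothly from max_lr to min_lr.

    The rate starts at max_lr, falls slowly at first, fastest near the
    midpoint, and tapers gently into min_lr at the end of training.
    """
    progress = step / total_steps
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

lr_start = cosine_lr(0, 100)    # max_lr
lr_mid = cosine_lr(50, 100)     # midpoint: (max_lr + min_lr) / 2
lr_end = cosine_lr(100, 100)    # min_lr
```

Compared with a fixed rate or a linear decay, the smooth taper at the end tends to stabilize the final steps of fine-tuning, which is the kind of fine-grained control the scheduler option targets.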

Updates to the platform’s data preprocessing engine have improved performance by up to 32% for large-scale training jobs and 17% for smaller ones, the company said.

Together AI is also now offering pay-as-you-go pricing with no minimums in an effort to make it easier for small teams and independent developers to experiment with customized LLMs. Prices vary depending on the model size and training method.

The platform currently supports fine-tuning for popular open-weight models, including Llama 3, Gemma and DeepSeek-R1 variants. The company said it plans to support larger models such as Llama 4 and future DeepSeek versions.

Image: News/DALL-E
