Generative artificial intelligence video pioneer Lightricks Ltd. has responded to recent updates from OpenAI, Google LLC and others with today’s release of what it claims is the fastest video generation model in the business.
The new model is called LTX-2, and in addition to the usual improvements in video quality and fidelity, it's able to do something no other model can do: create an entirely new, original six-second video in as little as five seconds, the company said. Compared with OpenAI's Sora 2, which has a standard processing time of one to two minutes, that's incredibly fast, making it suitable for Lightricks' target audience of professional marketing teams and budding filmmakers.
The idea with LTX-2 is to accelerate creative workflows by enabling users to iterate on their ideas quickly: generate a video, see how it looks, tinker with it using additional prompts, check the result and repeat the process until they're satisfied.
The five-second videos will be generated in Full HD resolution, but once users have settled on a final concept, they’ll be able to increase the quality to 4K resolution at 48 frames per second. In this case, it’ll take a little longer to generate, but will ensure the end result is of “cinematic fidelity,” the company said.
Lightricks' focus on professional creators sets LTX-2 apart from OpenAI's Sora 2, which debuted last month alongside a new, TikTok-style application for iPhone users. While Sora 2's demonstration videos were certainly impressive, the launch of the Sora app suggests OpenAI has a consumer and social media influencer audience in mind.
Speed isn't the only improvement in LTX-2, which is the latest in a long line of open-source LTX models the company has produced. It builds on Lightricks' original LTXV-2B model, which launched in 2024, and the long-form LTXV-13B model that followed earlier this year, which enabled creators to generate videos of up to 60 seconds.
For instance, Lightricks said LTX-2 is now able to generate audio alongside video for the first time, so users can add a synchronized soundtrack of dialogue during the creative process. This eliminates the hassle of creating the audio separately and stitching it into the video after the fact.
The company is also providing more tooling for creators. Though the base model is open-source, it can also be accessed in Lightricks’ LTX Studio filmmaking platform, which provides users with comprehensive editing tools for AI videos. The company said it’s adding new features such as depth and pose control, support for video-to-video generation and alternative rendering techniques.
Cost-effective, open-source
As with its previous models, LTX-2 has been designed to run on consumer-grade graphics processing units, in line with the company's policy of making its models as accessible as possible. The company also claims it's one of the most cost-efficient models available, with prices starting at just four cents per second for HD-quality video outputs and rising to 12 cents per second for 4K video at maximum fidelity.
“Diffusion models are reaching a critical point that will redefine the field of computer graphics,” said Lightricks co-founder and Chief Executive Zeev Farbman. “LTX-2 represents that shift. It’s the most complete and comprehensive AI creative engine we’ve built, combining synchronized audio and video, 4K fidelity, flexible workflows and radical efficiency.”
LTX-2 will no doubt be compared with Sora 2, but its closest competitor is probably Google’s newly released Veo 3.1 model, which is available to access inside Flow, an AI filmmaking tool that’s similar to LTX Studio. Google released Veo 3.1 last week, and it has some compelling features of its own, including support for audio generation and editing capabilities such as “ingredients to video,” “frames to video” and “extend.”
However, LTX-2 has an advantage in that it's fully open-source, not just open access. Lightricks said it will release the model, along with core components including its pretrained weights, training datasets and some editing tools, on GitHub next month. For those who don't want to wait, it can be accessed now through LTX Studio and Lightricks' web-based application programming interface, or through select partners such as ComfyUI, Replicate and Weavy Inc.