Sigma Browser OÜ Friday announced the launch of Eclipse, its privacy-focused web browser, which features a local artificial intelligence model that doesn’t send data to the cloud.
Numerous browser companies are jumping on the AI bandwagon and incorporating major AI models into the core user experience. Examples include Gemini in Google LLC’s Chrome and AI model integration in Mozilla Corp.’s Firefox. AI model developers also offer their own specialized AI-native browsers, including Comet from Perplexity AI and Atlas from OpenAI Group PBC.
All of these browsers send user queries to cloud-based AI systems to produce answers and generative content.
In contrast, Eclipse from Sigma embeds a local large language model that can function offline and keeps all of the user’s data, questions and chats on the device. The company says this approach eliminates hidden behaviors or backdoors that could alter answers or leak user information through third-party services.
“AI has become incredibly powerful, but it has also become centralized and expensive,” Sigma cofounder Nick Trenkler said. “We believe users shouldn’t have to trade privacy or pay ongoing cloud costs to access advanced AI.”
The company said Eclipse’s bundled LLM is unfiltered, with no ideological or content-based restrictions, meaning it won’t unduly bias responses. The design choice reflects Sigma’s intent to give users full control of their AI experience, without limits on topics or perspectives.
The release also includes local PDF processing, letting users analyze and work with documents directly on their own machines.
Sigma’s Eclipse isn’t the first browser to support local LLMs. In 2024, Brave Software Inc.’s browser added a “bring your own model” capability for its Leo AI assistant, enabling straightforward integration with locally run LLMs, though the process can be somewhat technical because it involves installing Ollama or another local AI inference provider.
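For readers curious what “bring your own model” looks like in practice, here is a minimal sketch of querying a locally running Ollama server over its default HTTP API. It assumes Ollama is installed and listening on its default port 11434 with a model already pulled; the model name `llama3` and the helper `ask_local_model` are illustrative, not part of Brave’s or Sigma’s products.

```python
import requests

# Ollama exposes a local HTTP API on port 11434 by default.
# Assumes a model has already been pulled, e.g. `ollama pull llama3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model; nothing leaves the machine."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the privacy benefits of local LLMs."))
```

Because the endpoint is on localhost, prompts and answers never traverse the network, which is the core of the privacy argument these browsers make.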
To run local models effectively, especially midsized models of about 7 billion parameters, hardware requirements typically include 16 to 32 gigabytes of system memory and a reasonably recent graphics processing unit, such as Nvidia Corp.’s entry-level RTX 3060 at a minimum, though a card closer to an RTX 4090 is recommended. Larger AI models need more memory and higher-performance GPUs.
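Those figures roughly follow from a common rule of thumb: weight memory is about the parameter count times the bytes per parameter at a given precision, plus some allowance for the KV cache and activations. The sketch below applies that rule; the 20% overhead figure is an illustrative assumption, not a measured value.

```python
def estimate_model_memory_gb(params_billions: float, bytes_per_param: float,
                             overhead: float = 0.2) -> float:
    """Rough footprint: params * bytes/param, plus an illustrative
    ~20% allowance for KV cache and activations."""
    weights_gb = params_billions * bytes_per_param  # 1B params * 1 byte ~= 1 GB
    return weights_gb * (1 + overhead)

# A 7-billion-parameter model at common precisions:
for label, bytes_pp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"7B @ {label}: ~{estimate_model_memory_gb(7, bytes_pp):.1f} GB")

# Prints roughly 16.8 GB at FP16, 8.4 GB at 8-bit and 4.2 GB at 4-bit --
# consistent with a 12 GB RTX 3060 handling quantized 7B models while
# full-precision weights push toward a 24 GB card like the RTX 4090.
```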
Bundling a local-first LLM, so users don’t have to supply one themselves, could make the browser more appealing. The company said the release is a step toward more transparent, user-controlled AI, letting users maintain privacy and accessibility without giving up performance and capability.
Image: Pixabay
