Now, it’s important to remember that this is an experimental model. Google is still working out the kinks, so don’t be surprised if you encounter a hiccup or two. Some features, like file uploads, aren’t available just yet. Think of it as a sneak peek of what’s to come.
Even though it’s still in the experimental stage, Gemini 2.0 Flash is already showing a lot of promise. Google says it runs at twice the speed of Gemini 1.5 Pro while outperforming it on key benchmarks, which should translate into a more helpful and responsive Gemini assistant. Google plans to make the 2.0 Flash model generally available to developers in January, with more model sizes on the horizon. In the meantime, it’s encouraging users to take the experimental model for a test drive and share their feedback.
This update comes after a similar rollout for iPhone users, who also gained access to the 2.0 Flash Experimental model. And there’s more on the way: mobile users can look forward to “Deep Research” access early next year, which will bring advanced reasoning and long-context capabilities to the table. I’m especially curious to see how Deep Research performs when it launches; it sounds like it could be a game-changer for anyone who needs to do some serious research.