The latest Android Studio Otter feature drop introduces several new features that make it easier for developers to integrate AI-powered tools into their workflows, including the ability to choose which LLM to use, enhanced agent mode through device interaction, support for natural language testing, and more.
LLM flexibility allows developers to select which LLM powers AI features in Android Studio. While the IDE includes a default Gemini model, developers can now connect a different remote model, such as OpenAI’s GPT or Anthropic’s Claude, or run a local model using providers like LM Studio or Ollama. Local models are particularly useful for developers with “limited internet connectivity, strict data privacy requirements, or a desire to experiment with open-source research”, Google says, though they require significant local RAM and disk space to run effectively.
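As a sketch of the local-model route, Ollama serves pulled models through an OpenAI-compatible HTTP endpoint on localhost; the model name below is an illustrative choice, and the exact Android Studio settings path for registering the endpoint is not spelled out in the announcement:

```shell
# Download a local model (illustrative choice; any Ollama-hosted model works)
ollama pull llama3.1

# Start the Ollama server if it is not already running
# (listens on localhost:11434 by default)
ollama serve

# Android Studio can then be pointed at the local
# OpenAI-compatible endpoint, e.g. http://localhost:11434/v1
```

Note that larger models trade quality against the RAM and disk requirements the announcement warns about, so smaller quantized variants may be the practical option on developer laptops.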
Developers who prefer Gemini can now use their own Gemini API key to access more advanced versions, as well as an expanded context window and higher quotas, which can be important when running long coding sessions in agent mode.
Android Studio Otter also enhances agent mode by letting it “see” and interact with apps. This includes deploying and inspecting an app on a device or emulator, debugging the app’s UI by capturing screenshots and analyzing what is on screen, and checking Logcat for errors.
Another major feature in Android Studio Otter is support for natural language testing through “journeys”, which allows developers to define user journey tests in plain English, with Gemini converting those instructions into executable test steps.
This not only makes tests easier to write and understand, but also enables developers to define complex assertions that Gemini evaluates based on what it “sees” on the device screen. Because Gemini reasons about how to achieve the stated goals, these tests are more resilient to subtle changes in an app’s layout, significantly reducing flakiness when running against different app versions or device configurations.
The IDE provides a dedicated XML-based editor to manage these journeys, along with a test panel that displays screenshots of each action alongside Gemini’s reasoning for performing each step.
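While the announcement does not document the journey file format in detail, a definition along the following lines illustrates the idea; the element and attribute names here are assumptions for illustration, not the actual schema:

```xml
<!-- Hypothetical journey file: element names are illustrative, not the real schema -->
<journey title="Add item to cart">
    <!-- Each step is a plain-English instruction that Gemini turns into executable test actions -->
    <step>Open the app and navigate to the product list</step>
    <step>Tap the first product and add it to the cart</step>
    <!-- Assertions are evaluated by Gemini based on what it "sees" on screen -->
    <assert>The cart badge shows 1 item</assert>
</journey>
```

The key point is that both actions and assertions are expressed as intent rather than as view IDs or coordinates, which is what makes the resulting tests robust to layout changes.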
Android Studio now also supports the Model Context Protocol (MCP), allowing the AI agent to connect to remote servers like Figma, Notion, and Canva. For example, by connecting to Figma, Agent Mode can access design files directly to generate more accurate UI code, reducing the need to manually copy-paste context between tools.
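MCP hosts are typically configured through a JSON file listing the servers the agent may reach. The snippet below follows the `mcpServers` convention common across MCP clients, but the exact file name, location, and supported fields in Android Studio, as well as the server package name, are assumptions here:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-mcp-server"]
    }
  }
}
```

Each entry tells the host how to launch (or reach) an MCP server, after which the agent can call the tools that server exposes, such as reading a Figma design file.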
As a final note, this update introduces a dedicated UI for reviewing every file edited by the coding agent, letting developers view code diffs and keep or revert changes individually or all at once. Additionally, multiple chat threads can now be managed, so different tasks, such as UI design and bug fixing, can proceed in parallel without losing context.
Otter Feature Drop 3 includes many more enhancements than can be covered here, such as an improved App Links Assistant, automatic Logcat retracing, and more. Be sure to check the original announcement for the full details.
