Google recently announced support for third-party tools in Gemini Code Assist, including Atlassian Rovo, GitHub, GitLab, Google Docs, Sentry, and Snyk. The private preview enables developers to test the integration of widely used software tools with the AI assistant directly within the IDE.
Offering functionality similar to that of market leader GitHub Copilot, Gemini Code Assist provides AI-assisted application development with code assistance, natural language chat, code transformation, and local codebase awareness. Launching these tools in private preview brings real-time data and external application access directly into the coding environment, enhancing functionality while reducing distractions. Ryan J. Salva, senior director at Google, and Prithpal Bhogill, group product manager at Google, write:
Recognizing the diverse tools developers use, we’re collaborating with many partners to integrate their technologies directly into Gemini Code Assist for a more comprehensive and streamlined development experience. These partners, and more, help developers stay in their coding flow while accessing information through tools that enhance the SDLC.
According to the documentation, the supported third-party tools can convert natural language commands into parameterized API calls, based on the OpenAPI standard or on a YAML file provided by the user. GitHub Copilot Enterprise similarly includes extensions to reduce context switching. Richard Seroter, senior director and chief evangelist at Google Cloud, comments:
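As a rough illustration of the mechanism, and not an actual Google or partner schema, a parameterized operation in an OpenAPI document might look like the hypothetical fragment below. The assistant could then map a request such as "show my open issues" onto the operation and its `status` query parameter:

```yaml
# Hypothetical OpenAPI fragment for illustration only — not a real partner schema.
openapi: 3.0.3
info:
  title: Example Issue Tracker API
  version: "1.0"
paths:
  /issues:
    get:
      operationId: listIssues
      summary: List issues, optionally filtered by status
      parameters:
        - name: status        # e.g. "open", inferred from the user's request
          in: query
          required: false
          schema:
            type: string
      responses:
        "200":
          description: A list of issues matching the filter
```

In this scheme, the natural language interface needs only the schema to know which operation to call and which parameters it may fill in.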
Google often isn’t first. There were search engines, web email, online media, and LLM-based chats before we really got in the game. But we seem to earn our way to the leaderboard over time. The latest? Gemini Code Assist isn’t the first AI-assisted IDE tool. But it’s getting pretty good!
With coding assistance being one of the most promising areas for generative AI, Salva and Bhogill add:
Code Assist currently provides developers with a natural language interface to both traditional APIs and AI Agent APIs. Partners can quickly and easily integrate to Code Assist by onboarding to our partner program. The onboarding process is as simple as providing an OpenAPI schema, a Tool config definition file, and a set of quality evals prompts used to validate and tune the integration.
This is not the only recent announcement affecting Code Assist: support for Gemini 2.0 Flash is another significant one. Powered by Gemini 2.0, Code Assist now offers a larger context window, enabling it to understand more extensive enterprise codebases. According to Google, the new LLM aims to enhance productivity by providing higher-quality responses at lower latency, allowing users to "stay in an uninterrupted flow state for longer." In "The 70% problem: Hard truths about AI-assisted coding," Addy Osmani warns:
AI isn’t making our software dramatically better because software quality was (perhaps) never primarily limited by coding speed (…) What AI does do is let us iterate and experiment faster, potentially leading to better solutions through more rapid exploration (…) The goal isn’t to write more code faster. It’s to build better software. Used wisely, AI can help us do that. But it’s still up to us to know what “better” means and how to achieve it.
Code Assist currently supports authentication to partner APIs via the OAuth 2.0 Authorization Code grant type, with Google planning to add support for API key authentication in the future. Pricing is based on per-user, per-month licenses, with monthly or annual commitments. Licenses range from 19 USD to 54 USD per user per month. A Google form is available to request access to the private preview of Code Assist tools.
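The Authorization Code grant the article mentions is the standard redirect-based OAuth 2.0 flow: the user is sent to the partner's authorization endpoint, and the returned code is exchanged for an access token. The sketch below, with hypothetical endpoint URLs and credentials, shows the two requests a client constructs; it is a minimal illustration of the grant type, not Code Assist's actual implementation:

```python
# Minimal sketch of the OAuth 2.0 Authorization Code grant (RFC 6749).
# All URLs, client IDs, and secrets here are hypothetical placeholders.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://partner.example.com/oauth/authorize"   # hypothetical
TOKEN_ENDPOINT = "https://partner.example.com/oauth/token"      # hypothetical

def build_authorization_url(client_id: str, redirect_uri: str,
                            scope: str, state: str) -> str:
    """Step 1: redirect the user's browser to the partner's authorization endpoint."""
    params = {
        "response_type": "code",   # asks for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # opaque CSRF token, echoed back on redirect
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

def build_token_request(client_id: str, client_secret: str,
                        code: str, redirect_uri: str) -> dict:
    """Step 2: form the POST body that exchanges the code for an access token."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }

url = build_authorization_url("my-client-id",
                              "https://ide.example.com/callback",
                              "issues:read", "xyz123")
```

Because the grant never exposes the client secret to the browser, it suits server-mediated integrations like these; the API-key authentication Google plans to add would instead pass a static credential with each call.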