Anthropic is giving its new Claude 3.5 Sonnet model the ability to control a user’s computer and access the internet. The move marks a major step in generative AI models’ capabilities—and raises questions about AI companies’ ability to properly mitigate the risks of more autonomous AI.
In a series of example videos posted Tuesday, Anthropic shows users directing the model with text prompts. In one example, a user asks Claude to help with the logistics of a trip to watch the sunrise from the Golden Gate Bridge.
AI companies have been stressing a desire to push large language models to become more “agentic” and autonomous. Doing so means extending the ability of the AI to control not only its own functions but also external devices.
“Instead of making specific tools to help Claude complete individual tasks, we’re teaching it general computer skills—allowing it to use a wide range of standard tools and software programs designed for people,” Anthropic said in a statement on X.
The new computer control capabilities are being rolled out to developers through an API, as a public beta. Anthropic says it wants to collect feedback on the performance and usefulness of the new capabilities.
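At launch, Anthropic exposed computer use through its Messages API as a beta feature, gated by a beta flag and a special "computer" tool definition. The sketch below assembles such a request body, based on the identifiers Anthropic published for this beta (`computer_20241022` and `computer-use-2024-10-22`); treat them as subject to change, and note that sending the request requires an API key and the live service.

```python
# Sketch of a computer-use request payload, based on Anthropic's public
# beta documentation at launch. The tool type "computer_20241022" and the
# beta flag "computer-use-2024-10-22" are the published beta identifiers;
# both may change as the beta evolves.

def build_computer_use_request(prompt: str) -> dict:
    """Assemble the JSON body for a Messages API call with the computer tool."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",
                "name": "computer",
                # Screen size the model targets when emitting
                # click and scroll coordinates.
                "display_width_px": 1280,
                "display_height_px": 800,
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_computer_use_request(
    "Find the sunrise time at the Golden Gate Bridge tomorrow."
)
print(request["tools"][0]["type"])  # computer_20241022
```

In practice the request is sent with an `anthropic-beta: computer-use-2024-10-22` header, and the model replies with tool-use actions (clicks, keystrokes, screenshots) that the developer's own agent loop must execute and report back.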
Anthropic acknowledges that Claude 3.5 Sonnet's ability to use computers is imperfect and that the model still makes mistakes, especially with scrolling and dragging, but it expects this to improve rapidly in the coming months.
With greater power comes greater responsibility. Anthropic offers explicit guidance on mitigating the risks of giving an AI control of a computer. In its user guide, the company advises against giving Claude access to sensitive data such as user passwords and recommends limiting the websites the AI can access.
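Anthropic's guide leaves enforcement of such limits to the developer. One common approach is a domain allowlist checked before any navigation action the model emits is actually executed; the sketch below is illustrative, and none of its names are part of Anthropic's API.

```python
# Illustrative allowlist gate -- not part of Anthropic's API. An agent
# loop would call this before executing any browser-navigation action
# the model proposes, refusing anything outside the approved domains.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"weather.gov", "maps.google.com"}  # example allowlist

def is_navigation_allowed(url: str) -> bool:
    """Permit navigation only to an explicitly allowlisted host or subdomain."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS or any(
        host.endswith("." + domain) for domain in ALLOWED_DOMAINS
    )

print(is_navigation_allowed("https://weather.gov/sf"))    # True
print(is_navigation_allowed("https://bank.example.com"))  # False
```

Keeping the check in the developer's own execution loop, rather than relying on the model to police itself, means a misstep by the AI is blocked before it reaches the network.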