Imagine an AI that can open a Word document, write text in it, organize your files or even book a plane ticket without you lifting a finger. This is exactly what Microsoft is preparing with Copilot Actions, a new component of its assistant integrated into Windows 11.
Your next colleague is called Copilot
According to the company, the goal is to transform Copilot into “ digital collaborator » able to interact with your local applications and data « like a human would “. Concretely, the agent will be able to click, scroll windows or enter text to accomplish complex tasks in place of the user. The promise of this automation is to make users’ lives easier… but this technology also poses an embarrassing question: to what extent will we let an AI act for us on our own files? The precedent of Windows Recall – a function accused of collecting too much personal information – has left its mark. This time, Microsoft wants to avoid any missteps.
Copilot Actions will not be enabled by default. The test is limited to members of the Windows Insider program, who must manually enable the experimental mode hidden in the settings (System > AI Components > Agent Tools). The group also ensures that it has locked the system: the agents must be digitally signed by a trusted source, which makes it possible to block any malicious agent; they operate in an isolated space called “Agent workspace”, with a separate virtual office and restricted access to sensitive files (Documents, Downloads, Images); any access extension must be explicitly authorized by the user.
« The agent starts with limited permissions and can only access the resources you give it “, explains Dana Huang, vice president of Windows Security. “ It has no power to modify your device without your intervention, and this access can be revoked at any time. » Internally, Microsoft says it has formed “red teaming” teams, in other words security researchers responsible for attacking the system to detect its flaws before it is offered to the general public.
Despite these precautions, security experts remain on guard. Agentic AI introduces risks that are still poorly controlled, such as so-called “cross-prompt injection” attacks, where trapped content could divert the agent to exfiltrate data or install malware. Microsoft promises finer privacy controls before any official release. But the issue goes beyond simple technique: it is a question of knowing whether users are ready to entrust their files, their identifiers and their digital gestures to autonomous software.
🟣 To not miss any news on the WorldOfSoftware, subscribe on Google News and on our WhatsApp. And if you love us, .
