Microsoft’s AI agents will be “a new class” of agents that operate as “independent users within the corporate workforce,” according to a Microsoft 365 Roadmap document in which the company lays out its plans for agentic AI, a technology that has become a major trend.
Each agent will have its own identity, dedicated access to the organization’s systems and applications, and the ability to collaborate with humans and other agents. “These agents can attend meetings, edit documents, communicate via email and chat, and perform tasks autonomously,” the document states.
Microsoft will sell these “agent users” in the M365 Agent Store and make them visible in its Teams collaboration tools. Rich Gibbons, a Microsoft licensing specialist, says he has seen additional documentation provided to M365 administrators that mentions a license called “A365” (which he believes refers to a product called “Agent 365”) and that indicates administrators must assign the required A365 license when approving an agent. No additional Microsoft 365 or Teams license is required.
Researchers who have had access to this A365 licensing documentation explain that the agents will have their own email address, a Teams account, an entry in the business directory (Entra ID, formerly Azure AD), and even a place on the organizational chart.
Microsoft documents suggest that the agents will be released in late November. With the company’s annual Ignite conference starting next week, Microsoft is all but certain to address a topic that, as noted above, has become the biggest trend in artificial intelligence. Naturally, its AI agents will be touted as ideal for improving business productivity and profitability.
Analysts disagree on how to forecast the cost of running Microsoft’s AI agents, as the company increasingly adopts a consumption-based pricing model that is inherently much harder for customer organizations to predict. If the AI agents work autonomously, how are companies supposed to predict usage and consumption?
Beyond licensing and cost, another big question is how an organization will manage these agents. If they can join meetings and send emails and messages, what happens when they misbehave? They could send sensitive data to the wrong people, provide incorrect information, or send strange or offensive messages.
How can such behavior be prevented, monitored, and acted upon? These are the questions these AI agents raise as they are deployed alongside a human workforce.
