In a well-received closing keynote at QCon London 2025, independent AI consultant Hannah Foxwell challenged the common narrative that AI makes us more productive and helps us do more, arguing instead that AI agents should be designed to eliminate mundane work rather than replace human jobs.
Foxwell’s talk, titled ‘Making AI Agents Work For You (And Your Team),’ explored how AI is reshaping work and how organisations are structured. ‘When our work changes our teams change, and as our teams change our organisations will change,’ she noted in her talk summary.
Foxwell, who founded the ‘AI For the Rest of Us’ conference, began by addressing the industry-wide focus on productivity statistics used to explain the benefits of AI. ‘The productivity benefits go from 20% to 40% to 60%. And honestly, I find this conversation so boring,’ she stated. ‘I don’t think there’s ever been a point in my career where I got to the bottom of my to-do list. There are tasks that languish at the bottom and never get done, and that is OK because you did the right thing!’
Instead of pursuing productivity based predominantly around an individual’s capacity, Foxwell advocated for a fundamental shift in thinking: ‘For most people and most teams, I think maybe we should be focusing on doing the right work and doing it better.’
Foxwell demonstrated a team of AI agents working together effectively, drawing on her experience with companies deploying agent teams in highly regulated environments. She built a team of four complementary agents – a developer, a reviewer, a coordinator, and an infrastructure expert – and showed how they could collaborate on a real-world problem: prioritising fixes for software vulnerabilities.
‘A team of agents with specific roles will perform better than a single agent,’ Foxwell explained. ‘If you allow them to put on multiple hats, then you’re going to maybe get a bit more of a one-dimensional answer. But if you give an agent a specific role to play in that dynamic you get better results.’ With some agentic platforms offering up to 400 integrations and tools that can be given to an agent, she emphasised the importance of constraining agent capabilities: ‘Rather than thinking about it as a job that you want the agent to do, think about it as a series of tasks that you’re going to break down as small as possible.’ She cited research showing that limiting agents to one to three tools is optimal for efficiency and reliability. Foxwell also referenced the ‘Principle of Least Privilege’, urging the audience to remember the basics of information security when working with AI agents.
She criticised the anthropomorphic view of AI agents, suggesting that we treat them as microservices rather than as substitutes for people, and that it is the architectural communication between these microservices that delivers reliable, repeatable outcomes.
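The role-per-agent pattern Foxwell described can be sketched in a few lines. The four roles come from her demo; the class, the tool names, and the enforcement logic below are illustrative assumptions, not her implementation:

```python
from dataclasses import dataclass

MAX_TOOLS = 3  # the one-to-three-tools limit cited in the talk


@dataclass
class Agent:
    """A narrowly scoped agent: one role, a small fixed tool set."""
    role: str
    system_prompt: str
    tools: list

    def __post_init__(self):
        # Enforce the tool limit up front, rather than hoping the
        # model copes with hundreds of available integrations.
        if not 1 <= len(self.tools) <= MAX_TOOLS:
            raise ValueError(
                f"{self.role}: {len(self.tools)} tools; "
                f"keep it between 1 and {MAX_TOOLS}"
            )


# Hypothetical tool names, standing in for real integrations.
coordinator = Agent("coordinator", "Rank vulnerabilities and route work.",
                    ["read_cve_feed"])
developer = Agent("developer", "Propose a patch for the assigned CVE.",
                  ["read_repo", "write_patch"])
reviewer = Agent("reviewer", "Review proposed patches for correctness.",
                 ["read_repo"])
infrastructure = Agent("infrastructure", "Assess deployment impact.",
                       ["read_deploy_config"])

team = [coordinator, developer, reviewer, infrastructure]
```

Treating each agent as a small, single-purpose service with an explicit contract is what makes the microservices analogy hold: the constraint lives in the architecture, not in the prompt.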
Don’t give your agent 400 tools, it’s going to get really confused. One to three tools per agent is the safe and efficient number of tools to give to your agent.
Foxwell was unequivocal about the need for human oversight in all the work that agents perform. ‘Keep a human in the loop. We are just at the very early stages of figuring out and understanding architectures around agents that work in the real world. And so having a human in the loop protects you from some of the worst consequences of giving an agent too much autonomy.’
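A minimal sketch of the human-in-the-loop guardrail she described might look like the following; the function and callback names are hypothetical, standing in for whatever approval workflow a team already has:

```python
from typing import Callable, Optional


def require_approval(action: Callable[[], object],
                     description: str,
                     approve: Callable[[str], bool]) -> Optional[object]:
    """Run an agent-proposed action only after explicit human sign-off.

    `approve` is a stand-in for a real review step (a CLI prompt, a
    ticket, a pull-request review) that returns True once a person has
    looked at `description` and agreed.
    """
    if approve(description):
        return action()
    # Denied: nothing executes without a human decision.
    return None
```

The point is architectural rather than clever: every consequential action the agent proposes funnels through one narrow gate that a person controls.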
Foxwell also thoughtfully considered the prevailing economic realities of the AI tool explosion. She referenced economist Kate Raworth’s ‘doughnut economics’ model, warning about AI’s potential environmental impact and its potential to displace and change jobs. She urged the audience to keep this front and centre by deploying AI thoughtfully and sustainably, recommending smaller, more specialised language models where possible rather than large language models, which carry a higher environmental cost, and advocating deployment in data centres powered by renewable energy.
This technology might reduce the need for humans in certain roles, which may result in job losses… Large language models also take an enormous amount of energy to train and consume a lot of energy to run.
Foxwell continued her vision by urging people to use AI agents primarily to handle what she termed ‘toil’ – work that scales linearly and is ‘manual, repetitive, automatable, tactical, devoid of enduring value.’ This would free humans to focus on novel problems requiring creativity, originality, and work involving customer and user empathy.
I don’t want to ask them to do the fun stuff. I want to give the toil to my agents. I want to win back some of those hours in my day. I don’t want to do more. I want to do less.
She expressed surprise at survey results showing 65% of managers expect AI to help with business strategy: ‘If an agent is helping you with business strategy, then it’s probably helping your competitors as well. You’re not going to differentiate. Maybe this is table stakes – the minimum bar for what a strategy should be.’
Having a human in the loop protects you from some of the worst consequences of giving an agent too much autonomy in your business. So don’t forget that bit. When you’re starting out with this, keep a human in the loop.
Foxwell concluded by challenging the audience to resist the pressure for ever-increasing productivity and instead use AI to improve working conditions and sustainability. ‘Let’s stop this obsession with squeezing more from people. Let’s stop accepting that burnout and stress related health issues are just normal, and let’s offload some of this burden,’ she urged.
My hope and my ambition is to elevate people out of the mundane, allowing them to do the right work and do it better; the work that needs a human touch and delivers direct value to our customers, whilst protecting our planet. Honestly I do believe it is possible if we choose these things as our goal. Agents are here to make people incredible, not replace them, and that’s the mission I can get on board with.