Making AI Agents Work For You (and Your Team)

News Room | Published 22 August 2025 | Last updated 9:15 AM

Transcript

Foxwell: My name is Hannah. I’m going to talk about AI agents. The pace of change in this corner of technology is absolutely bonkers at the moment. If I stood up here and tried to teach you how to build an AI agent yourself, it would probably be out of date by the time you left the building, and so I’m not going to do that. What I’m going to do instead is do my best to ground this talk in things that I know to be true and that I believe to be enduring: things I know to be true about technology transformation, things I know to be true about people and teams and how work happens every day.

This is a talk about AI agents, and so I’m going to start by asking you to forget about all of the productivity gains and the benefits that you may have been promised from chat-based tools like ChatGPT or Claude. People are citing massive numbers. The productivity benefits go from anything from 20% to 40% to 60%. I find this conversation so boring. I don’t want to be made better and faster. I don’t think there’s ever been a point in my career where I got to the bottom of my to-do list.

I go to work every day, and I choose the most important and most impactful work that I need to do, and if something is important enough, it will bubble to the top of the list. There are tasks that will inevitably languish at the bottom of that list, and they will never get done, and that is ok, because you did the right thing. When people talk about the productivity gains of generative AI, there’s this unspoken sentiment that we should be doing more. Get through that whole list. Get to the very bottom. Do 20% more work and do it faster. Do you ever feel like that? There is never a shortage of work to be done, and so I think we’re asking the wrong question. For most people and most teams, I think maybe we should be focusing on doing the right work, and doing it better. Very simple.

The Future Is Already Here: It’s Just Not Evenly Distributed

At the end of last year, I decided I wanted to explore the world of AI agents. I’ve been very fortunate that I landed in a company where we’re building agent teams, not just the individual agents, but teams of agents that work together, and teams of teams: workflows and organizations of agents that work together. It’s not just demo-ware, some of these agent teams are deployed into critical national infrastructure. They’re deployed into defense. These are some of the most highly secure, regulated environments that you can deploy this bleeding-edge technology into. I really believe that the future is here already. I’ve seen what these agents are doing, but I don’t believe it’s touching everyone yet. Just as a level set, give me a little wave if you work alongside an AI agent or a team of AI agents today.

There are some hands, but most of you don’t, and so this is something that’s really new. It’s unfamiliar territory to all of us. My belief is that this will become more and more normal. That’s something that I believe. That’s why I’m really interested in the domain of AI agents and the impact it has on us as humans. Like I said, I don’t find conversations about my individual productivity, the amount that I, Hannah, can produce in a day, very interesting. I don’t think that’s the right goal. What I do find interesting is how work will change, because as our work changes and our responsibilities change, our teams will change, and our organizations will evolve. We will need to rethink some of the things that we hold to be true about how work happens.

I think that is fascinating. I also think this is a transformation that will impact all of us eventually, depending on the role that you’re in. It’s impacting some people already. I also think it will be the experimental innovators who are the very first people to trash their org chart and rebuild it from the ground up.

We’re talking about org charts, we’re talking about org design, we’re talking about how to make agents work for you and your team, but the thing that you absolutely do not want to do with these agents is go and replace people with them. You don’t go and rebuild your org design with little agent teams and think you’re going to be successful. That’s not how this works. That’s a really bad idea. Don’t do it. You can’t even really replace a person with an agent. They’re not very good at that. They’re not very good with lots of breadth, lots of responsibility. You can, however, quite successfully give a task to an agent, and so that’s where we’re going to start. You can give a task to an agent. You don’t give them a job.

Actually, when I was prepping for this talk, I was thinking back to all the jobs that I did very early in my career, before my career even started, and I identified one job that could be replaced by an agent, and it was the few weeks one summer that I spent doing data entry. I did data entry for one of my friends’ companies, and we needed to digitize a load of records. I basically sat in a cupboard with a stack of paper and keyed things into a computer while singing along to the radio until the boss shut the door and asked me to be quiet again. That’s the only job really that I could think of that I’d done that could be replaced by a single agent on its own.

Agent Team Design

What I’m going to talk about first today is how you design your agent teams, because, one, we don’t want to replicate our current organization as agents. That’s not going to work for us. What we can do is get some surprisingly great results if we have a group of specialist agents working together, and so that’s the example I’m going to talk you through to get started. Because it’s a technical track, I wanted a relatable problem, and the one I came up with is maybe a little bit nerdy. I need to design a system for prioritizing and fixing vulnerabilities in my applications. We already have a scanning tool, but the problem is that we have too many vulnerabilities to manually review all of them one by one. Maybe this is a problem that I personally have spent a little bit too much time thinking about, but maybe some of you have patched a vulnerability in your lives. Maybe you understand the problems of doing that at scale. This is the problem that I set out to solve. What I did first is I created a developer agent.

I asked the developer agent to give me a task breakdown of all the things that I might need to do to build a solution to help me with this problem. You saw the prompt. This is what I fed my developer agent. What it came back with was a little bit vanilla, but there are some subject matter things in this. It came back with, you need to integrate with your scanning tool, no-brainer. You need to automate the fixing process. You need to design a prioritization algorithm. I was like, ok, yes, seven steps, it’s not bad. It’s not particularly useful. I could have written this myself. I prompted it to be a bit more specific about what it wanted me to do, and the task it came back with was developing an algorithm that considers CVSS score, exploitability, impact on business, age of vulnerability, and dependency usage, which I think is much more useful advice, to be fair. It took that extra poke from me, the human, to get it to that level of detail.
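An algorithm along those lines is easy to sketch. The weights, field names, and score ranges below are illustrative assumptions (nothing here comes from the talk); a real system would tune them against your own backlog:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cvss: float             # 0.0-10.0 base score from the scanner
    exploitability: float   # 0.0-1.0, e.g. likelihood of exploitation
    business_impact: float  # 0.0-1.0, criticality of the affected app
    age_days: int           # days since the finding was first reported
    dependency_usage: float # 0.0-1.0, how widely the dependency is used

def priority_score(v: Vulnerability) -> float:
    """Blend the five factors into a single 0-100 priority score.

    The weights are purely illustrative, not prescriptive.
    """
    age_factor = min(v.age_days / 365, 1.0)  # old unfixed findings creep upward
    return 100 * (
        0.35 * (v.cvss / 10)
        + 0.25 * v.exploitability
        + 0.20 * v.business_impact
        + 0.10 * age_factor
        + 0.10 * v.dependency_usage
    )

# Triage a backlog: highest score first, review only the top of the list by hand.
backlog = [
    Vulnerability(9.8, 0.9, 1.0, 30, 0.8),   # critical, likely exploited
    Vulnerability(5.3, 0.1, 0.2, 400, 0.1),  # low-risk, old
]
ranked = sorted(backlog, key=priority_score, reverse=True)
```

The point of scoring rather than filtering is that nothing is silently dropped; low scorers simply languish at the bottom of the list, which, as above, is fine.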

Feedback loops are really important when working with agents, but it doesn’t have to be you, the human, that provides that feedback loop. Introducing my second agent. This is my quality assurance feedback loop agent, the reviewer agent. This agent’s mission is to get to the greatest level of detail it can from the developer agent who is building the task list to build a solution to help me prioritize my endless vulnerabilities in my application portfolio. When I’m building a reviewer agent, I tend to ask it to do things like, be really detail-oriented. I give it a persona that’s like, you have a broad technical understanding, but you are very detail-oriented, and you like people to be specific. This reviewer agent does this work for me. It goes and pokes the developer agent, and it says, I need more. You get nice little interactions like this.

For example, the reviewer has come back with, I think you need to define the data model for storing vulnerability information, and you need to break it down to subtasks such as schema design, data types, relationships. You can see that adding that reviewer agent is already automatically getting us to a higher quality output, an extra level of detail. Thank you, reviewer. Those two agents working together have provided a much more extensive and usable task list than the original developer agent on its own.
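That developer-reviewer dynamic is just a control loop around two role prompts. The sketch below shows the shape of it under stated assumptions: `stub_llm` stands in for a real chat-completion client, and the "APPROVED" convention is invented for illustration:

```python
from typing import Callable

# An agent is a persona (system prompt) bound to a model call.
Agent = Callable[[str], str]

def make_agent(persona: str, llm: Callable[[str, str], str]) -> Agent:
    return lambda message: llm(persona, message)

def plan_with_review(developer: Agent, reviewer: Agent,
                     task: str, max_rounds: int = 3) -> str:
    """Developer drafts; reviewer either approves or demands more detail."""
    draft = developer(task)
    for _ in range(max_rounds):
        feedback = reviewer(draft)
        if feedback.startswith("APPROVED"):
            break
        draft = developer(f"Revise this plan. Feedback: {feedback}\n\n{draft}")
    return draft

# Deterministic stand-in for the model so the loop logic is visible:
# the reviewer approves only once the plan mentions subtasks.
def stub_llm(persona: str, message: str) -> str:
    if "reviewer" in persona:
        return "APPROVED" if "subtasks" in message else "Break each task into subtasks."
    if "subtasks" in message:  # the developer has seen the reviewer's feedback
        return "1. Integrate scanner\n   subtasks: API auth, result parsing, schema design"
    return "1. Integrate scanner\n2. Design prioritization algorithm"
```

With a real model behind `llm`, the `max_rounds` cap matters: it bounds cost and stops two stubborn agents from debating forever.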

The next agent I added to my team was the coordinator agent. This is a prerequisite, because, if I was to start adding more agents, they would start to get a little bit confused about who is doing what, and so my coordinator agent is there to facilitate the conversation between the other agents. That’s all that they’re there for. You get lovely little interactions like this. The manager says, developer, please provide the first draft of the plan. It then goes to the reviewer and says, please review the task list and provide feedback and suggestions for improvement, which it does. It did that in the last iteration, but now it’s being facilitated by that third agent. Then the reviewer says, thank you, developer, for the updated draft. I’ll now review the revised plan and make sure that it meets quality standards and user expectations. They’re all very polite to each other, these little agents. I love it when they play nicely. I put on my manager hat.

Then, finally, once the reviewer agent is happy with the input, it sends a message like this to the manager that says, the updated task list is comprehensive and well-structured. The inclusion of subtasks provides clarity and ensures that each main task is broken down into manageable components. It goes, manager, print the plan.

Then me, the user, I get the plan. I’m happy. It’s at a much finer level of detail. We’ve introduced the coordinator agent, but we’ve still only got two agents mainly interacting on the task. The manager is not adding anything. It’s facilitating at this point in time. Actually, I also wanted to point out that this anecdotal successful pattern of agent dynamics is backed up by some academic research into what’s called multi-agent debate. Sparse communication patterns between agents can improve accuracy and can significantly reduce cost. That’s the main thing, because you make sure that the team is focused. There are no distractions. There’s no duplication. You could hardcode this so that the communication paths between the agents are fixed, but what we found is that prompting the agents about how they are expected to communicate with each other is not a bad approximation of this as well. You get a much more efficient, lower-cost interaction between your agents, and you can actually get better accuracy out of it as well.
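The hardcoded variant, fixing the communication paths rather than prompting for them, can be as small as a routing table. This is a hypothetical sketch; the agent names and hub-and-spoke topology are assumptions, but the idea is that messages can only flow along declared edges, which keeps the team focused and cuts token cost:

```python
# Sparse, fixed communication paths: everything goes through the manager.
ALLOWED_PATHS = {
    ("manager", "developer"),
    ("manager", "reviewer"),
    ("developer", "manager"),
    ("reviewer", "manager"),
}

def route(sender: str, recipient: str, message: str, transcript: list) -> None:
    """Relay a message only if the edge is declared; otherwise refuse."""
    if (sender, recipient) not in ALLOWED_PATHS:
        raise PermissionError(f"{sender} may not message {recipient} directly")
    transcript.append((sender, recipient, message))

log: list = []
route("manager", "developer", "Please provide the first draft of the plan.", log)
route("developer", "manager", "Draft attached.", log)
```

A developer-to-reviewer side channel would raise `PermissionError` here; in a prompted version the same constraint is only a suggestion the agents usually honor.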

The next agent I add into the mix is my infrastructure agent. Why am I adding an infrastructure agent? Because, of course, the plan we’ve come up with completely ignores all the infrastructure requirements. Have you ever met a development team that did that? No, I’ve never met a development team that ignored all of the infrastructure requirements when planning a new project, so I added a subject matter expert agent into the mix, and I gave them the prompt that they were to augment the list that was created by a developer with the infrastructure-related task. This is all coordinated by the manager agent.

The manager agent says, “Infrastructure expert, please review the updated plan and provide your expert guidance on the infrastructure requirements. Ensure all necessary infrastructure tasks are captured, and provide the effort and complexity ratings for each task”. That’s what happens. It’s another iteration through that planning. You get new requirements. Your task list grows. You’re getting a more complete picture of what it might take to address that user’s problem.

I haven’t done anything more than that original prompt that I sent in. I’ve got a problem, help me solve it, agents. You get things like this. Set up an IAM role. Set up a policy. Implement encryption for data at rest. Configure security monitoring. Make sure you’ve got monitoring and logging for operability. You get things like that which you didn’t have before, which is quite nice as well. You implement your feedback loop a second time. The reviewer has reviewed the developer’s work. It doesn’t need a separate reviewer to review the infrastructure work. I’ve told the reviewer that they’re a broad technologist. It doesn’t matter. They start asking good questions of the infrastructure experts. How will your IAM roles be managed and audited?

Agents and Tooling (Principle of Least Privilege)

What you end up with is an even more detailed list. Again, me as the user, I haven’t done any more interaction than say, I’ve got this vague problem, software-y, will you help me solve it? These four agents role-playing these specific roles with these communication patterns that I’ve outlined can actually produce a pretty good spec for something that you might build to solve that problem. Overall, a team of agents with specific roles will perform better than a single agent.

If you allow them to put on multiple hats, then you’re going to maybe get a bit more of a one-dimensional answer. If you give an agent a specific role to play in that dynamic, like you would do if you were in a team meeting, let’s bring the infrastructure and the security and the app people together, and let’s solve this problem together, if you do that, you get better results. This is all built off a vanilla LLM. We haven’t even given these agents a tool yet. We haven’t given them a tool they can use to actually fulfill their objectives.

All of this knowledge about how they might approach this problem is all internalized somewhere within the weights of the LLM. You might want a documentation agent added to this team so that the output isn’t in a chat interface, that it’s a document that you can go and review and edit yourself. You might want to augment one of your subject matter experts with contextual knowledge. You might be like, this is the toolset we use, these are the architectural patterns, these are our security policies, and if it’s got access to that knowledge and context, it can give you a higher-quality answer. You might want to augment your SME with data. You might want to allow it to query a database, and to bring together data with the context to solve the problem.

If you don’t have that context within your organization, you can add a researcher agent with access to the internet. It can go and it can scrape the internet and see whether there are answers to be had in the vast realms of the internet as well. Importantly, you need to limit the number of tools and responsibilities that you give to each agent if you want reliable and quality results out of them. Rather than thinking about it as a job that you want the agent to do, think about it as a series of tasks that you’re going to break down as small as possible. There’s a great article, and actually it’s a series of articles, by Allen Chan, an IBM distinguished engineer who’s done a lot of experiments with how to optimize agentic teams.

One of the articles includes this, how many tools is too many tools? You look at some of the more popular agentic platforms, and you can choose from one of 400 integrations and tools that you might want to give to your agent. Don’t give your agent 400 tools, it’s going to get really confused. One to three tools per agent is the safe and efficient number of tools to give to your agent. After that, it can slow down execution and consume more tokens, and it’s just not going to be a very successful agent. If you’re interested in dabbling in this space, you might want to do these experiments yourselves, but anecdotally, talking to people who build agents, this is the limit. Give your agent a task to do, optimize it for that task, and give it the lowest number of tools possible to actually achieve what it’s going to achieve.
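That one-to-three-tools heuristic is easy to turn into a guardrail on whatever config drives your agent platform. A toy check, with invented agent and tool names:

```python
# Flag agents whose tool list exceeds the one-to-three-tools heuristic.
MAX_TOOLS_PER_AGENT = 3

agents = {
    "developer": ["code_search"],
    "infrastructure": ["terraform_plan", "cost_estimator"],
    "researcher": ["web_search", "pdf_reader", "summarizer", "translator"],  # too many
}

def overloaded(agents: dict[str, list[str]]) -> list[str]:
    """Return the names of agents whose tool count exceeds the limit."""
    return [name for name, tools in agents.items()
            if len(tools) > MAX_TOOLS_PER_AGENT]
```

Running a check like this in CI, before an agent roster is deployed, turns an anecdotal rule of thumb into an enforced constraint.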

There’s obviously another reason why we don’t give agents massive amounts of access to all of our data and all of our applications: the principle of least privilege. You want to limit the blast radius of a single agent. You want to provide feedback loops. You want to provide guardrails. Don’t forget this bit. It might feel really tempting to make this all-powerful agent, but it’s always useful to think about what happens if this goes wrong. The principle of least privilege still applies to agents. Do not forget that. As I’ve been working with these agents, I originally put on my little manager hat, and I was like, I’m going to build a team to do a thing.

Then, I had to take it off again, because, actually, an agent might be more like a microservice than it is a person. You can give it a task to do. You can give it a function to perform. You can prompt it or engineer it and give it access to things, and it can do that one thing really well. It’s then the architecture that you build of these agents as well as other software processes and automations that actually deliver an outcome for you that is reliable and repeatable and usable. I’ve forgotten the one final member of my agent team, the supervisor. It’s that pesky old human, but it is important to keep the human in the loop.

We are just at the very early stages of figuring out and understanding the architectures around agents that work in the real world, and so having a human in the loop protects you from some of the worst consequences of giving an agent too much autonomy in your business. Don’t forget that bit. When you’re starting out with this, keep a human in the loop. Then, as we gain more data about the reliability of that agent in performing its tasks, then we can start to step back. Don’t do that on day one. Don’t give it too much power on day one. Keep that human in the loop.
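A human-in-the-loop gate can be as simple as a risk threshold on proposed actions. The threshold, the risk values, and the `approve` callback below are all placeholders for whatever review channel your team actually uses; the point is only the shape of the control flow:

```python
from typing import Callable

def execute_with_oversight(action: str, risk: float,
                           approve: Callable[[str], bool],
                           threshold: float = 0.3) -> str:
    """Auto-execute low-risk actions; hold high-risk ones for a human."""
    if risk <= threshold:
        return f"auto-executed: {action}"
    if approve(action):  # blocks on a human decision in a real system
        return f"human-approved: {action}"
    return f"blocked: {action}"
```

Stepping back later, as the talk suggests, then just means raising `threshold` as reliability data accumulates, rather than rearchitecting anything.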

Is This the Future We Want?

This is where I pause the talk for a little reality check. The agent team I just showed you, I built it in a couple of hours. It’s not perfect. It might be a useful one to support a software development team, or consultancy. It did a pretty good job of planning a thing given some vague requirement. There are jobs outside of software development, there are lots of jobs, where maybe planning is the only thing that the humans in that team do. There are jobs where the only thing that they do is maybe research. There are jobs that only do data entry. I try not to be too much of an AI hype girl, even though I am excited about this technology, and it is fun to play with. What I want to say is that I am actually a passionate advocate for people, and this technology has huge potential. It could also have a huge impact on our world.

Our economies and our businesses today, they’re grounded in the belief that growth is our goal. We must grow our profits. We must grow our margins. We must produce more. We must consume more. We must be more efficient. We must do more with less. We must keep doing that forever. We must always be growing. It’s a flawed belief system, because nothing can grow sustainably forever. The economist, Kate Raworth, introduces a new way of thinking that centers our economy not on endless financial growth, but instead the thriving of humanity and the protection of our home, planet Earth. She proposes a social foundation where every human can thrive.

Every human should have access to food, water, education, health. She also proposes that there’s an ecological ceiling whereby the resources that are provided by our planet, the finite resources, can be consumed in a sustainable way, protecting us from environmental collapse and preserving the really delicate balance of life on this Earth. This is called doughnut economics for this reason. There’s a zone in between these levels which she calls the safe and just space for humanity.

Right now, there are too many people living below that social foundation, and we are also overshooting our ecological ceiling. We are doing irreversible harm to our planet all the time. We’ve seen again and again the ways that the endless pursuit of growth and profit has done harm. We only need to look at our water companies that are spewing sewage into our rivers, making them uninhabitable, and doing so with the goal of protecting shareholder value. You don’t need to look very far to see that pursuit of growth doing harm to people and doing harm to our planet.

Why do I pause my talk when I was getting all hyped up about AI agents to talk about this? It’s because this technology might reduce the need for humans in certain roles, which may result in job losses, and it may push more people below that social foundation. It’s also because large language models take an enormous amount of energy to train and consume a lot of energy to run.

As the integration and consumption of this technology sprawls across every single aspect of our digital lives, the carbon footprint of this technology grows. We’re already putting pressure on this. We’re already exceeding our planetary boundaries of how much we can consume sustainably. How do we seize this opportunity with this really awesome new technology, but how do we do it intentionally, and how do we do it sustainably?

Firstly, back to basics, are we solving the right problem, and does this problem need to be solved with AI or with an AI agent? Could we solve it another way with a simpler software solution? Always look for the simplest method, even if it’s not the most fun. Are you using a sledgehammer to crack a nut? As technologists, I think everyone has a responsibility to make sure that we’re deploying this technology and deploying these tools in the right places.

Secondly, if you have decided that an agent or an agentic solution is the right thing for this problem, then you don’t always need a large language model to power it. You can use small language models, specialist models that are fine-tuned for a specific task. You can use compressed models, distilled models. You can make sure that you’re choosing the smallest possible thing for the task at hand. There is a real temptation because these large language models can do almost anything for you to just throw that at the problem, and maybe you do when you’re in the experimental phase, but that’s not the end game. The environmental impact of that is too high.

Thirdly, we have control of where we run these solutions. Not all data centers are made the same. Some data centers run on 100% clean and renewable energy. If you have control of where you run your solution, choose one of those. Really simple. If we don’t, if we delegate the choice of where these things run to someone else, then we do risk the carbon footprint of our agentic solutions spiraling out of control, and us accelerating the environmental degradation that we need to be reversing.

Think About Value (Real-World Agents)

Real-world agent stuff. Let’s go back to the real world. I’m going to tell you a story about a situation that I had at work. I was part of a small company, and we were acquired by a very large company. One of my jobs in running my team was to ensure that we had contracts go out to customers. We’d send them a proposal, and then we had to put that proposal in writing in the form of a contract. It’s not rocket science. All of our contracts were very standardized.

At the small company, I could get that contract out of the door, into my customers’ hands, ready to sign within a day. Then I was acquired by a big company, and suddenly, it took weeks and months of chasing and hounding the 17 different teams that all had to have their hands and eyes on that contract before it went out through the door. That taught me quite a valuable lesson about, actually, the heavy burden that bureaucracy has on organizations. That’s kind of what I want to talk about now. I want to talk about value. Has anyone ever done any Lean Six Sigma training, or read anything about it?

One of the most controversial concepts when I was going through Lean Six Sigma training was the concept of business non-value add, and people really didn’t want their work to be categorized into that category of non-value add. It was really controversial. Managers were in the room getting quite heated about it. No, I do add value, I do. Anything that doesn’t deliver enduring value to your end user, that’s non-value add work. It may be important, and it may be necessary, but it doesn’t add value to your customer.

I was reflecting on this, and I was like, where might agents have the best impact in our organizations? I started to think about flow. I started to think about how typical large organizations are still today made up of these silos. We do talk about value streams in the tech world, and we are lucky in that a lot of the things that we create go on to serve users directly. We are on that value flow. There are a lot of folks who aren’t, and who don’t, and their work is necessary, but it doesn’t add enduring value for your customers. Those flows tend to be very siloed and arduous. I wanted to give you a really nice definition of toil from the Site Reliability Engineering book, because this is one that I quote all the time. Toil is any kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as the service grows. I was like, yes, it’s not just us in engineering that suffer toil, is it? I think there is toil everywhere. This is the toil I want to give to my agents, isn’t it? I don’t want to ask them to do the fun stuff.

I want to give the toil to my agents. I want to win back some of those hours in my day. I don’t want to do more, I want to do less. Which is why this report is absolutely bonkers to me, because everyone seems to be thinking about this differently. Maybe I’m wrong, and you can tell me afterwards if you think I’m wrong, but AI is really good at this stuff on the left-hand side. Agents are fantastic at that. They will not get bored of that bureaucracy. They will just do it 24-7. Sixty-five percent of managers think that these agents and AI are going to help them with business strategy.

If an agent is helping you with business strategy, then it’s probably helping your competitors as well. You’re not going to differentiate. Maybe it’s like table stakes. Maybe it’s like, this is the minimum bar for what our business strategy should be. Now, how do we make it better? Maybe it’s a tool we can use in that context. This is hilarious to me: the stuff that these agents are really good at is the stuff that people are ignoring, and the stuff that really drives business growth and value and innovation is the stuff that people want to offload. I don’t get it. Maybe you do, but I don’t get it. I’m starting to think now about where we deploy agents into the real world to help relieve us as humans of our toil.

Thinking about the flow, all the flows of business process that are necessary and have to exist in our organizations but do not deliver enduring value to our customers, what could we do with those? We could probably solve them with some agent teams and some smart software. That would be great. What if every process was triaged as it was entering our organization, and we were like, is it a simple request? Is this a standard request? Is there a standard result that happens as a result of this? Can we solve that with a combination of traditional automation and orchestration software and some AI agents? Can we do that? Even if we only handle 50% of the toil, wouldn’t that be awesome? Wouldn’t that be incredible? The stuff that is genuinely novel that needs a human touch, just route that to some humans. That’s what we’re really good at. Do that.
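That triage step might look like the sketch below. The keyword classifier is a deliberate oversimplification standing in for a real classifier or an LLM-based router, and the category names are invented:

```python
# Route standard requests to automation/agents; novel ones to people.
STANDARD_KEYWORDS = {"password reset", "invoice copy", "access request"}

def triage(request: str) -> str:
    """Classify an incoming request as 'automation' or 'human'."""
    text = request.lower()
    if any(keyword in text for keyword in STANDARD_KEYWORDS):
        return "automation"   # agent team + orchestration handles it end to end
    return "human"            # genuinely novel: needs a cross-functional team

queue = ["Password reset for j.doe", "Custom contract clause for new market"]
routed = {request: triage(request) for request in queue}
```

Even a crude router like this makes the talk's point concrete: if only half the toil matches a standard pattern, that half never reaches a human queue at all.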

What Is the Minimum Viable Human?

I was thinking, what happens next? I’ve gone and done this transformation program. I’ve gone, here is all the toil and the repetitive work, and it’s gone. Thank goodness. What is the minimum viable human that we have next at the end of this process? I started to think about what the work of those humans might look like in this vision that I’ve painted for myself. This is all plucked out of my imagination. Do those humans actually need to break out of their silos? This is where I talk about reimagining your org chart. Do they need to break out of their silos? Because this work that the humans are doing is not repetitive. It’s novel. It’s the exceptions, and it’s not the norm.

Do we actually need to be more collaborative, and do we need better communication skills? Do we need cross-functional teams with a cross-section of skills to solve these problems? Depending on what your business process is, this might look like finance, it might look like operations, it might look like legal. Can we break down those silos and those handoffs and actually get shit done a little bit faster? Does it look like this? Does it look like, actually, the simple requests come in, and they’re just processed through, job done, the toil is gone. Then there’s a group of specialists who solve the novel problems, and that is their job, and maybe that is the boundary of the team. Maybe it’s not in those silos based on their specialisms. Maybe that is the team. Do we need to become more effective at problem-solving, cross-functionally?

As I said before, do we need actually more creativity and originality to differentiate ourselves in a world where everybody can answer any question using an LLM? I heard a story about a marketing agency who do use large language models to help them build customer strategies, but what they do is they ask the large language models what they would do. Build a strategy for me, and they say, that’s what we’re not doing, because that’s what all of our competitors are probably going to do. We need to take a step up. We need to differentiate ourselves from this base level. Do we deploy humans for things that we’re actually genuinely good at, which is creativity and originality?

Then we talk about our customers. It’s like, how do we serve our customers better? These people who are not doing the toil work, what could they be doing? Could they be deepening our relationships with our customers, uncovering their unmet needs with empathy? Could they be building relationships? These are things that you don’t give to an agent. I actually find it quite funny that right now, so many teams are putting AI agents in front of their customers in customer support roles, and that seems really unintuitive to me because that’s a place where you build trust and you build reputation, and where empathy and collaboration and communication is actually really important to deliver an amazing service. Maybe these companies just want to deliver an ok service efficiently. Maybe that’s fine. There is scope to differentiate on those customer interactions, and those things that only humans can do really well.

Conclusion

Is this the future that we want, where repetitive, mundane work, the work of no enduring value, is completed by teams of agents? For me, yes. That's the future I want. What is the minimum viable human? We don't actually know yet, but we probably need to throw out all of the things that have built our understanding of org design, especially at scale, and come up with something better. I look at it like this. Throughout my whole career, I've been asked for more, more, more.

As soon as I switched from using a desktop to a laptop, my work started to come home with me, more, more, more. I have Slack on my phone, and I'm available 24-7, more, more, more. I know people who use ChatGPT in voice mode to write their emails while they're driving on the school run in the morning, more, more, more. Twenty percent productivity boost with generative AI, more, more, more. No. Let's just not do that. Let's stop this obsession with squeezing more from people. Let's stop accepting that burnout and stress and health issues are just normal, and let's offload some of this burden. If your goal is to reduce headcount, make people redundant, and you are willing to sacrifice sustainability for profit, then you can do that. That is a choice you can make. People make that choice every day. It's not going to get us into this green spot, the safe and just space for humanity.

My hope and my ambition is to elevate people out of the mundane, allowing them to do the right work and do it better. Do the work that needs a human touch and that delivers direct value to our customers: happy, thriving colleagues, sustainable business growth, happier customers, all the while protecting our planet. I do believe it is possible, but it is only possible if we choose those things as our goal. Or to put it more simply, agents are here to make people incredible, not to replace them. That's the mission I want to get on board with.

 
