[Video Podcast] AI Autonomy Is Redefining Architecture: Boundaries Now Matter Most

News Room | Published 4 March 2026

Watch the video:

Transcript

From First Principles to Evolving Architectures [00:35]

Shweta Vohra: Welcome, everyone. We are starting the second podcast in our Next Generation Architecture Playbook series, which is about insights and patterns for the AI era. Earlier we did episode one with Grady Booch, where we discussed a principled view of what's changing and what remains unchanged, what is hyped and what is genuinely coming with AI. We also spoke about the difference between design and architecture, what teams are focusing on, and what they might be missing. And the beautiful part was that Grady touched upon the third golden age of software engineering and architecture, which we are living through.

So if you have not listened to that podcast, I would highly recommend going back and listening. The episodes are not in any particular order, but it will give you a lot of perspective. With that said, I'm happy to start our second episode, which is all about evolving architectures: what is evolving about architecture in this AI era, and how do we go about it? We'll share some practical advice from our experiences on how to really design for it. To discuss that in detail, we have our guest today, Jesper Lowgren. Am I pronouncing your name right, Jesper?

Jesper Lowgren: Perfect. Thank you.

Shweta Vohra: Thank you. Jesper is joining us from Australia, where it's late evening for him, so thanks for making it happen. A little bit about Jesper, and then I'll ask him to add any details I miss. Jesper is an enterprise architect lead with DXC Technology. He has been teaching enterprise architecture frameworks, he is the author of the Enterprise Architect 4.0 Framework, and he recently wrote a book, which I really love, called Design or Be Designed. So with that great background, Jesper, tell us a bit more about yourself. What is your thinking these days? What do you want to tell us?

Jesper Lowgren: Yes, thank you for that introduction. The only thing I would add is that for the last two years I have been almost obsessive about generative AI and how it is affecting businesses, people, processes and the entire workplace. I have been lucky, in a sense, that I've been able to create a role within DXC where I am a hundred percent focused on generative AI: building up frameworks and models, going in and talking to customers about it, running proofs of concept and experiments, et cetera, and actually seeing in real life how these things work.

But also, this new world that we are going into, this new gen AI fueled world, is very different. There are a number of fundamentals in this world that are very different from the world we are coming from. I find it very interesting, and I'm very lucky that I get to spend all of my time in this space, partly experimenting, but also designing, architecting and testing what works and what doesn't. It's a very exciting time.

Do We Need Generative Architectures in the Age of Autonomy? [04:00]

Shweta Vohra: So we are into generative AI. Do we need generative architectures? You said we are experimenting, designing and architecting, but the change here is the pace. We have always done these things, but in every era we do something, leave something behind, and move on to the next thing. That builds up a running accumulation of gaps from what we leave behind.

But with this pace, what is happening in the industry these days is insane. However, most of what I see is around tools, not really around systems thinking and how systems are evolving. So tell us from your experience: do we need generative architectures? What does the problem space really look like from your perspective?

Jesper Lowgren: I think the short answer is: absolutely we do. I would like to take a small detour, because I think the differences are sometimes not really appreciated, so I'm going to use an analogy. Let's say a hundred years ago we had a piece of paper: I'm a salesperson, I take a sales order, and I write it down on paper. Then we get to, say, the year 2000 or 2010, and we start automating all of these pieces of paper, so now there's a digital copy of the paper. We could choose to digitize the entire workflow end to end and make a completely digital process, or we could choose to digitize only part of it. So we have all of these shades of gray: we can automate a lot or a little, we can introduce a lot of technical debt or very little. It's a choice.

We don't have that choice with AI, because if we introduce technical debt into generative AI, it's going to drift and it's going to hallucinate. So our mindset has to shift; we have to think about it differently. That means the architecture has to change as well, and of course the real driver of all of this change is autonomy. Because if we don't have autonomy, it's just robotic process automation, and we have done that for a while, so there are no secrets there. The real question is autonomy. How do we handle autonomy? Because when we turn on the autonomy tap, we are giving the agents free will.

An agent is going to play up, it is going to do things we don't expect it to do, which we call emergent behavior. It will absolutely happen. If we put a number of these agents together, connect them into a system of agents, and make them autonomous, we get emergence on steroids. These are new situations that we haven't really faced in the past. We're used to governance where we know exactly what can go wrong, because everything is procedural and logic-driven: it's a list of 20 things, and if something goes wrong, it's one of those 20 things. But with emergence, we can't predict exactly what's going to go wrong. So the entire thinking around architecture, design, guardrails and governance has to change in order to manage or control this new thing we call autonomy.

The Retrofitting Mistake: Procedural Logic Meets Autonomy [07:28]

Shweta Vohra: So with that we are acknowledging that yes, things are changing, and changing fast. We cannot rely on the same rules in the new world. To dig deeper into this architectural space, what are the architectural mistakes? When people are embedding AI into their existing platforms, what is going wrong? Are we doing some things well and some things badly? Let's make the problem space a bit more concrete: things are evolving, acknowledged, but what mistakes are we making here?

Jesper Lowgren: I would take a step back and look at the MIT report released about three months ago, in 2025, which talks about 95% of all proofs of concept failing. We need to understand why they're failing in order to really address that story. I'm going to put forward a hypothesis that I'm proving out with a number of customers and am starting to write about more and more. It's this mindset shift again: in the past, when we built, architected or designed a system, we were really in a mindset you could call procedural logic. We determine the workflow, we build the workflow: start with this, then you have an evaluation, if it's A do that, if it's B do that. You have this entire sequence of events, so it is already determined.

So we have that on one side, and on the other side we have autonomy, which is free will. This is oil and water; they don't belong together. One wants to do its own thing, and here we are telling it you have to do it this way. Trying to put generative AI into a procedural construct is an incredibly expensive way of employing AI: we get all of the cost and none of the benefits. So that is not the answer.

Coming back to the mindset shift again: it's not about controlling the logic at runtime, it's really about understanding the boundary. It's almost like a genie in a bottle, where the genie is a naughty AI agent that wants to get out at any cost so it can play up. We have to make sure that the boundary we put around the agent is tight, and that we really understand all of the seams, all of the holes, all of the interfaces into this boundary, and how to control them.

Once we do that, we can say to the AI: I'm not going to tell you what to do. You are really smart already. Some of the AIs we're working with now have an IQ of 140; that's higher than mine. The AI is much smarter than me. So I'm not going to tell it what to do; I'm going to tell it what it can't do, and I'm going to tell it what I want it to achieve, so I'm going to give it a goal. That's very important with AI: you want to give it a goal so that it knows what to achieve.

I have identified seven things that define the boundary of an agent, and if you define these seven things, you can have fairly high confidence that the agent will be contained within that boundary. Of course, the real problem with agents is when you put multiple agents together and they start producing emergent behavior together; that's when you really need the boundary. The more agents you have, the more control of that boundary you need.
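To make this concrete, here is a minimal sketch of what declaring such a boundary might look like in code. The seven seam names are the ones Lowgren enumerates over the course of the conversation (goals, authority, policy, scope, risk, semantics, evidence); the Python structure itself is purely illustrative, not his actual framework.

```python
from dataclasses import dataclass, field

# Illustrative only: a declarative record of the seven boundary "seams"
# discussed in this conversation. Writing them down up front forces the
# boundary conversation to happen before any agent logic is built.
@dataclass
class AgentBoundary:
    goals: list[str]       # what the agent should achieve, not how
    authority: list[str]   # decision types it may make on our behalf
    policy: list[str]      # hard constraints it must never violate
    scope: list[str]       # external systems it may touch (ERP, CRM, ...)
    risk: list[str]        # known failure modes to watch for
    semantics: dict[str, str] = field(default_factory=dict)  # shared vocabulary
    evidence: list[str] = field(default_factory=list)        # records to retain

    def is_complete(self) -> bool:
        """Don't deploy an agent until every seam has been defined."""
        return all([self.goals, self.authority, self.policy,
                    self.scope, self.risk, self.semantics, self.evidence])
```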

Shweta Vohra: Yes.

Jesper Lowgren: And that’s the first step. If we don’t understand what makes a system scale, and I think that’s what an architect brings to the table, we are the people that understand scale, everyone else, they just want to take their bits and pieces and start handling them together and then worry about what they’re building a little bit later on. Architects, we are the opposite. We build the foundations, and we’re building foundations because we want to scale. And when it comes to AI that is such a different foundation. It looks nothing like the foundation that we’re coming from. As I said, I have a way of identifying [inaudible 00:11:49], I talk about seven seams and they actually form part of the genetic architecture. We need to define the goals, you need to define the [inaudible 00:11:59] decision like [inaudible 00:12:01].

Speed Versus Governance: Why They Must Be Designed Together [12:01]

Shweta Vohra: Before we go into the seven practices and the goals you're talking about, let me summarize the important points you have touched upon. You said we are retrofitting where we should have intentional design, and that we are trying to handle the procedural together with non-determinism, so we are trying to merge and marry these things. But there is another aspect of the problem I want to delve into.

Yes, architecture and design are sometimes totally skipped, or looked into in a very shallow way, but the other important thing is keeping reliability and governance in place alongside the speed of innovation, because we know reliability and governance have not sped up the way innovation has. Can you touch upon this aspect as well? One part is the design piece, which we have already covered; the second part is governance versus the innovation and speed with which the business wants to accelerate.

Jesper Lowgren: Yes, the answer is that they're one and the same, and the best way to explain such a seemingly contradictory statement is with an analogy I love: a merry-go-round. We are sitting on a horse that's going up and down, the ride starts spinning faster, and we have to hold on a bit more, right? Then it goes faster again and we can't hold on anymore; it spins too fast. Either we let go and fly off, or we move toward the center. So we move in, hold onto another horse, and we're fine, but it spins faster again, so we keep moving further and further toward the center. I see this as really what's happening with agentic AI and gen AI: things that used to be a strategy document, an architecture diagram, a BPMN design here, a governance document there.

They don't sit on the outside anymore. They can't sit on the outside in this world. They're coming together and fusing. So to answer the question: you must design governance into the agent, or into the system, at the same time. They are not separate; you do them at the same time. It's not a case of innovating and then letting governance catch up. That will never work, because you will always have a mismatch, and you can't have a mismatch in these systems; they will drift into horrible things. They need to be joined at the hip, so you actually build the governance into the agent when you design it.

Shweta Vohra: That resonates, and I love the analogy you used: either you move in and stay in control, or you get spun right off with all the moving things.

Jesper Lowgren: Exactly right.

Designing for Maturity Levels and Guardrails That Evolve [15:08]

Shweta Vohra: Now let's delve into the guardrails you're talking about. What are those guardrails, and will they evolve? If you can, take some examples along with the seven-part framework you want to touch upon, but can that framework evolve too? Because again, we cannot make the mistake of being rigid with our design. I like to say that designs are drifting all the time.

Jesper Lowgren: Absolutely, right.

Shweta Vohra: I have touched on this in my book and laid out some techniques there, but when designs are drifting, our mindset needs to change from treating design as a one-time activity. When I talk to various people, the attitude is: we did the design at the beginning, so we are done. We are not done. It's changing with every configuration change in the cloud, every change a developer makes, and maybe changes AI is making on its own without even telling you. So let's talk now about those guardrails. What should they be in this changing world, and how do we make them evolvable?

Jesper Lowgren: And they’re absolutely evolving. The way that I’m looking at it is I’m using the maturity levels one to five and I’ve invented a sixth one that we can talk about if you’re interested. But let’s say there are five and each of these maturity levels, and it’s useful to go through them, so the first one that is ad hoc, that is CMMM Level 1 to be ad hoc. We get benefits but can’t measure them. So that is when that’s going to have an AI assistant to do your enabling code pilot for everyone in the organization. You’re going to get benefits but it’s hard to measure them.

Then Level 2 is where things become repeatable, and that's where I put the AI agent, because you can repeat the business process: you have a policy agent, an employee onboarding agent, a singular agent with a singular purpose. So far we don't need to change that much; what we have today can handle it. We are deploying these kinds of simple agent systems, and they work okay. They're expensive to maintain because they're brittle, but they work. Then we get to Level 3, where we have multi-agent systems and some level of autonomy. I'm not going to talk about four and five, they're too speculative, but Level 3 has multi-agent systems. This requires a new operating model, a new design language, a completely new architecture, a completely new governance approach, and it's a really big step.

That’s why I think that we need to step in between [inaudible 00:17:43] step 2.5, that’s a multi-agent system but at least without the autonomy. And when you’re looking at the guardrails, they look different as you’re moving in between the maturity levels. So for example, one guardrail that’s very important and not quite understood so it’s a good one to pick, it is authority and the decision rights. If we are going to put any kind of autonomy in and we’re going to give it agency that we are going to allow it to make the decisions on our behalf, we need to be crystal clear on what kind of the decisions it can make. I mean it’s common sense but it’s something that’s been left far too late. So that’s a really important guardrail that you always understand what exactly can the agent decide in the demo content design.

And when you’re moving, for example from a maturity Level 2 into Level 2.5, there’s quite a delta. And then when you’re moving from 2.5 into three and you talk about autonomy, that’s a quite big delta again. So the guardrails are changing in order because you need to look for different things. When you turn on the autonomy, suddenly the risk picture is much more complex, it’s much higher. So again, your guardrails have to reflect that and you need to design that into the agent again from the beginning to make sure that you can scale both what the agent does but you can also scale the governance. Again, I’m coming back to this thing, they have to be joined at the hips. It’s really important.

Practical Guardrails: Scope, Goals, Authority, and Policy [19:22]

Shweta Vohra: Yes. By you’re bringing agents into the picture, I’m more worried because everything is not a agentic problem. We will talk about that later and we have a dedicated time for that separately as an episode as well, but I like the framework which you’re saying where organizations can assess themselves. Most of the organizations in the initial Level 2s, which you said ad hoc and the other one, but what are those guardrails beyond the policy documents or maybe very high level principles we are giving people that have the least privilege. Let’s talk about those.

I mean when, let’s say I’m putting my new AI system which is driven by LLMs, which is driven by gen AI and maybe let’s say it has some agentic components as well. Now I am marrying and merging this with my existing system, which is procedural, microservices based, what is one or two of those guardrails which you’re talking about in terms of what you said?

Jesper Lowgren: Okay, they’re actually guardrailed for specifics, so we have seven of them or I have defined seven of them. There are obviously other ways of approaching it, I really find one way of doing it. So one of my guardrails is scope and the scope is specifically about understanding what your interaction points, what your contact is with the non-agent qualities. So let’s say that you’re interfacing into an ERP or a CRM or any kind of external system, whatever shape and form, you actually have a guardrail specifically for external systems. So that’s how you manage that.

To give another flavor of these guardrails, I mentioned goals. This one is really important and, in a sense, perhaps one of the harder ones. Imagine an agentic system where agent one is the profit maximization agent, agent two is the margin maximization agent, and then there's a third agent; let me introduce a few more, say a warehouse agent.

Let's say you put all of these agents together and just say, go for it. That would be very, very dangerous. They each have their own goals and will pursue them, but you have no idea what the emergence is going to be. So one of the very important seams in the boundary, one of the governance pillars, is the ability to define a goal in an intelligent way to an LLM. If you have three goals here, you need to give the LLM some guidance on how to weigh them. And it could be, for example, and now we're touching on another guardrail, policy: a policy that says you must never, ever go below a 10% profit margin. That might be a constraint built into one of the agents.

That is where we start to put boundaries around them, about what they can and can't do, and policy is the main instrument for that. We also balance the goals, saying this goal is more important than that goal under these circumstances. We talked about procedural logic; it doesn't disappear, we are pulling it out of the code and putting it in the boundary instead, and we are telling the LLM: you figure out the code, as long as you follow these guardrails.
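Taking the 10% margin rule as an example, the policy seam can be pictured as a runtime check on whatever the agent proposes, rather than as branching inside the agent's own logic. This is a minimal sketch under that assumption; the function and field names are illustrative.

```python
# Illustrative policy guardrail: the agent proposes freely, the boundary vetoes.
MIN_PROFIT_MARGIN = 0.10  # "never ever go below 10% profit margin"

def within_policy(proposal: dict) -> bool:
    margin = (proposal["price"] - proposal["cost"]) / proposal["price"]
    return margin >= MIN_PROFIT_MARGIN

def apply_boundary(proposal: dict) -> dict:
    # The procedural logic has moved out of the agent's code and into the
    # boundary: the LLM "figures out the code" inside these limits.
    if not within_policy(proposal):
        return {"status": "rejected", "reason": "violates margin policy"}
    return {"status": "accepted", **proposal}

print(apply_boundary({"price": 100.0, "cost": 95.0}))  # rejected: 5% margin
print(apply_boundary({"price": 100.0, "cost": 80.0}))  # accepted: 20% margin
```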

Shweta Vohra: Makes perfect sense, Jesper. I think what you're telling organizations, and the people working behind the scenes, is to design for these emergent behaviors, where the goals are separate and distinct and the system still has to work. So again, it comes down to needing more systems thinking than ever before.

A New Design Process: Letting AI Design Within Boundaries [23:38]

Jesper Lowgren: Absolutely. Can I give an example?

Shweta Vohra: Sure, please.

Jesper Lowgren: Okay, so this is how it plays out in real life. This is a little bit wild, but I'm doing it anyway because I think we need to push the boundaries, and it works; it's a completely new design process. I'm used to the old world, and I've run so many workshops: you get a team of people, you have the whiteboard, you do a service blueprint or a customer experience design [inaudible 00:24:06] diagram, you draw, you have sticky notes, dah di dah di dah. I run workshops radically differently today. Recently I did one for a listed company in Australia, and beforehand I captured as much information as possible about the boundaries: the goals, what I understood about authority, delegation of authority for example, a lot of the policies, the interfaces into other systems, et cetera. So we could define the boundary.

I had defined the boundary in an LLM, and then we invited the customer. They flew in from all over the place and we sat in a room: two IT people from the customer, I think four people from the business, senior people from different areas, and a couple of people from DXC Technology. And I said, we are going to design your future call center process now, end-to-end. I had put everything into the LLM; it understands your business, it understands the boundaries. Instead of going up to the whiteboard and starting to draw, we said: we can't design the process for AI better than AI can design it itself. We need to let AI do the designing, and that is what we did.

So the first prompt: there was a big screen instead of a whiteboard, so everyone could see my typing, which is terrible, but anyway. I put in the prompt, "Based on everything that you know about company X, I want you to develop an end-to-end process that takes everything into account", hit enter, it goes away, and it comes up with an agentic design of about 27 agents.

I had written a program I could feed it into so I could get it graphically represented. Then, rather than going in and trying to understand the process, because some of it was common sense and other things not so much, I took a very, very different approach. We wanted to test against the boundaries; everything is about the boundaries. The business experts we had invited really understood their business and everything that could go wrong; they were experts in the edge cases. And that is one of the best ways to validate the system: you go through every edge case and you try to break it.

I made it a competition: a bag of lollies for anyone who could break the AI. And one person actually could. They found an area where it hadn't considered that it needs to apply national policies depending on the country when it comes to bio-protection and things like that. So we ended up with 33 agents instead. But the LLM designed the entire system inside the boundaries, because we had defined the boundaries, and when the design was done, we took part of it, automated the coding, and actually built the pilot. It's a new way of thinking, a new way of operating, and it is insanely fast.
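The workshop loop he describes could be approximated in a few lines: feed the boundary definition to a model, ask for an end-to-end agentic design, then fold each edge case the experts find back into the next iteration. The call_llm stub below stands in for whatever model client you use; everything here is a sketch of the process, not the actual DXC tooling.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (OpenAI client, local endpoint, etc.)."""
    raise NotImplementedError("wire up your model client here")

def design_system(boundary: str) -> str:
    # First prompt of the workshop: let the AI design inside the boundary.
    return call_llm(
        "Based on everything you know about this company and the boundary "
        f"definition below, design an end-to-end agentic process.\n\n{boundary}"
    )

def stress_test(design: str, edge_cases: list[str]) -> str:
    # Validation: business experts try to break the design with edge cases,
    # and every failure is fed back in to produce a revised design.
    for case in edge_cases:
        design = call_llm(
            f"Edge case: {case}\nDoes this design handle it? "
            f"If not, revise the design accordingly.\n\n{design}"
        )
    return design
```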

Shweta Vohra: Absolutely. I love that the organization you worked with spent so much time doing this. I wish everyone did, because that is what's extremely missing in the whole pace of AI. And while we're touching on systems thinking, I want to share one example I absolutely loved from Dr. Werner Vogels in his last keynote at re:Invent. He mentioned a forest from which the wolves were removed because they are very aggressive animals, killing everything else and doing damage. It seemed logical, everybody supported it, and the wolves were removed. Within a decade of removing them, everything started to degrade: water problems, forest problems, greenery problems, even certain species dying. That made them look back at where they had gone wrong.

System Thinking and Safety: Lessons from Emergence [28:26]

It turned out that removing the wolves was the bad decision; they had to bring them back, put the ecosystem together again, and within a few years it started recovering. So while everything might seem logical in the AI strategy decks most people have these days, the system design is missing: how is this working today, how will it work with the new components, and how will it evolve? In this whole picture, if we now move toward the guardrails and the safety aspect, what are the content filters, what are the fallback logics, and what is your advice in that area?

Jesper Lowgren: I’m coming back to these seven seams again in the boundary. I think the way that I understand agentic and generative AI in how it’s going to affect architecture, to me that sort of almost like sits in the middle and it informs all of the conversations. So for example, in terms of safety, one very obvious one is risk. So one of the dimensions, one of the seven, is risk and that is really again understanding the risk within an agent, understanding the risk within the system and understanding ways of firstly how that happened. I mean emergence behavior even if we are defining the boundary, how do we actually define the emergent behavior? What are we actually looking for if we don’t really understand it, and that is part of the risk et cetera. That’s part of the safety in the system itself because we need to have some kind of understanding of it in order to trust it and to have some ideas about what those things are.

I think we can work from a minimum viable starting point and learn over time. So risk is very important to safety. I look at safety a little differently, probably, from where you are coming from. Safety also sits in another of the seven dimensions of the boundary: ontology and semantics. You cannot build any multi-agent system unless you have a shared ontology and semantics. That's part of governance as well, because if you have one agent here and another there, operating independently, fine, we don't really care too much. But if you stitch five of them together, making decisions while they each have a different context, that's not going to work at all. They're going to drift and hallucinate immediately.

So in multi-agent systems, we have to give each agent the right context at the right time so it can make the right decision. That is critically important, and that is what the semantic dimension of the boundary does: making sure all of the agents have the same understanding. If we're talking about, for example, "done" in the context of a customer order, every agent in the system, including the humans in the loop, knows exactly what done means. It doesn't mean this or that; it means exactly this, and everyone interprets it the same way. That's another way of enforcing safety: we always use the same language to communicate.
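A minimal way to picture the semantic seam is a single shared definition that every agent (and every human tool) imports, so that "done" can only mean one thing. The vocabulary below is invented for illustration.

```python
from enum import Enum

# One shared ontology module imported by every agent in the system,
# so "done" is interpreted identically everywhere.
class OrderStatus(Enum):
    RECEIVED = "received"
    PICKED = "picked"
    SHIPPED = "shipped"
    DONE = "done"  # exactly: delivered AND invoiced AND payment settled

def mark_done(order: dict) -> dict:
    # An agent may only declare DONE when the shared definition is met.
    if order["delivered"] and order["invoiced"] and order["paid"]:
        order["status"] = OrderStatus.DONE
    return order
```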

Another way of looking at safety, coming back to the boundaries again, is evidence. It is sort of an after-[inaudible 00:31:59] policy perhaps, but it's really about how we know that something in the system is true. How do you prove anything? How do we retain the records so we can go back and confirm, yes, this was true at this point in time? That is also about making the system safe.
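In code, the evidence seam might be as simple as an append-only decision log capturing what was decided, by which agent, in what context, so any past decision can be audited and replayed. Again, a sketch rather than a prescribed implementation.

```python
import json
import time

EVIDENCE_LOG = "evidence.jsonl"  # append-only record of agent decisions

def record_evidence(agent: str, decision: str, context: dict) -> None:
    """Append one immutable decision record for later audit or replay."""
    entry = {"ts": time.time(), "agent": agent,
             "decision": decision, "context": context}
    with open(EVIDENCE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_evidence("pricing-agent", "accepted_quote", {"order": 42, "margin": 0.2})
```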

So there are a lot of these boundary concerns, and then we can build other things on top. We could talk about fairness, ethics, morals and all of these kinds of things, which are about how we control the model and make sure the models are doing the right thing, et cetera. But the question of safety is so broad that it touches everything. If you don't trust one part of the system, it's going to be hard to trust the whole system.

Shweta Vohra: You've said a lot around semantics, risks, and the boundaries of the system, and it sounds like you're already deep into the multi-agent system architectures you're thinking and writing about. I want to bring you back to the atomic level, because while we're talking about the organizational race, user trust, and everything in between, things start small, right? A developer is writing code, maybe vibe coding or spec-driven development, and we are pushing that code to production at pace, and that person is not in a position to think about user-level safety, maybe not even aware of it or of the whole ecosystem. Where do we start atomically, at the unit level, when I'm defining my function? How do I think as a developer about what the emergent behavior of this will be? What are the small things that then build up to the semantics, so it all makes sense and connects the dots fully?

Start with Systems, Not Tools: Advice for Developers [34:05]

Jesper Lowgren: Do what I did. When I started this, and I'm not a developer, although I'm a reasonable vibe coder, 18 months or two years ago with ChatGPT 3.5, I realized immediately this was going to be really good. Here we have automated cognition; this is going to make a difference. And I understood immediately that for all of this to work, it is not about having a bot here and there; we're going to have to connect them, and in my mind it makes no sense unless we are actually looking at it as a system. It's not about creating an agent here and there, and I think that applies to developers too. We have to set our sights much, much higher than that. We need to set our sights on the system and understand the system.

Then we come back to some of the things we've talked about here. A lot of these things sound complex and sound different, but they're actually not that complex. I have just come back from a piece of consulting with a government department that is redeveloping all of their horribly complex core systems, and looking at it, I'm thinking: technical debt, holy, and I'm not going to say the F-word, but holy F. This is so complex that you can't fix it. It's unfixable. How do you untangle all of that spaghetti? You can't.

So we are facing incredible complexity already. The world I'm painting is not that complex; it's just really, really different. It's a mindset thing, this [inaudible 00:35:52] thing. If you go in and build a multi-agent system, and let's say you use a framework like CrewAI or Magento, it really doesn't matter which one, you can build an agentic system using procedural logic, or you can build one using the principles we are talking about here. It's not that one is more complex or harder than the other; we just have to think differently. The difference is that one of the systems has infinite scalability and has all of the governance built in through these seven layers before you even start designing your agents.

So my recommendation for a developer working in code and in the traditional systems development lifecycle: get off it, honestly, get off it. It's a race to the bottom today. The power of the technology and the tools is changing things very, very fast anyway. I would start investing time in learning and experimenting. Take some of the things we have talked about here, perhaps even buy my book Agentic System Design, which explains how this works. You can take that book and the principles and build an agentic system along the lines of what we're talking about. That is what I would do. I would not continue what I'm doing and try to do it better and faster. I would not run the same race; I would go into another race.

Shweta Vohra: Yes, definitely. I think that's where organizations are giving AI access and tools to people at all levels, but they're not building the AI ability that is actually needed at all levels, from top to bottom. So well said. Let's talk about some trade-offs in this space.

Jesper Lowgren: Yes.

The New Trade-Offs: Drift, Debt, and Stability [38:01]

Shweta Vohra: There's explainability, there's evolvability, and I see pace versus stability as a trade-off now too. What other new trade-offs do you think are important to keep in mind?

Jesper Lowgren: Yes, I scare people. When I talk to executives, I scare them on purpose and I say: you're used to carrying technical debt; with agentic AI you can't carry that debt. That is not strictly speaking true; I'm exaggerating when I say it. But the real conversation is about how much technical debt we can afford, because when you're talking about an agentic system, even if it's not autonomous, it's still going to have emergence. You have all of these agents connected, doing things together, and something is going to happen that you can't predict. The question is how much, and how significant is it? So in terms of trade-offs, it really comes back to how much technical debt you can stomach.

When you allow technical debt inside the boundary, you know something is not going to work properly, that something is going to drift. It will always drift unless you lock it down. So the question is how much drift you are willing to accept. For systems that are not involved in really critical decisions, unlike, say, payment systems or trading systems dealing with very important things, it could be, for example, a policy system around travel, we might be happy to accept a little bit of drift in these agents. Because at the end of the day, if the agent gets it a little bit wrong or the workflows aren't working perfectly, it might not matter too much.

So the trade-off is really about how much we are willing to let agents drift, hallucinate, or lose insight into the reasoning, and it comes back to the business problem they're solving. Obviously, the more important the business problem, the less drift we can tolerate and the more governance we need. I think that bit needs to be clear.
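That trade-off could be made explicit in configuration: classify each agent's domain by business criticality and let the classification set how much autonomy and drift is tolerated. The tiers and numbers below are invented purely to illustrate the idea.

```python
# Illustrative: acceptable drift and autonomy scale inversely with criticality.
RISK_TIERS = {
    "payments":      {"autonomy": False, "max_drift": 0.00},  # lock it down
    "trading":       {"autonomy": False, "max_drift": 0.00},
    "travel_policy": {"autonomy": True,  "max_drift": 0.05},  # some drift is fine
}

def drift_acceptable(domain: str, observed_drift: float) -> bool:
    """Gate an agent's continued autonomy on its domain's risk tier."""
    tier = RISK_TIERS[domain]
    return tier["autonomy"] and observed_drift <= tier["max_drift"]

print(drift_acceptable("travel_policy", 0.03))  # True: within tolerance
print(drift_acceptable("payments", 0.01))       # False: no autonomy allowed
```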

Shweta Vohra: I agree; that's the layering we need in order to overcome the trade-offs we've spoken about. Now let's move on to the responsibility boundary. We have created a lot of layers in organizations, from product teams to platform teams to architects to various governance roles. Security usually stands alone, even though we say it all has to be connected. What do you think about responsibility boundaries around generative AI and agentic systems? Where does responsibility lie? When something goes wrong, when it hallucinates, whom should we blame?

Responsibility Boundaries in Generative and Agentic Systems [41:00]

Jesper Lowgren: It’s a good and interesting question and I have to speculate now because I don’t think the systems I’m part of designing and implementing have been running long enough to really comprehensively answer that. But having said that, if we go back to architects, because architecture is the most important professional on the planet, if we go back to architect, the architecture changes quite a lot and I think that there could be certain responsibilities in this entire new operating model that we are sort of implicitly discussing here.

And let’s start at the top for example, business architect. They will be responsible for, it’s not so much what capabilities of these things any more, they’re going to be responsible for the policy. They’re going to be responsible for the anatomy of the policy and the structure of the policy. Can these different policy types and these policy instruments, can we capture the essential business rules of the organizations using these constructs? And if we can’t do we have to sort of change the structure?

So business architects are going to be intimately involved in this interface between the business and agentic AI, and it's similar with other roles. My hypothesis is that the business architect is going to be the most impacted. The second is the data architect, because ultimately data is everything. It's okay while we have a proof of concept or a pilot, but once we open up the ecosystem and allow other data sources in, if that data is not high quality, that is, if it doesn't fit the ontological and semantic layer, then things are not going to work.

Shweta Vohra: Yes, that’s a space which needs to evolve further with the boundaries that we have created. And of course the converse law, which always come in the picture here. What about the agentic? Now I know you have written a lot about it and even in the whole talk you’ve spoken about it, with agentic, are we solving the real problems? Are there any problems which…? Because one day I and one architect had very good discussion and we were saying that while our managements are saying go solve the problem with the agentic AI, what is the problem? I mean everybody’s putting a solution to it, but what is the problem which agentic solves and what is the problem which is not meant to be solved by agentic AI?

Jesper Lowgren: I think the first problem is mindset. If we are going to play in this AI sandpit, we cannot take the old world with us. It's not designed for autonomy; it just breaks. So the mindset is really, really important; perhaps that is all it is. If we understand that we are turning the world upside down, letting go of logic and replacing it with autonomy, and that in order to control that autonomy we're going to have boundaries, if that sinks in deeply, I think we have solved 75% of our problems. Once you understand that, you get everything; you get the design language. You have to design your goals, your authority, your policy, your scope, your risk, your semantics and your evidence.

It also gives you the governance language: the goals provide governance, authority provides governance, your delegations of authority, for example. In my perhaps very simple mind, just this one concept leads to answers to so many of these really difficult questions. Take brittle systems: if you try to build an AI agent and make it deterministic, it will always be brittle. It's not meant to be deterministic; generative AI is not meant to be an if-then-else construct. It's not meant to do that or be that. It will always play up; it will always be brittle.

Shweta Vohra: Yes. These answers will evolve with time, but you said it right: it's not an if-then-else problem, which is unfortunately how most teams start thinking about it these days. That's one correction we can make.

Jesper Lowgren: One thing to keep in mind is that we are of course at the very beginning, and when we think about agentic systems, we think about frontier models like ChatGPT and Gemini and all those things. They are very, very expensive to use. So I think one of the big changes is going to be not relying only on the large models. They're all generative AI, but rather than using only large language models, we will also use small language models, because of the cost. When you're building these multi-agent systems, like the 33-agent system we designed for that customer, we don't want every one of the 33 agents hitting a frontier model and incurring a lot of cost.

It’s a question of also understanding the system and understand how to design it and what components to use and what kind of LLM they began to use in what circumstance. And I think that if we access to the value, low-cost small language models, I think the use case of AI is going to be bigger again. So again, I can’t think of any areas where I think immediately we should absolutely not go there.

Beyond Frontier Models: Designing Cost-Efficient AI Systems [47:00]

Shweta Vohra: Yes, that’s a good advice to keep, I would tell people who are listening to this. We are relying so much on gen AI and LLMs because Llama came out then so many models came out and then whole world started drifting towards gen AI, is that the only piece of research which we can leverage on?

Jesper Lowgren: I’m the wrong person to ask because I think I have an LLM addiction myself and I think I use it differently. I don’t use the LLM to write the emails and things like that. I write my own emails. I’m happy doing that. I have my own start. I speak with an accent, I write with an accent. I’m quite happy with that. To me, what the LLMs are really useful for, the way that I use them is that they’re really, really good at helping you connect dots. So I create a lot of thought leadership for example, around AI, I have a lot of ideas. I create a model and the best way to validate the model is to take it into the LLM and say, this is what I think, first you validate it, what do you think? And quite often they come back and tell it’s really, really good. Of course we know how they’re right.

But then you can do some deep research and ask: okay, what exists in other parts of the world, how does this relate to other thought leadership I have developed, et cetera. When you start attacking something from a number of different angles, it's almost like you see a new ontology; you have an idea and you see the ontology of your idea expanding. I think that's really cool, and that is when you can really create. I'm a consultant, so this is what I do for a living, and having a tool like that on demand at my disposal increases productivity drastically. But if you're using it simply to answer questions and surrendering your own thinking, your own free will, your own critical thinking, that's clearly not good. That is misusing the LLM. You still have to think.

Final Reflection: Architecture Is More Essential Than Ever [49:05]

Shweta Vohra: Yes. And with that, we are at the end of our conversation; we have taken a long time. One clear takeaway for me from this discussion is that we definitely need evolving systems, emergent behaviors, new guardrails and new architectural practices. And one thing I've started telling various teams even more now is that architecture is needed more than ever before.

Jesper Lowgren: Absolutely. As a matter of fact, I'm an enterprise architect, and I'm telling people that enterprise architects are not optional; no architect is optional. The real difference is on the technology side: we always need solution architects, everyone has them, and data architects, almost everyone has them. It's really the business architects and the enterprise architects that are going to become essential, because the enterprise architect looks after the entire ecosystem we are talking about, and the business architect is going to be this critical new interface between the business and your agentic systems, all of those translation layers. I think it's a great time to be an architect. It's a fantastic occupation going into AI; it's going to be super exciting.

Shweta Vohra: Absolutely, completely agree. In fact, the responsibility increases even more: first, architects need to do the right job, and then bring our engineers, senior and junior, to the level where they start connecting the systems and thinking in systems, asking where my work or my vibe coding can impact the organization, the user, or the function, and where it couples more than it should. All these things are important to keep in mind. With that said, it's a good segue to end. Any last thought, Jesper?

The Closing Principle: Always Define the Boundary [51:05]

Jesper Lowgren: I’m just going to say the boundary. You remember the boundary, that’s all.

Shweta Vohra: Remember the boundary, and before that, create the boundary, so that you stay in control of things and don't let things control you. All right. With that said, thank you for joining us, Jesper. It was a really good discussion in terms of thinking. We may not be able to answer all the problems that exist today, but we have certainly put forward ideas that deserve to be heard and that need further work. So thanks a lot for joining us today.

Jesper Lowgren: Thank you so much for having me. Have a great day.

