Copyright © All Rights Reserved. World of Software.
[Video Podcast] The Craft of Software Architecture in the Age of AI Tools

News Room
Published 11 February 2026, last updated at 6:55 AM

Watch the video:

Transcript

Next Generation Architecture Playbook: Series Introduction [00:35]

Shweta Vohra: Today, we are starting a new podcast series, to be followed by an eMagazine, called the Next Generation Architecture Playbook. The goal of this podcast series is to explore how today's architects are rethinking design, platforms, APIs, and delivery in the age of AI. Its whole purpose is to provide experiential guidance that helps practitioners apply these insights in real-world architecture, design, and leadership contexts. Before I introduce our guest, let me tell you what this episode is about, because that is precisely what we'll be discussing with him.

Podcast Introduction: Software Architecture in the Age of AI Tools [01:20]

The topic we are going to talk about today is the craft of software architecture in the age of AI tools. Is it really still required? Are we moving from software architecture to AI architecture? And as I said, to cover this topic today, I have with me Grady Booch.

Hi, Grady. How are you doing?

Grady Booch: Hello. Very well, thank you.

Shweta Vohra: Thanks for joining us here. Grady is our esteemed guest today, and I'm truly honored to have him on the very first episode of this series. He is an IBM Fellow and Chief Scientist. Grady, you have vast experience, and I would like to hear about it from you. What would you like to tell us?

Grady Booch: Well, I’m having a lot of fun. I think this is the first time this will be public, but I just retired from IBM on the last day of last year. It’s not like I’m really retired, though, because I’m working on a couple of books. We’ll talk about this throughout, but one of those books is associated with the documentary I’m working on, much like Carl Sagan’s Cosmos.

Shweta Vohra: That’s interesting. Tell us a bit more about that.

Grady Booch: Sure. Well, the Cosmos series I mean is not the one by Neil deGrasse Tyson, which, with respect to Neil, I thought had too much Neil and not enough Cosmos. The original Cosmos, done by Carl Sagan, which was, I think, the most viewed documentary ever on PBS until Ken Burns' The Civil War came out, is basically about looking at the world through scientific thinking. And obviously, computing has moved itself into the interstitial spaces of the world. So I decided it was time, and I've actually been working on this for about 10 years now, to do something similar. It's a little bit of science, a little bit of history, but very much philosophical in terms of what it means to be human in the face of computing. We have 12 episodes in mind, and that's what I hope to finish up. I'm also working on another book on computer architecture, which is the source of our topic today, so it's a great time.

Shweta Vohra: It’s timely, and-

Three Golden Ages of Software Engineering and What Feels New and What Feels Familiar in AI Assisted Software Development [03:29]

Grady Booch: Yes, it is.

Shweta Vohra: … who knows, next time we meet, we may be talking about that book. Let's plan for that after this, but let's get started. Not many people would know this, but let me tell you all: Grady is also a co-creator of UML. If you ask me, I started with UML, back when we were first understanding: what is the design? How do we structure it? How do we really represent it? Grady, you have shaped how generations think about software architecture. When you look at today's AI-assisted development landscape, what feels genuinely new? And is there anything that feels like history repeating itself?

Grady Booch: I'm very happy you used the word generations, because to answer your question in full, let me take you back in time and talk about generations. I would assert that we are currently in the third generation, the third golden age, of software engineering.

The first golden age really began in the '40s, '50s, and '60s. This is the time when we were moving from hardware and software being indistinguishable from one another, when programming a computer meant working on the hardware itself, to the point where we had higher-level programming languages. Folks such as Grace Hopper played an important role, all these people who were trying to build the business of software itself. It really came into full flower in the '60s, when IBM decided to decouple software from hardware, and software actually became a thing unto itself.

It is astonishing to realize how young our industry is. Sagan has this story of the cosmic calendar, where he compresses all of history from the Big Bang, on the 1st of January, to where we are today. In that compressed calendar, the history of computing would occupy the last 250 milliseconds, which is literally the blink of an eye. So there is so much happening in our world in that space. And if you dive into those 250 milliseconds, we realize that even the word software was a term coined in the 1950s. The idea of software engineering came about through Margaret Hamilton in the 1960s. This was in my lifetime. So it's really within two or three generations that we got to what we have today. It has brought us to an amazing place, and within that time, what we have done has transformed civilization.

In that first generation, consider the problems we had. There were mostly large machines, and the machines were more expensive than humans. So a lot of what was happening in software engineering was optimized for the economics of the computer, not the human. You would do lots of things offline, away from the computer, because computer time was very, very expensive, and you would do a great deal of work upfront. This is where the waterfall method came into play, because it was cheaper to work that way. It was also very expensive to fix errors, so you wanted to catch them upfront.

It was during this first generation, this first golden age, that the ideas of structured analysis came to be. And this makes sense, because the fundamental problem being faced by software engineering was the problem of scale. We'd never really seen this before: moving from small individual programs that automated single mathematical functions to large systems. How do you approach that? The way we humans attend to that is by abstracting. On those machines, the primary unit of abstraction was the algorithm, so the first golden age was focused upon algorithmic decomposition, and thus you had languages such as FORTRAN and COBOL and C, and all those that surrounded them.

In the '70s and '80s, the world began to change, because now all of a sudden we had the rise of the minicomputer. We had individuals who could program on their own device, and we had the beginnings of distributed systems. I say beginnings because it really started with systems such as SAGE, the Semi-Automatic Ground Environment, which came out of the Cold War; it was a result of our reaction to what was happening with the Soviet Union. So in the '70s or thereabouts, we saw the beginnings of a sea change in the way we dealt with software.

And it wasn't just the complexity on single machines. It was the move toward distributed systems, and of course, this was before the Internet. For those listening, ask yourself: what year did you get your first email address? I bet I beat it. I got my first email address in 1979, when it was on the ARPANET. In fact, at that time, we had a small book that listed the email address of everybody in the world. So I happened to be around at the beginning of this amazing thing. And indeed, much of that distributed-systems work was happening not in industry, not in the commercial sector; a lot of the innovation in software engineering was being driven by military systems. This is why, in my documentary, I have an episode called Woven on the Loom of Sorrow, which makes the assertion that much of what exists in modern computing came about due to World War II and the Cold War.

In this generation, there were strains upon what we knew how to do in software engineering, but there were some really fascinating things happening in research: the ideas from David Parnas, such as information hiding; the ideas from the language Simula, looking at the world through classes as opposed to algorithms; the idea of abstract data types; and the like. These all came together, driven by a DoD effort to create a language called Ada, an attempt at a single language to rule them all, because at the time, there literally was a software crisis. That term was coined at a NATO conference in the late '60s, I think it was, where it was realized that we had this huge demand for software but simply could not develop it fast enough. That was the crisis.

So there was a shift happening, and I will admit, I happened to be at the right place at the right time as that sea change was happening, and I had the opportunity to experiment. I took those ideas of abstract data types and Parnas's work and realized this could lead us to a different way of thinking about programming, which was object-oriented design, not structured design. And thus we were in the middle of the second golden age of software engineering.

I would observe that we are in the third golden age of software engineering, but it didn't start this year, and it didn't start with the rise of ChatGPT. It actually started maybe a decade ago with the rise of platforms, because again, you see the shift going on in software as we move to larger and larger systems, the operative word being system. It wasn't just individual programs. It wasn't just distributed systems themselves, but often distributed systems that interacted with other distributed systems over which you had absolutely no control whatsoever. As I think Leslie Lamport said, what is a distributed system? It's a system in which a computer you didn't even know existed can go down and affect yours. That's a true distributed system.

So now all of a sudden, we were working on systems of literally global scale. We had had algorithmic abstractions, then object-oriented abstractions, and now we're dealing with complete platforms, in which the role of the architect is often to weave these things together. "Oh, I need to do some messaging. Well, gosh, I'll use this particular library. I need this authorization mechanism; I'm not going to write it myself, I'll use this. I need a service from some SaaS, I'll go out there". So now the architect's problem was one of weaving these things together. That's the age we are in.

It just so happens, my assertion would be, that what we've seen in the rise of things such as Claude, which by the way is my go-to AI companion development tool, is part of the atmosphere, if you will, of that third great golden age, which we've been in for a while. So that positions what's happening in AI: it is a part of this new trend we are in. There are parallels, by the way, to other industries, but I'll pause for a moment so we can talk a little before I go there.

Shweta Vohra: Yes. That was all music to my ears, seeing our software industry move through to the third golden age, as you've described. But I want to reflect on one point you specifically mentioned, and I think it's important for the architects and builders out there to understand: in 1979 you had your first email address, while the rest of us got ours somewhere around 2000. That's when people started adopting it. It's similar with AI; as you said, it starts quite early. What we get to see is the result of the hard work of all those years, and then things start becoming reality.

Architecture Versus Design and What Must Remain True Regardless of AI Tools [13:28]

Shweta Vohra: With that said, there is one thing on my mind I want to reflect on: teams often struggle to separate architecture from design, and I know you have stated the difference in many places. In the AI era specifically, when you can generate designs rapidly, what do you believe architecture must still be responsible for, regardless of tools? First, if you can reflect on your understanding, which you have been sharing for years, that architecture and design are different, and then relate it to today's AI tools.

Grady Booch: So to unpack that question, which is a great question, by the way, I think we have to address two things. The first is, what is architecture versus design? And second, what is the nature of creativity in the design and architecture process? I have a very simple observation about this, a very simple definition for me: all architecture is design, but not all design is architecture. Architecture represents the set of significant design decisions that shape the form and function of a system, where significant is measured by cost of change. Let's tear that apart for a moment.

Architecture is a process at a higher level of abstraction than design itself. Indeed, there's even a lower level than design, and that's idioms. Idioms would be things like, "How do I name my variables? Do I use camelCase or do I use something else?" These are common kinds of patterns, but that is a choice, and design is always a choice. Indeed, if you look at the nature of the software architect, the software designer, the programmer, one way to look at it is that we are engineers. And the reason I think we can legitimately call ourselves engineers is because we are the ones who try to build systems of reasonably optimal value that push back against the static and dynamic forces that weigh upon us.

If I'm a structural engineer, if I'm an architect like Frank Gehry, and we'll come back to him in a moment, then I'm trying to build a system: "I want to build this skyscraper, and gosh, I've got these static loads, things have weight. I've got earthquakes. I've got dynamic loads: people move in and out of my buildings, winds happen". These are the kinds of things that an architect in the civil engineering world will attend to.

There are other things they worry about as well. They worry about things like beauty. They worry about cost. They worry about schedule. They worry about the ability to maintain the system. The same is true of software engineers, but we have a different set of physics around us, because software ultimately is an extremely malleable, fluid element with which we build systems from pure thought. Design is still a matter of choice. You've got idioms at the bottom. You've got design at the next level, which would be things like design patterns. I need a separation of concerns between my users and my data, so we invent the idea of CRUD, which says, "We put these things in the back end, we put these things in the front end", and best practices over time tell us we should have this separation of concerns. Those are the kinds of things of design.
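The separation of concerns described here can be sketched in a few lines of Python. This is purely illustrative and not from the conversation; all names (UserStore, greet) are invented for the example. The data lives behind a small back-end interface exposing only CRUD operations, and the front end works solely through that interface.

```python
class UserStore:
    """Back end: owns the data and exposes only CRUD operations."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, name):
        user_id = self._next_id
        self._next_id += 1
        self._rows[user_id] = {"id": user_id, "name": name}
        return user_id

    def read(self, user_id):
        return self._rows.get(user_id)

    def update(self, user_id, name):
        if user_id in self._rows:
            self._rows[user_id]["name"] = name

    def delete(self, user_id):
        self._rows.pop(user_id, None)


def greet(store, user_id):
    """Front end: never touches storage directly, only the CRUD interface."""
    user = store.read(user_id)
    return f"Hello, {user['name']}!" if user else "Who?"
```

The point of the split is that the storage mechanism behind UserStore could change entirely (a dictionary, a file, a database) without the front-end code changing at all.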

And the great thing in our world is that the aperture, the potential for design, is enormous, because we have so many use cases before us. This is where the seasoned architect, software engineer, and developer will shine, because they have within their portfolio, within their experience: "Oh, I see this problem with these forces. I should use this particular design pattern".

Architecture is the next level of abstraction, which simply says, "These are the kinds of decisions you make that are load-bearing", if you will. A decision to say, "I'm going to use React versus some other framework" turns out to be an architectural decision, because the cost of changing it is non-zero. A decision to say, "I'm going to use this particular database" is sometimes an architectural decision and sometimes not, depending upon the degrees of separation I have from it. The decision to say, "I'm going to use statistical methods as opposed to empirical methods": that's a design choice, and it can be architectural as well. So, this is the whole spectrum of things before us. That's what architecture is.

Now, let's talk about creativity, the second piece of it. One of the things that distinguishes us as humans is that we are creative creatures. So what is creativity? In the Go matches in which AlphaGo beat the best human players in the world, there was one Korean player who was literally shaken by what happened, and he said something to the effect of, "I have seen the face of God in this, because AlphaGo made some moves that no one anticipated". Well, what is creativity? Think of the landscape of some three-dimensional space: I'm walking through the countryside, visiting this hill and that valley. In things like Go, history and tradition and experience have led us humans down the same kinds of paths. The thing is, AlphaGo has no such constraints of tradition or history, so it can step back, in effect from another dimension, and say, "What about here?" Because of the sheer volume of positions it can look at, as opposed to humans, it can explore a larger state space.

Is that creativity? Well, no, it's a larger search space explored more quickly, but it's not necessarily creativity, because creativity represents the unexpected bringing together of things, often within a context of needs and wants and desires and loves and the like. Our AIs don't do that. They're great search engines. They're great large language models. They're great unreliable narrators. They're bullshit generators at scale, to be very honest, but we haven't cracked how to give them the creativity we humans have within us.

How AI Changes the Workflow Without Replacing the Architecture [19:55]

So we've unpacked the two main elements. Let's come back to where AI comes into place. I use Claude all the time, and I've been using it for some JavaScript. I've got a Swift project. I've got a PHP project. I've got a C++ project. It's, for me, like having an intern who is enthusiastic, indefatigable, never sleeps, and is also naive and needs constant direction, because they're not creative. I can tell them, "Do X, do this kind of thing", and they'll often do it to perfection without worrying about the time, and they'll get it mostly right, but they don't know that. In fact, there's no "they" there, because they're not persons; I don't want to anthropomorphize them. They are really good at automation, so it's me as a human working in conjunction with them.

It works well for people such as me, though I can't speak for everyone, because I have a lot of experience in building systems. I know I have in my quiver lots of different design patterns, lots of architectural patterns, and I know smells; I know when something smells right or not. The large language model I'm working with off to the side doesn't know those things, nor does it have the context. And so I'm very happy to delegate certain things to it, but then I'll check it. It's a trust-but-verify thing, as Reagan said during the Cold War. So, they are aids to me. And in that sense, to wrap this part of the discussion up, that's why, looking at what's happening with AI in our field, I'm not threatened by it. I'm delighted by it. I'm liberated by it, because it does things for me that I don't have to do myself, but at the same time, it is not going to replace what I'm doing, because it's very limited in what it can do.

Why Software Architecture Is Dynamic and Civil Architecture Analogies Often Mislead [21:51]

Shweta Vohra: These days I always try to avoid saying that software architecture is like civil architecture, because people really confuse the two, and it makes things seem rigid or static. Our field is so dynamic and forever changing; we cannot say that once a structure is built, all decisions are done. It's forever changing, and now even more so. But that's a very good way of looking at creativity, the way you explained it, the way you described the intern, the Claude you use. I think it would be fair to say, in my opinion, and that's what made me write Dear Software and the AI Architect, that we usually think that when tools change, our judgment starts eroding, and then we struggle to articulate what really changed. The change here is that we should be a good master rather than a good slave.

Grady Booch: Respectfully, I prefer not to use those terms, because they're so full of emotion and history. I prefer to think of it as: we are the directors, and we have the actors on the stage whom we are directing. And by the way, there's some sense of giving up control in that: as a director, I'm not going to micromanage. I'd like to let the ones I'm directing have some agency and degrees of freedom, which leads us to discussions about what's happening with AI agents, which we'll get to in a bit as well.

But before you go on, let me also offer some interesting parallels, because you mentioned civil engineering and the like. It's difficult and sometimes dangerous to compare our two fields, again because the physics of our medium is so vastly different, but I want to go back to Frank Gehry, whom I mentioned before. He, among other things, designed the Disney Concert Hall, which has these sweeping curves. He himself was a brilliant architect, but how did he generate those kinds of sweeping structures? The answer is tools such as AutoCAD. The introduction of AutoCAD made it possible for designers such as him to experiment with new shapes and new materials, to explore them by mathematically instrumenting those models, and then to test them before they got built. So his tools didn't eliminate the job of the architect. If anything, they unleashed new creative potential, which is what happened in his case.

That's what's happening in AI right now. Now, people may say, "But wait a minute, architecture has all these nasty things around it". It turns out that even architecture has a curious history. Frank Gehry is relatively new, so let's go back in time to when architecture itself was controversial: the earliest days of software engineering, in particular the first golden age. Everything was a monolith, mostly in assembly language at the time, and things were getting so complex that it was hard for humans to manage them.

And so David Wheeler, Maurice Wilkes, and Stanley Gill, I think it was, sat back in the late '40s and said, "What if we break our software apart, rather than keep these long lists of instructions, and create this thing called the subroutine?" It was a very controversial idea. Today, it's part of the atmosphere we breathe, but their notion was that we need this as a mechanism for humans to be able to think at a higher level of abstraction. It was controversial because, at that time, it required a handful of extra machine instructions to make a call to a subroutine and then come back, and when an operation takes a few milliseconds, that's very, very costly indeed. But our machines grew in power, and we're there today with our distributed systems. So it's things like that: decisions that look architecturally controversial, until ultimately the constraints go away.
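The subroutine idea is easy to see in miniature. The Python sketch below is a hypothetical illustration, not from the talk: the same few steps written inline at a call site versus factored into one named subroutine that any caller can invoke.

```python
def average(values):
    # The repeated sequence (sum the values, count them, divide)
    # lives in one named subroutine instead of being copied at
    # every place it is needed.
    return sum(values) / len(values)

temps = [20.0, 22.0, 21.0]

# Without subroutines, each caller repeats the instructions inline:
inline_result = (temps[0] + temps[1] + temps[2]) / 3

# With the subroutine, callers think at a higher level of abstraction,
# at the cost of a call-and-return jump, which is exactly what made
# the idea controversial on early hardware:
called_result = average(temps)
```

Both paths compute the same value; the subroutine simply trades a few extra instructions for a reusable abstraction.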

Why “AI Architecture” Is Mostly a Language Problem in How We Describe Systems [26:20]

Shweta Vohra: That's where I would now like to move us to "AI architecture", because in between you reflected on creativity and the LLMs, which are doing a mixture of things for us without real creativity being born. But now "AI architecture" is everywhere. From your perspective, is this a genuinely new architectural discipline, or simply software architecture being forced to confront new constraints? What do you think?

Grady Booch: I have no idea what AI architecture is. That’s a phrase I’ve never heard before. What does it mean?

Shweta Vohra: That kind of answers what I was trying to ask: new ways of software architecting with AI. It's a way of saying that AI is changing the rules. What are those design rules? Is there any impact on the architecture side of things? So, is this a new discipline? Or is it the same discipline with new principles, or new rules and guardrails?

Grady Booch: Words are important, which is why I came down on you in this one, because we can combine all sorts of things together, and we end up with this Trumpian kind of language where it’s just noise and doesn’t mean anything. AI architecture is a meaningless phrase to me. I can talk about architecture that is supplemented through AI tools. That’s a meaningful phrase for me.

Architecture is a timeless thing. We have seen architecture in civil engineering from the days when people slapped mud onto huts to the days of Frank Gehry, where we have these sweeping, soaring things. Architecture represents a fundamental way of looking at the world at a high level of abstraction. So it is in software-intensive systems. Oops, I used a phrase: software-intensive. It's not just software, but systems made up of hardware and software and people and societies that we are now building together, and that's what the role of the architect is in that regard.

So, where does AI fit in that? AI is just a tool, and in fact, I would claim that all we are seeing is a new rise in the levels of abstraction. This is the history of software architecture and software engineering: the history of software engineering is one of rising levels of abstraction. We saw this in the first days, when we were basically trying to control our machines. That's what the Difference Engine was all about. That's what the ENIAC was all about: trying to control these electromechanical or mechanical things. At the next level of abstraction, we were trying to take our thoughts, which were at a very high level of abstraction, and turn them into a form that could then control our machines; thus was born assembly language. Assembly languages are a level of abstraction above the machine language itself.

Then we moved to higher-level programming languages, another level of abstraction. I would assert that the rise of AI tools will have as much impact upon software as did the rise of compilers. Both represent rises in the level of abstraction, moving a lot of the menial things I had to do down to the machines themselves. What did compilers do for us? In Grace Hopper's time, at the invention of FORTRAN, this was very controversial, but the idea was to accelerate what the human can do by pushing things off to the machine itself. Back in the days of FORTRAN and COBOL: how do I optimally assign these variables to registers so the program runs faster? I don't want to have to worry about that anymore; I push it to the machine itself.

Similarly, today, I want to make this change to my software. Let’s refactor this. I know the pattern. The machine can do it for me. I’m going to just think about refactoring and let the machine do it. So, what we’ve done is to move to another level of abstraction. The unintended consequence, or maybe it was an intended consequence, was we didn’t actually write less software. We wrote more software because it made it possible for those who were not experienced to do things they could not do before. This is the same thing that happened with the rise of Visual Basic. It enabled people who were not programmers to do some amazing things, and it revolutionized the business, but it didn’t change the nature of what architecture itself was.

What the Industry Is Overhyping and What It Is Underestimating [30:34]

Shweta Vohra: In view of what you said, let’s talk about that. What is underestimated here, and what is overestimated or overhyped?

Grady Booch: Let's talk about overhyped, because that's an easy one. That's an easy target. If you follow me on Twitter, you'll find that I constantly bash folks like Elon, Sam, and others, saying, "Good God, folks. You're going nuts. I know you've got businesses to run, and I know you're losing money left and right, but be real. AGI is not imminent. It's not going to happen". I urge your listeners to go watch the TED Talk I gave over a decade ago, in which I talked about the rise of superintelligence, and my reaction was, "I'm not worried about it". More recently, remember that I use Claude, and I love it, but I blasted Dario, the CEO of Anthropic, because, dude, at Davos he was saying, "Oh yes, it's just around the corner. We're going to write all of our software with these systems". Well, yes, just insofar as compilers write all your software at the assembly language and machine language level, but the level above that? You're not going to do that. It's humans who do that.

There's a great cartoon from xkcd, I think it was, which basically says: what do you call a language that's sufficiently expressive and precise that you can produce executable artifacts from it? We call it a programming language. And so prompting is just another level of abstraction. It's things like, "Go refactor this into a command pattern". That's just another level of abstraction up, where I move from my human understanding of it, and the machine says, "Oh, I know how to do this", and it does it for me. That's great. But the AI is not going to replace me by any means, because I am the human whose creativity is directing it to happen.
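The refactoring mentioned here, "go refactor this into a command pattern", looks roughly like this in Python. The classes and names below are hypothetical, invented for illustration: each request is reified as an object with an execute() method, so an invoker can run and log requests without knowing what they do.

```python
class Command:
    """Base class: every request exposes the same execute() interface."""
    def execute(self):
        raise NotImplementedError


class AddItem(Command):
    def __init__(self, cart, item):
        self.cart, self.item = cart, item

    def execute(self):
        self.cart.append(self.item)


class RemoveItem(Command):
    def __init__(self, cart, item):
        self.cart, self.item = cart, item

    def execute(self):
        self.cart.remove(self.item)


class Invoker:
    """Runs commands and keeps a history, without knowing their details."""
    def __init__(self):
        self.history = []

    def run(self, command):
        command.execute()
        self.history.append(command)


cart = []
invoker = Invoker()
invoker.run(AddItem(cart, "book"))
invoker.run(AddItem(cart, "pen"))
invoker.run(RemoveItem(cart, "pen"))
```

The history list is what makes the pattern useful: because every request is an object, logging, queueing, and undo become straightforward to add.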

I don’t fear the rise of superintelligence. I fear the rise of the billionaires and multimillionaires who are controlling these systems and driving it to increase their power and control. That’s what frightens the hell out of me. And that’s a human problem, not a software problem.

Where Real Productivity Gains Are Coming From and Where Integrity Is at Risk [32:48]

Shweta Vohra: Agreed. A lot of these problems come from humans, so I agree on that completely. However, I do want to reflect on this: going by what you said about how much you can push down to the machine, I think LLMs have done a great deal to that extent. Creativity-wise, they do what we direct them to do, but the pace has increased because of them, and we have made the field a little more accessible to others. We should give due credit to this technology. It's evolving.

Grady Booch: It’s accessible to the general public and the masses, which creates all sorts of ethical problems and legal problems, especially in the world of text-to-video and text images. There’s a lawsuit going on as we speak right now in the EU against Grok because of that very thing, so there are these unintended consequences.

In the world of software, we are less constrained because less was stolen, if you will. Most of the training was done on things like Stack Overflow, which is a little bit dodgy in terms of ethics, and a lot of open-source software, a little bit dodgy as well, but it’s not like it was taken from copyrighted code, so it’s a little bit different in our case here.

Shweta Vohra: That’s true. That’s true. There are a lot of security concerns too, and that’s a complete discussion on its own.

Grady Booch: Oh, gosh, yes.

Architectural Guardrails for Quality and Trust – When Machines Write Code [34:11]

Shweta Vohra: But before we move there, I want to take your view on this. We are promising massive productivity gains. We started with microservices, then we said cloud native, and now we are seeing AI coding assistants, and vibe coding is getting fancier and fancier. Where do you see genuine leverage, and where do you worry we are trading long-term integrity for short-term speed? What’s your view on that?

Grady Booch: So I’ll start with the second part of that first. I am concerned about de-skilling. Software is somewhat of an apprentice business, not unlike the legal business. If I’m going to become a lawyer, in the past I would sit there and get the law under my fingernails quite literally as I pore through books and the like. I would have the chance to work side by side with humans on various cases and get their insights on strategies and the like, so they were part of that. Today, with the rise of things like LexisNexis and the ability to search case law, it becomes a pattern-matching problem for many. I have a number of friends in the legal industry, and I read in this space, and there are real concerns about de-skilling, because there’s no place for these junior folks to get a job and then grow over time.

The same phenomenon, I fear, could happen in software as well, where entry-level jobs are just evaporating because you can hand the work off to an LLM. So where does the next generation get their experience? I don’t know the answer to that one. We shall see.

The other reality is that, again, this goes back to Dario. I quoted some Shakespeare for him: “There are more things in the world of computing, Dario, than are dreamt of in your philosophy”. The original was actually said to Horatio in Shakespeare. My point is that the world of computing is far larger than the world of web-centric systems at global elastic scale. Most of the LLMs have been trained on just that domain, not a small domain by any means, and that’s great, and we’re going to see more of those kinds of things, but it also means we’re going to be driven to a common mediocrity.

Large language models tend to push us toward the same designs, which is great. We need more of those kinds of things, so it’ll address that software crisis, but it doesn’t deal with two things. It doesn’t deal with the unexpected. If I want to try something radically different, large language models aren’t going to help me there, because we’re outside their training data. And second, if I want to build a new meteorological system that uses the latest model of the physics of how clouds and fluids work, they aren’t trained on it, so it’s outside their realm, so those kinds of things won’t help me. We’ll get there over time, but there will always be these kinds of things on the edge, which are economically not viable to train any of the large language models on, so economics will constrain them.

The point of all this is that the world of software is vast, and so these tools are a tool within that, but they’re not the only tool, nor will they ever be the only tool. There will always be others, and it’s back to us as the humans to figure out what tools we need, what the right ones are, and the like. This is a tool. AI is just a tool for me, and I urge developers to learn how to use those tools, but don’t surrender your humanity. Don’t surrender your creativity to the tool, because that is the value you add.

Shweta Vohra: I would also strongly encourage our listeners, especially those who are builders, engineers, and developers, to listen to this advice. Don’t do it mindlessly. Understand, at least to some extent, how it is doing what it does. Because when the day comes, we have to fix it. Who’s going to fix it?

Responsible System Design – Human Machine Boundaries, Principles, and Guardrails [38:09]

And that brings me to the next question: where are the human and machine boundaries? What are the principles? Because if I wear my architect’s hat, I’m always thinking these days that these tools are generating so much, and who’s going to evaluate that? Who’s going to really create those boundaries? Where do we put these boundaries?

Grady Booch: The question you ask is one of the reasons why I’m working on this documentary, because the intersection of computing and what it means to be human is just an astonishingly wonderful place to consider. I have no answers, other than to observe the process that’s happening as we humans come face-to-face with a degree of automation at scale that we’ve never seen before. It’s utterly amazing. So I don’t have any answers, but I do have some principles that guide me, and those principles are that creativity remains in the human camp, not the AI camp. Insofar as I surrender my creativity to them, I have made a fatal mistake in depending too much upon them, because that is not what they do. They don’t have the context. There is no “they” there, for that matter, because they don’t have the same fate.

They don’t have the same kinds of constraints or context that we do, nor will they ever in my lifetime or your children’s lifetime, because the context in which we humans have grown to be what we are is much vaster than anything we see out there. And no matter how much Sam wants to get a trillion dollars of investment, that’s not going to be enough. Large language models are an architectural dead end, and Yann LeCun has said the same kind of thing, so these are funny times in the industry right now.

I think the short answer to the question is, “Don’t neglect your humanity. Don’t neglect your ability to create and be different and think outside the box”. Ultimately, one of the things we know architecturally that large language model-based things cannot do is reason. This is controversial, I know. They have inference engines, and they have deductive engines, but they don’t have abductive reasoning, the ability to build theories. They can summarize, but that’s not the same thing as theory building. And as far as we know, abductive reasoning is currently left up to mammals. I say not just humans because there are other mammals who can do it as well, it appears: whales, apes, and the like. But with our large language models, we don’t know how to do that, which is the exciting thing about this.

For the last six years I worked for IBM, we were working in our AI lab, and I also worked with a set of neuroscientists, because I realized that as a computer scientist I didn’t know anything about the architecture of the brain, so I set off to study that for those six years. And there’s an astonishing beauty in the structure of the brain, the way evolution has shaped what we have here. As for those who are saying, “Oh, AGI is just around the corner”, it tells me they have a breathtakingly naive understanding of what the human brain, the organic brain, does, so I don’t fear it.

Accountability When AI Generated Code Fails in Production [41:34]

Shweta Vohra: My takeaway from this is that architecture is even more important now than ever, because earlier we could leave it to the testing folks, or hope someone would fix it before it went to production, but now so much is out there that we cannot leave it to that. We need humans in the loop, guidelines, principles, and guardrails. So, yes, that resonates, and we need to talk more about that. But I don’t want to leave without understanding one fundamental thing from you: when AI is generating code, and let’s say it fails in production, where should architectural accountability sit?

Grady Booch: The human. An easy answer. It’s the human, not the tool itself. It’s like saying, “Oh, the dog ate my homework. Oh, the AI did-“

Shweta Vohra: But that code is not fully authored by me as an architect. Should this responsibility still land on me?

Grady Booch: That’s right. Tom Watson of IBM said in the ’50s and ’60s that a machine should never be held responsible for a mistake. It’s always the human, because the machine was directed to do it, so the ultimate responsibility comes back to the human who directed it. We know that in social systems, humans love to delegate responsibility. Humans love to have authority with no responsibility, and we are possibly down that path with AIs. “I have authority to do this, but oh, if it’s a mistake, the AI did it”. Oh, BS, the human did it. Take responsibility for it.

Shweta Vohra: Point taken. We will take responsibility. Definitely, because there’s no other way around-

Grady Booch: Right?

Shweta Vohra: … and we need to define those things. But yes, it is a common question that comes up: if I have vibe coded, and even the company providing the vibe-coding tool and its developers don’t know what it is producing, and-

Grady Booch: Yes. And this is where smells come in, because I’m vibe coding, and it’s like, “Why does an architect know how to do this well?” Because he’s a tremendously experienced and talented developer, and he knows the shape of things. He can sense the smell of things. The same thing is true for you and me: “Wow, it generated this, but there’s something wrong here. Let’s poke it this way”. And that’s one of the great things I found with using things like Claude. It’s a great pair programmer for me, but it is an unreliable one, because I know that it makes mistakes. I know that it can’t even know if it makes mistakes, so I must be eternally and consistently vigilant and provide that supervision. And the moment in time I withdraw that supervision, then all of a sudden it’s my fault if this thing goes awry. It’s not the machine’s fault. I am the one who screwed up here.

What Architects Should Lean Into and What They Must Resist [44:25]

Shweta Vohra: Yes. So, to conclude this one, I would say that if AI writes code and AI reviews code, then we are missing the human responsibility boundary there, and maybe that’s where we need to seal it.

Grady Booch: Yes. Let me add one bit of color to that, which is that this all comes back to the human issue of trust. I have on my phone at the moment, I’m counting, one, two, three, four, five different large language models. They all have very different personalities, if I can use that word. And I have different levels of trust in them depending upon the topic on which they’re working, much like the friends I’m surrounded with. I talk to this person, and I expect them to do this, and I expect them to know this, but they have a blind spot about that. The same thing is happening with all these tools around me. Claude does some great things, but it also does some mind-numbingly stupid things as well. So I, as a human, have built, if you will, a theory of mind of them, and that’s where the trust comes into play.

So I urge readers: go get comfortable with these tools, not unlike a carpenter getting used to a new hammer. It’s a little bit different. Its weight feels different, its balance is different. Get used to it, adapt to it, and adapt it to you, but don’t change who you are in the process.

Shweta Vohra: Yes, definitely. Another way of looking at it is that it’s an exciting time, because whenever something new comes along, there are new opportunities and new things to learn.

Grady Booch: Yes. And times of change are also frightening, because when you’ve been in a period of reasonable plateau and equilibrium, and change like this comes along, there’s a lot of uncertainty, there’s a lot of FOMO, and that’s why folks such as Dario and Sam annoy the heck out of me, because they’re saying, “Oh, you got to use this”. Well, no, I don’t got to. My first goal as a software engineer is to build cool stuff that adds value, and only insofar as your tools help me do that will I care about them.

Shweta Vohra: Yes. I’ve seen you doing it firsthand during the metaverse days.

Grady Booch: Yes. Oh, my gosh.

Shweta Vohra: That’s a memory to reflect on some other day. Anyway, what I want to close with is this: for the architects listening today, what opportunities should they actively lean into with AI? And just as importantly, what behaviors or shortcuts should they resist, even if the tools make them easy?

Grady Booch: An ancient philosopher said, “There is no royal road to mathematics”. There are no shortcuts to architecture. It’s a matter of experiencing it, trying it out, feeling the consequences of your architectural decisions. Don’t be an astronaut architect where you say, “Go do this”, and walk away; you need to feel the consequences of your decisions. So my recommendation is: go play around with these tools. They’re fun. Learn how to use them, because they’re going to be part of the nature of software development for the coming future. But at the same time, hone your skills as an architect. Don’t just get bogged down in the nature of your particular domain. Go study code from outside your domain. Go read the original code behind MacPaint. Go look at the Unix kernel, because learning about other approaches to architecture will help you in your particular domain along the way. The best writers I know also read. The best architects in software I know don’t just write code; they also read the code of others, so hone your skills constantly.

Shweta Vohra: I completely admire your point: don’t just be an architect in name only. Own the things you advise-

Grady Booch: Yes.

Shweta Vohra: … own them end-to-end, and then, yes, maybe take on more responsibility.

Closing Reflections on Privilege, Responsibility, and Joy in Computing [48:24]

Shweta Vohra: Having said that, we have covered a lot here. We started with architecture, design, and responsibility. Then we spoke about reality versus hype, and about productivity, speed, and integrity, and human-machine collaboration. Anything you want to reflect on before we bring it to a natural closure here?

Grady Booch: I have led an astonishing life that I never expected. I have met Grace Hopper. I met J. Presper Eckert. I didn’t meet Turing. He was dead before I was born, but I met people who worked with Turing, so my career spans from the very beginnings of our field to where we are today, and I have been able to learn from all those folks. I’ve been able to, I hope, to some degree, advance the field. I’m having a lot of fun. And I think that computing is a domain which is a wonderful one in which to live. For those of you out there watching, it is both a privilege as well as a responsibility. It’s a privilege because what we do as individuals, we’re changing the world, and it’s a responsibility because we are changing the world. What other industry can you speak of that has such a vast and current impact upon the nature of civilization itself?

That’s pretty freaking cool to be in this world, and so I would urge all the readers: don’t lose track of that. Celebrate and have joy in what you’re doing here, because you’re in the midst of a civilization-changing industry. That’s pretty cool.

Shweta Vohra: Absolutely. It gives me equal joy when my mom comes to me and tells me, “I’ve learned something cool. Let me tell you how to use this feature or that feature-“

Grady Booch: Oh, right?

Shweta Vohra: “… and this configuration or that configuration”. It is a really amazing feeling that, yes, we are doing something, but we need to act with a little more responsibility. So all your advice, all your opinions, as well as the guidance you have offered us today, is highly respected, Grady. I’m so thankful that you have joined us here today. Any last thing from you?

Grady Booch: Go have fun. Life’s too short.

Shweta Vohra: Yes, definitely. I’ll go and have ice cream today. The weather is good. I’ll go by your advice.

Thank you so much, Grady.

Mentioned:

  • Dear Software & AI Architect
