Mental Models in Architecture and Societal Views of Technology: A Conversation with Nimisha Asthagiri

News Room | Published 13 October 2025

Transcript

Michael Stiefel: Welcome to the Architects podcast, where we discuss what it means to be an architect and how architects actually do their job. Today’s guest is Nimisha Asthagiri. She’s a global director in data and AI at Thoughtworks. She leads digital transformations for her clients with strategic combinations of design thinking, change management, experimentation, and platform architecture, often applied to data products. Her most recent focus is architecting agentic enterprises while applying systems thinking for responsible AI. Previously she was chief architect at edX, driving intentional architecture for the next generation of large-scale online learning. Nimisha also serves as an advisor and board member to emerging businesses, including serving as a consulting CTO. She began her career in Boston-based technology startups and holds multiple degrees from MIT. A seasoned technologist, Nimisha is passionate about fostering innovation through the amplification of diverse voices and the synergism of collective strength.

How Did You Become An Architect? [01:44]

It’s great to have you here on the podcast. I’d like to start out by asking you: were you trained as an architect? How did you become an architect? It’s not something you decided one morning when you woke up and said, “Today I’m going to be an architect”.

Nimisha Asthagiri: That’s so true, Michael, and thank you so much for having me on this podcast. I’m really humbled and privileged to be having this conversation with you, and I’m looking forward to how it comes out; hopefully there’s something here for the InfoQ audience. For myself, in some ways I want to say, “Yes, when I’m thinking about architecture, I’m thinking about design”. Ever since doing a robotics competition in college, I’ve been thinking about design: designing that robot and the software around it, something resilient, something that, regardless of what the opposing bot turned out to be, would survive and score some points. So in my mind, that started very early on, and I really caught onto it. I just loved it: the creative expression and process, and the collaborative aspect of it.

As for the title itself, that happened organically; you start expanding your scope as you continue to work. When you’re starting out, you’re thinking at a smaller scale, and then the scope keeps growing. I was at edX, which is a nonprofit organization for higher education, with an open-source community also supporting it, and we had a huge monolith. What happened was, me being a principal engineer in the organization and being very frustrated by what we were facing day to day, I thought, “You know what? There has to be something better than handling this big ball of mud”.

So I started reading about domain-driven design. We started a book club, and all of a sudden, as an organization, with the small group of people we were able to pull together into a cohort who were interested in it, we started talking about how we might do this differently. That was the starting point. Eventually I had the title principal architect, and then chief architect for the open-source community, getting us all together and aligned on how we move forward.

Michael Stiefel: So it was sort of a gradual process, which is not at all unusual. One of the things that we’ve spoken about, and I know you’ve spoken about, is systems thinking, which is a very important concept for architects to understand. So how would you describe systems thinking as a concept? How would you explain its value to architects, and why is it important for anybody, not just architects, who wants to think about design and architecture?

The Importance of Systems Thinking [04:39]

Nimisha Asthagiri: For me, I got into it by reading Thinking in Systems by Donella Meadows. That really opened my eyes, and you start viewing the world very differently. It’s no longer one cause, one effect, but multiple things coming together. In software engineering, we’re always thinking about feedback loops, so we’re already on that journey. And whenever we’re doing any sort of product engineering, in Cynefin framework terms we’re in the complex domain: we don’t know how our users are going to respond, and there are a lot of unknown unknowns. The known knowns, by contrast, might be more about how we do CI/CD at this point; there are a lot of strong, sensible defaults in the industry. So that brings you into thinking about larger systems, but primarily it’s about the unintended consequences of our actions.

One way that I might explain it, Michael, and what I like as a starting point for many people when I’m talking about systems thinking, is the iceberg metaphor. In that iceberg metaphor, there are the things that are very visible, which might be point events: production failures, those types of things. But then when you start doing root cause analysis, you start seeing those RCAs, and you start understanding the patterns that keep emerging. Now you’re getting at things below the waterline: you start seeing behavioral patterns. And then you go deeper and start surfacing the actual structural aspects within the organization.

The Importance of Mental Models [06:19]

It might be organizational boundaries; it might be your architecture, which may be more rigid than you thought it would be; it might be the communication patterns. Those structures are the invisible things. But the thing with the highest leverage is the one furthest down in visibility, which is the mental models. For me, that way of thinking about systems, from the iceberg standpoint down to the mental models, is where being an architect is awesome, because you’re starting to uncover what those mental models are. And that’s where the paradigm shifts start happening: when you’re uncovering them, and then maybe reframing them and clarifying them for people. So it’s a great experience, thinking about it that way.

Michael Stiefel: You mentioned this idea of mental models, which I have actually read quite a bit about over many years, because it’s fascinating to me. As you say, it’s the most invisible thing, yet very often it is the most crucial thing. When people come together and you think they’re arguing about a certain thing, very often they’re not really arguing about that thing; they’re really arguing about a set of mental models. The classic case is an airplane crash where someone says the pilot flipped the wrong switch. So the question is, “What did the pilot think? What was the pilot’s mental model at the time that made them flip that switch?” And then you start asking, “What were the gauges saying? Were the gauges giving the right information?” Maybe the pilot actually did the logical thing, but the mental model built from what was being presented to the pilot was wrong. That’s at one end.

At the other end, let me pick a design example, an architect’s example. Very often you have people who, let’s say, were trained on databases, and they view the world solely through the relational model. Relational models are very nice because they’re mathematically provable, but as we found out when we built distributed systems, the world does not correspond to the relational model. So there’s another sort of clash at a much higher level. And it’s interesting to explore those things, because that’s where the unintended consequences come from, because everyone thinks everything’s logical.

Nimisha Asthagiri: Yes, yes. And everyone’s coming in with their own individual perspectives. There’s a dissonance, right? At the surface, it seems like there’s a dissonance between the database developers and the front-end developers, who come from very different worlds. With systems thinking, there are frameworks and tools we can use to draw some of this out and understand those interdependencies. But here’s what I love about it: it isn’t that one is wrong and one is right; it’s actually the multiplicity of it all coming together. So it’s, “Yes, you have that perspective, I have this perspective, and the sum is greater than the individual parts, so let’s figure out the collective understanding”. Those mental models then become an uber mental model. As an individual human, it’s hard to carry the cognitive load to understand it all, and that’s why things like diagramming it out really come into play.

Mental Models and Domain-Driven Design [10:10]

Michael Stiefel: Well, domain-driven design is a great thing for this, because that’s the whole idea: we have the bounded context.

Nimisha Asthagiri: Exactly.

Michael Stiefel: Which sort of corresponds to a mental model of something.

Nimisha Asthagiri: Exactly. And allowing for the polysemes, right? The polysemes across those bounded contexts: the customer in the sales domain versus the marketing domain versus the product domain will be a very different concept of customer, but that doesn’t mean one is right or wrong. Then there’s the question of where the boundaries lie from a Conway’s Law standpoint as well: the boundaries in your code base mirror the communication structures of your organization. That comes into play one level above the mental models, at the level of structures. The structures, those human-made boundaries we have in an organization or in society, in turn have an impact on our mental models, on the way we view the world.
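To make the polyseme idea concrete, here is a minimal sketch in Python; the field names and the translation rule are invented for illustration. The same real-world customer is modeled differently in the sales and marketing bounded contexts, and an explicit translation at the boundary keeps one model from leaking into the other.

```python
from dataclasses import dataclass

# The same real-world "customer", modeled per bounded context.
# Neither model is right or wrong; each serves its own domain.

@dataclass
class SalesCustomer:
    """Sales context: a customer is an account moving through a pipeline."""
    account_id: str
    pipeline_stage: str        # e.g. "lead", "negotiation", "closed-won"
    lifetime_value: float

@dataclass
class MarketingCustomer:
    """Marketing context: a customer is an audience member to reach."""
    contact_id: str
    segment: str               # e.g. "smb", "enterprise"
    opted_in_to_email: bool

def to_marketing(sales: SalesCustomer, email_opt_in: bool) -> MarketingCustomer:
    """Anti-corruption layer: translate explicitly at the boundary
    instead of forcing one shared schema on both contexts."""
    segment = "enterprise" if sales.lifetime_value > 100_000 else "smb"
    return MarketingCustomer(contact_id=sales.account_id,
                             segment=segment,
                             opted_in_to_email=email_opt_in)
```

The design point is that neither model wins: the translation function makes the mapping between the two mental models explicit, rather than collapsing them into one shared uber model.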

Business Requirements and Mental Models [11:08]

Michael Stiefel: And how do you find business requirements as a way to sort through these mental models? Because at the end of the day, we’re producing a product for some goal. And when I say business, I don’t necessarily mean an ROI-producing business; it could be a nonprofit as well. But the business rules or the business needs have to filter into these mental models somehow.

Nimisha Asthagiri: Yes. This is where I see architects, or any technologist leading in an organization from an architectural standpoint, playing a role. For everyone on our teams, understanding the business impact, the product impact, and, at the end of the day, the human impact of the work we do is crucial. That sometimes takes a little unlearning, as well as investing some time in understanding the business. At the end of the day, our technology strategy must align with the business strategy; otherwise, we’re not pulling in the same direction. So yes, that very much comes into play.

Michael Stiefel: Now I want to take a little twist here, because I think this is very important: artificial intelligence, on which I know you have some very important views. Let’s start from a classic situation. I’m sure you know many people in the medical profession were very frustrated with medical record software. There are many reasons why that was the case, but one of the primary reasons is that the people who paid to have the systems built, the insurance companies, were not the actual end users.

In other words (I’m oversimplifying here, just taking one aspect of this problem to get to where we want to go), the business requirements were set by one class of people, with a great effect on the end users, who had very little choice. Where I see this kind of dichotomy becoming much more important is in the area of artificial intelligence, where very often the people who are building the models, who are building these multi-agentic systems, are not the end users, and they don’t have the same values or the same interests as the end users. How do we grapple with that problem?

Nimisha Asthagiri: Yes. And the diversity of our development teams isn’t necessarily where we want it to be. So I think there are a few things here about the techniques we bring into our development processes to address this, because we are already seeing impact on society from the technology we’re building. Some of these are unintended consequences because, as you said, the users we are impacting are not necessarily in the room when these decisions are made, or they’re not represented. The example I gave in the InfoQ talk was social media. There’s a lot of advantage to social media, just connecting people together, whether it’s alumni from your alma mater or your family overseas, but then there’s the addiction, and there are depression issues that also come about. As technologists, when we’re building this, we might see just the positives and not think about the other consequences. There are techniques for this that take even a time-boxed hour.

Let’s get some diverse ideas in the room, make sure all the voices are heard, facilitate that conversation. We draw it out in a causal loop diagram: what are the reinforcing loops, and what are the balancing loops we might want to put in? By balancing loops, we’re talking about the yin to the yang: “Hey, this is going to keep cycling and compounding; do we want to put some sort of measure in place? A reminder that you’ve been on social media for five hours now”, or so forth. Little things can go a long way. That’s a technique we can bring in when we do requirements analysis, design, and definition of done.
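As a toy illustration of a reinforcing loop checked by a balancing loop, the following sketch simulates daily usage hours under a usage-reminder intervention. Every coefficient and threshold is invented purely for illustration; the insight is the loop structure, not the numbers.

```python
# Reinforcing loop: more usage -> better recommendations -> more usage.
# Balancing loop: past a threshold, a usage reminder nudges usage down.

def simulate(days: int, reminder_threshold_hours: float = 5.0) -> list[float]:
    hours = 1.0                              # daily hours on the platform
    history = []
    for _ in range(days):
        reinforcing = 0.15 * hours           # engagement compounds usage
        balancing = 0.0
        if hours > reminder_threshold_hours: # the "little thing": a reminder
            balancing = 0.4 * (hours - reminder_threshold_hours)
        hours = max(0.0, hours + reinforcing - balancing)
        history.append(round(hours, 2))
    return history

print(simulate(30))
# Without the balancing loop, usage compounds ~15% a day without bound;
# with it, usage settles at a stable equilibrium (8.0 hours here).
```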

The Architect’s Role in Reconciling Different Mental Models [16:00]

Michael Stiefel: Well, I think this is especially important for architects, because the architects are the ones who see the system as a whole. One of my definitions of what goes into the bailiwick of the architect is the things you can’t write a use case for. In other words, you can write a use case that says, “The system will be able to process this kind of information”, but you can’t write a use case that says, “This system shall be secure”, “This system shall be scalable”, or “This system shall be safe for teenagers”. Those are emergent properties of the system, and if no one is responsible for them, they won’t get taken care of. So I think what you’re saying very naturally leads to the architects, because they’re the ones in the boiler room, and they’re the ones who can communicate these things. So what is the incentive for the architect to do this, and how does the architect communicate the need to do this to a business person who perhaps has venture capitalists breathing down their neck saying, “Get this out the door”?

Nimisha Asthagiri: Yes, that’s a great question. As a society, whose role is this, and why not let it be architects? I agree with you; otherwise it’s the tragedy of the commons. Is it the CEO who’s on the hook, or the product leader who should be giving us the requirements, or the technologists who are building it, or the consumers who aren’t bringing their voice? An architect is a great archetype for this because we do have that elevator position, and we’ve been exercising the muscles to bridge the gaps between the business mental models, the product mental models, and the technology mental models. So we are equipped for it. Do we have the voice and the strength to make that change? That’s a question for us. Do we have the guts to do it? Do we have the influence to do it?

Well, I do think that another muscle we exercise is influence, right? Many architects aren’t necessarily managing; in my previous role I wasn’t, although I’ve transitioned between manager roles and architect or IC (individual contributor) roles back and forth over my career. As an architect, you might not be managing a team. And even if you are managing a team of architects, you’re not necessarily managing the product development teams who are actually building things; you’re influencing them. So you’re exercising those muscles anyway. As to your question of how you might pitch this: that’s something I’m still learning and don’t have a clear answer for, but I’m looking for others to work with us and maybe make a stronger collective voice.

One thing, though, is differentiation. Because we don’t see this much right now (the prevalence is emerging but not really there), it could actually be a business differentiator to say, “Hey, we do AI like everyone else, but we do AI responsibly”. And it’s not just lip service, right? We’re actually doing it by doing X, Y, Z, and what those X, Y, Z techniques are is something we can develop as a community over time. Causal loops are one way of actually capturing our consequences. But let’s see what else is in our architecture tool belt. For instance, even modularity, going back to Parnas in the 1970s: thinking about the separation of responsibilities between agents, between human agents and machine agents, and amongst machine agents. So there is a lot that can come together in a structured way of thinking through how to design our multi-agent systems.

Michael Stiefel: Certainly, to your point, Apple has had a reputation for security in the phone space. How that holds up under government pressure and AI is a different story, but they certainly earned a reputation, as you suggest, for security with phones. So it’s certainly plausible that a company could develop a reputation for this. Part of the problem I see is that we as a society don’t fully understand what we want out of artificial intelligence, number one. Number two, there’s a debate over large language models: are they artificial intelligence, and what are they? But to me, the most pressing thing is that these risks are not equally distributed among the population.

In other words (I’m just making this up to make a point), 90% of people might be able to spend eight hours a day on social media and be perfectly fine, but there will be that 10% or 15% who are extremely negatively impacted, and we miss them because we don’t often look at sub-cohorts when we do these analyses. I’ll give you another example. People said that children viewing violence on TV don’t necessarily become violent. That may be true for 95% of the population, but we do know there is a small cohort of people on whom it does have an effect. So I think artificial intelligence is amplifying this problem in a way that I don’t think we as a society know how to deal with. And certainly the regulators are way behind the times in terms of understanding the technology.

Nimisha Asthagiri: Technology itself and the information age have been disruptive, and the disparity in society, the economic disparity for instance, gets amplified as a result. So you’re definitely spot on there. And do we as a society, and those of us who are privileged, spend enough effort and time and money on that 10% you were estimating? This is where systems thinking says, “Oh, we’d better”. Because one mental model in systems thinking is our interconnectedness, even within your family. Taking that smaller scope: one member of your family who needs support or help has an impact on the rest of the family, whether in the family reunions you go on or the model you set for your own next generation. There’s a lot of influence there. That mindset is something that perhaps some cultures have more inherently within them.

I know that Native Americans think seven generations ahead: “Hey, if I do this today, how is it going to impact seven generations from now?” We don’t necessarily have to go that far. But yes, I think there is room for a little time-boxed effort, just enough design, like when we create architecture decision records (ADRs) as architects. That’s a practice that has been adopted, or is getting adopted; people are not questioning it as much now, and there’s just enough of it. It depends on the decision you’re about to make: some decisions have a much wider impact on your organization and might take a little longer to review and assess; others might need just a multi-hour session because they’re smaller in scale. But you’ve written it down, and when you wrote it down, you thought about different options. And with architecture, everything’s a trade-off, so there’s never a perfect decision, but you still took the step of thinking it through. Similarly, some sort of time-boxed effort here could do us good. Yes.

Multi-Agent Systems [24:14]

Michael Stiefel: So we’ve gone down sort of one path, but I want to get back to another question that we raised early on with multi-agent systems, where this really comes into play.

Nimisha Asthagiri: Yes.

Scaling Multi-Agent Systems [24:26]

Michael Stiefel: How do you scale multi-agent systems? This is interesting to me because on one hand they have to be independent in order to be effective. On the other hand, they have to have state. What does it mean to scale a multi-agent system?

Nimisha Asthagiri: Yes, that’s a good question. There are so many different aspects to it, right? Of course, there’s the underlying technology: the GPU nodes and making sure we have those at scale, and the alternatives at the model level of large models versus smaller, distilled student models created from teacher models. At that level we’re scaling by thinking about the hardware, but also about the climate, the environmental impact of a lot of this. The smaller yet more focused model for your purpose might do better along all of those dimensions. Then there’s the aspect of the architecture and the structure. Going one level above, there is the structure of the multiple agents. Is it an orchestrator pattern? Is it a peer-to-peer pattern? How collaborative are they? Are they competing with each other?

And then you have a decider that says, “Okay, I’ll take the best decision that emerges”. So that is also an aspect of thinking about the structure, just as you might think about scaling an organization of humans, where a lot of the same considerations come into play. You think about the organizational structure: who the teams are, how small the teams should be (two-pizza-sized or whatever), and whether I have a competing R&D-type team that’s trying to find something better than the status quo. There are a lot of different aspects. But as you scale an organization, a human organization, communication also becomes a bottleneck. And there, some of the communication is very top-down, and you think about that structure versus how you might have a more generative organization allowing for bounded autonomy.
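As a hedged sketch of the competing-agents-with-a-decider structure just described, here is a minimal orchestrator in Python. The agents are stand-in functions with random scores; in a real system they would be model-backed workers, and the decider could be another model or an eval harness. All names are invented.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    agent_name: str
    answer: str
    confidence: float          # stand-in for a real evaluation score

Agent = Callable[[str], Proposal]

def make_agent(name: str) -> Agent:
    def run(task: str) -> Proposal:
        # Stand-in for real inference: a canned answer with a random score.
        return Proposal(name, f"{name}'s plan for: {task}", random.random())
    return run

def orchestrate(task: str, agents: list[Agent]) -> Proposal:
    proposals = [agent(task) for agent in agents]      # fan out: agents compete
    return max(proposals, key=lambda p: p.confidence)  # decider picks the best

agents = [make_agent(n) for n in ("planner-a", "planner-b", "planner-c")]
best = orchestrate("draft a rollout plan", agents)
print(best.agent_name, "->", best.answer)
```

Swapping the central `orchestrate` call for direct agent-to-agent messaging would give the peer-to-peer variant; the structural choice mirrors the organizational-design question raised above.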

Bounded Autonomy [26:34]

Now, I have a bias towards figuring out how to do bounded autonomy at scale rather than top-down control, so my mind tends to go in that direction. But that’s not always the right answer, especially in very highly regulated environments; it’s an “it depends” answer there too. If you’re thinking about bounded autonomy, then yes, exactly as you were hinting, Michael, you think about the communications between the agents, and there I think we need very good patterns and standards in the industry. Of course, there’s now the Model Context Protocol (MCP) from Anthropic, and Google has their A2A, but those are at the protocol level. From an architectural lens, we’ll want an understanding of what the single responsibility of each agent is. And then evals: governing agents to make sure an agent doesn’t go outside its constraints and its bounds; otherwise it’s going to go rogue.

Even as humans we’re like, “Oh, I can do that. I can do that”. An agent that says, “Oh yes, I can do that” and goes a little outside its scope can then have unintended consequences as well. So that kind of definition and declaration of the agent’s boundaries becomes very important: a self-declaration of “this is what I can do” that gets communicated to the agent landscape and environment, so others know how to leverage it. That becomes something that could be a lot more self-organizing, if you can imagine what I’m thinking there.
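One way to picture this self-declaration of boundaries is a capability card plus a registry, sketched below. The schema is invented for illustration (real protocols such as A2A define their own card formats); the point is that an agent declares its single responsibility and hard bounds, and anything outside them escalates rather than executes.

```python
from dataclasses import dataclass

@dataclass
class AgentCard:
    name: str
    responsibility: str                  # the single responsibility it claims
    allowed_actions: set[str]            # hard bounds: anything else is refused
    escalates_to: str = "human-review"   # where out-of-bounds requests go

registry: dict[str, AgentCard] = {}      # shared landscape other agents can query

def register(card: AgentCard) -> None:
    registry[card.name] = card

def dispatch(agent_name: str, action: str) -> str:
    card = registry[agent_name]
    if action not in card.allowed_actions:
        return (f"refused: '{action}' is outside {card.name}'s declared bounds; "
                f"escalating to {card.escalates_to}")
    return f"{card.name} executing '{action}'"

register(AgentCard("refund-bot", "issue refunds under $100",
                   allowed_actions={"lookup_order", "issue_refund_under_100"}))
print(dispatch("refund-bot", "issue_refund_under_100"))  # within bounds
print(dispatch("refund-bot", "delete_account"))          # out of bounds -> escalate
```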

The Role of Humans in Multi-Agent Systems [28:19]

Michael Stiefel: So where do humans fit into this multi-agentic system? As you no doubt know, but for the benefit of our listeners I’ll mention them, there are three patterns: humans in the loop, humans on the loop, and humans out of the loop. Humans in the loop means the human is very intimately involved. Humans on the loop means the human just makes the go/no-go decision. Humans out of the loop is where the AI is doing everything on its own. How do you see this bounded autonomy relating to these three patterns?
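To make the three patterns concrete, here is a minimal sketch expressing them as an approval gate in front of an agent’s plan. The mode names mirror the taxonomy above; the plan, steps, and approval hook are invented for illustration.

```python
from enum import Enum, auto

class Oversight(Enum):
    IN_THE_LOOP = auto()   # a human approves every individual step
    ON_THE_LOOP = auto()   # a human makes one go/no-go call for the whole plan
    OUT_OF_LOOP = auto()   # the agent acts alone and reports back afterwards

def run_plan(steps: list[str], mode: Oversight, approve=lambda what: True) -> list[str]:
    log: list[str] = []
    if mode is Oversight.ON_THE_LOOP and not approve(f"plan: {steps}"):
        return ["aborted at go/no-go"]
    for step in steps:
        if mode is Oversight.IN_THE_LOOP and not approve(step):
            log.append(f"blocked by human: {step}")
            continue
        log.append(f"executed: {step}")
    if mode is Oversight.OUT_OF_LOOP:
        log.append("report sent to humans after the fact")
    return log

print(run_plan(["survey area", "collect sample"], Oversight.OUT_OF_LOOP))
```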

Nimisha Asthagiri: Yes. By the way, when we’re talking about multi-agent systems, I do want to preface this: I think there’s still a lot that needs to emerge in the industry to really do this responsibly. There are still a lot of gotchas around response times, as well as ethical concerns and things like that, that still need to be hashed out: making sure the responses are what we think they are, and that hallucinations are understood. Not that we’re going to eliminate them with non-deterministic agents; who knows what a three-year-old kid is going to say, let alone a three-day-old.

Michael Stiefel: But three-day-old kids are not making societal decisions.

Nimisha Asthagiri: Very true. Exactly. But they do impact the humans around them. Yes.

Michael Stiefel: Yes.

Nimisha Asthagiri: So coming back to your question about where humans fit in, I think this too will be very dependent on the purpose of the agent. It helps to have an understanding of these different topologies, and that was a good mental model you gave, those three ways of categorizing it: human in the loop, human on the loop, and human out of the loop. Take a surgical robot, for instance. In that case you really want to make sure it’s not just operating autonomously. You may have trained it, and robots may be able to operate a little more precisely than humans, who can’t always keep as steady a hand (though of course our human surgeons have trained a lot for that), but you still want the human in the loop to make the judgment calls.

So where we need human judgment to come into play is a huge factor. Then there are cases where you might do things more autonomously but still have reporting back to humans, just as we don’t necessarily get in the way of a functioning sub-organization. Say there are robots or drones that we send to a location where we don’t want humans present because it isn’t safe for them; that’s a good use of robotic agents. In that case, we would still want them to report back to the humans. So we’re out of the loop on the ground, but we’re still in the loop when it comes to decisions. When they’re on the ground, though, they need to make their own decisions as well, such as where to go. So it depends on the decision, and it depends on the purpose.

Michael Stiefel: I mean, if you’re sending a device to Mars, it clearly has to be on its own. But it is interesting that you mention drones, and I don’t want to get too far afield. One of the places where I see this becoming a big problem is the military sphere, because there are countries that may send out drones with humans out of the loop, and humans are not going to be fast enough to respond to those drones. So there’s an escalation here: if one side starts using technology without humans involved, there’s going to be an incentive for everyone to use technology without humans involved. I really don’t see a way out of that dilemma, but it’s something for people to think about.

Nimisha Asthagiri: So you’re saying humans become a bottleneck, essentially, right?

Michael Stiefel: Yes.

Nimisha Asthagiri: And there are societal problems we might need to solve where we don’t want humans to be a bottleneck. So yes, I think that’s a great point. When we’re talking about AI agents, my mental model is really things that are autonomous, able to make independent decisions, and able to learn. Now, it’s a spectrum: some are a lot more autonomous than others, and some have a lot more learning capability than others.

Similarly, if we do have such societal problems, we will probably want to invest more in those types of machine agents where humans are out of the loop and we’re asking those robots or drones to make decisions on our behalf. Just as parts of our distributed systems are core versus supporting versus generic, in domain-driven design terms, here too there’s a societal portfolio strategy: where do we put more energy and money, versus elsewhere?

Can We Have a Healthy Relationship with Artificial Intelligence? [34:00]

Michael Stiefel: So, to wrap up before we get to the architect’s questionnaire, I want to engage in a thought experiment. Let’s imagine a parallel universe where we have a healthy relationship with artificial intelligence, whatever that might be. How do you envision that parallel universe? I’m sure you’ve thought about this a lot.

Nimisha Asthagiri: Yes, and I’m curious what your answer is going to be as well, Michael. But I do think that a healthy relationship is one where humanity is not worried, where humanity has found a relationship with artificial intelligence in which artificial intelligence is supporting humanity, and humanity gets a chance to actually be human and understand what that means for ourselves too. It’s not necessarily the race to the finish (or it can be, if that’s really what you want); it’s really the experience, the human experience. That could be everything from really appreciating the feel and the sound of a musical instrument, and not worrying that, “Oh, well, this AI could play it much better than me”. It’s more, “Oh, I love being able to do this, and the struggle and challenge it took for me to be able to produce this beautiful sound”.

“And yes, as a human, I’m going to make mistakes, but so be it. That’s who I am, and that’s what I appreciate”. That, to me, is the parallel universe, one where we don’t have to worry about impostor syndrome.

Michael Stiefel: Yes.

Nimisha Asthagiri: Or these types of doubts and existential crises. It’s more, “Oh yes, you’re the robot, and you’re doing this for me. Great. I’ve got this”. And the other thing, and I know we read about this and there’s hype around it, so I don’t know, but in that parallel universe, since you’re allowing me to think that way, I’ll say that yes, we’re able as humans to leverage these autonomous machine agents to solve some of the societal problems that, with our human cognitive limitations, we’re not able to solve today. So really leveraging it for its strengths while still retaining our humanity. Yes. What are your thoughts?

Michael Stiefel: I’m coming at this from a very different point of view. I’ve lived my life without any of these agents; I don’t depend on them. I spend almost no time on social media. I live in a world where I like to read books, play musical instruments, and learn foreign languages. During the pandemic, for example, the TV in my house basically never went on.

Nimisha Asthagiri: Nice.

Michael Stiefel: There were a couple of football matches I watched on TV, but the older I get… I mean, I’m certainly not becoming a technophobe or removing technology from my life, because there are certain things I certainly use technology for, but I don’t have this need for an agent to talk to. So in my world, sometimes I wonder if we’d be better off if the Internet had never been invented. Sometimes I feel about the Internet the way the atomic physicists, the nuclear physicists, felt about the invention of the atomic bomb. You can’t uninvent it, and someone would’ve come up with it anyway, sooner or later, because the logic of these things is very compelling: if you have networked systems and independent agents, the constraints push you there. I don’t know if you were around for the days when people tried to push object models across the network, making remote procedure calls.

Nimisha Asthagiri: Yes, with CORBA and…

Michael Stiefel: Oh yes, CORBA and DCOM and RMI. That was a stupid idea, for lots of reasons. In some sense, though, distributed systems of some sort and an internet of some sort were coming regardless; the technical and scientific constraints almost forced you in that direction, so it was going to get invented one way or another. But to go to your point about societal constraints, consider one of the reasons we have problems with security on the Internet, or more precisely the World Wide Web, because we tend to use the words “World Wide Web” and “Internet” interchangeably, but they’re not the same thing.

Nimisha Asthagiri: Yes.

The Consequences of Societal Diversion of Technology [39:06]

Michael Stiefel: The World Wide Web was designed for scientists to exchange static information. It was never designed to be secure, because it didn’t have to be. It wasn’t designed to be transactional, because it didn’t have to be. So all these problems came from society taking a technology that was not designed for that purpose.

Nimisha Asthagiri: Yes.

Michael Stiefel: Using it for another purpose, and then trying to impose constraints on it that it was not designed to have in the first place.

Nimisha Asthagiri: Yes.

Michael Stiefel: So this is why I find this conversation very, very difficult to have as a society, because I see how it has gone in the past. It’s an evolutionary constraint: we’re very short-term minded. We’ll take what’s available and use it without thinking. So I’m kind of pessimistic, I know.

Nimisha Asthagiri: But you’re right. I mean, Tim Berners-Lee actually had a decentralized web in his mental model when he put it out, and it became something very different, with huge centralized systems and things like that. So we took it elsewhere, and that was an unintended consequence of his design.

Michael Stiefel: Well, I don’t feel it’s an unintended consequence because he didn’t intend for it to be used the way it was used.

Nimisha Asthagiri: Yes, yes, exactly. I mean, he put it out, but there’s an entropic force in our universe, and as architects we’re trying to contain it or direct it in a certain way. And you’re right, I think some of it is band-aids and patching. If we had the opportunity to throw away this prototype of the last 50 or 60 years and recreate it, we might recreate it very differently. But it depends on who you give that power to. If you give it to the businesses, it’s still going to be somewhat out of our hands, with all of those different mental models at play.

Michael Stiefel: And things are moving at a speed that we cannot come to grips with, which is another problem. I guess I’m ending on a slightly pessimistic note, but I’d much rather be realistic and try to make things work for the better than settle for a mindless “Oh, it’ll all work out in the end anyway”.

Nimisha Asthagiri: Yes, there’s a little bit of inevitability to what is happening in society.

Michael Stiefel: Yes.

Nimisha Asthagiri: And I think I share that with you. At the same time, I am optimistic in the sense that, “Hey, if we do put our minds together, we can do things”. Maybe some sort of course correction; maybe not completely stopping some things. But yes.

The Architect’s Questionnaire [41:48]

Michael Stiefel: So now I’d like to come to the part of the podcast where I ask the architect’s questionnaire, to make this a little more human, a little more personal, and see how people feel about architecture. What is your favorite part of being an architect?

Nimisha Asthagiri: It’s sort of like the MapReduce way of thinking about it. I love the opportunity to facilitate a conversation and collect the diverse opinions, and I’m not talking just about diverse opinions across functions, like product and customer experience and UI developers and backend developers and other architects, but the diverse opinions of the humans who happen to be in the room. You’re mapping all of that, collecting it, and then you’re reducing it, synthesizing it. So for me it’s facilitate and synthesize. I love doing that: taking complexity, or something that seems complex, and putting something together in which everyone feels they see their part, and that can result in some alignment.

Michael Stiefel: What is your least favorite part of being an architect?

Nimisha Asthagiri: Yes, that’s a tough question. I think it might depend on where you are as an architect. For me, sometimes it’s feeling disempowered, because you are leading change through influence. The notions of architecture and architect may not have a positive connotation in some places: people might view it as a bottleneck, something that slows things down, or they might say, “Hey, it’s an armchair architect. What do you know?” That type of thing. So my least favorite part, I guess, is that the industry’s appreciation of its value is not omnipresent at the moment. But I do hope that’s changing. I find we’ve gone through phases; it swings back and forth. That’s another challenge: finding a good balance that lets you do both.

Michael Stiefel: Is there anything creative, spiritual, or emotional about architecture or being an architect?

Nimisha Asthagiri: I think for me, it’s the love of design, and the simplicity that can come out of creating something and learning it deeply enough that you can simplify it. It’s like Pablo Picasso’s images of bulls: it took multiple iterations to come up with something that’s just a few simple strokes. The fur is gone, replaced by a single stroke for the back of the bull, and there’s just a hint of the horns. That takes you to a place where you feel so connected to what you’re trying to model; you get a deep, almost spiritual connection to it in some ways. And the creative and emotional aspect is the simplification, the beauty in simplicity. That’s something I always appreciate when it happens. It doesn’t always happen, but when it does, you feel satisfied.

Michael Stiefel: Well, not every artwork of Picasso is a masterpiece.

Nimisha Asthagiri: Yes, yes.

Michael Stiefel: What turns you off about architecture or being an architect?

Nimisha Asthagiri: There is a lot of context switching, at least in some of the roles I’ve played. You want to go very, very deep, because you just love it and there’s a flow, but then you’re also getting pulled in multiple directions. So it’s finding the right balance of how deep you want to go. There’s a lot of context switching, and I don’t know how well the human mind is made for context switching. Some people do it very well, but for me it takes mental energy.

Michael Stiefel: Do you have any favorite technologies?

Nimisha Asthagiri: I just love diagramming, because I’m more of a visual person. So for me, it’s anything that can do that. Excalidraw, for instance, is a tool I got attracted to more recently; it’s very simple, and I like simple technologies. I don’t like the ones with a lot of different menu items where everything just explodes. So Excalidraw is one; draw.io, any of those, works very well. And these days, because a lot of work is remote and I like to co-create with clients or with my colleagues, there are things like Miro and Mural, which are online whiteboarding tools. While those aren’t necessarily architects’ tools, they’re still very good collaborative tools, and we’re able to share things visually.

Michael Stiefel: What about architecture do you love?

Nimisha Asthagiri: I think it’s that everything has its trade-offs, and being able to appreciate that. Take Ayn Rand’s The Fountainhead; there the perspective was physical architecture, buildings. The author makes a point through her characters about the simplicity of buildings, not being attracted to the gothic way of thinking about buildings, being very utilitarian.

If you appreciate the artist’s and the creator’s way of thinking there, that is great. But I also come from an Indian heritage, and we like flashy, colorful things, and I appreciate understanding that culture and where that richness comes from. So it’s not one size fits all; it’s appreciating each from its own perspective. That’s what I would say about architecture. And while I talked about buildings and clothing, I think the same applies in our code and in our technologies. Of course I have biases; like I said, I don’t like the more complex UIs that emerge after years and years of tacking on more. But human-centric design in our technologies is something I continue to value, and I don’t think we’ve gotten the techniques completely there yet.

Michael Stiefel: What about architecture do you hate?

Nimisha Asthagiri: That’s what I would say, actually: the architectures that are patching and patching and band-aids. Even for us, when we’re doing application modernization and thinking about breaking up monoliths, just taking what we built 20 years ago and changing it to a different programming language or the latest technology, while still inheriting all that legacy; in my mind it’s like, “No, come on. Let’s think about what’s actually core, what’s valuable. Let’s simplify as we go”. So the default of thinking only about the technology and not the purpose would be it.

Michael Stiefel: What profession, other than being an architect, would you like to attempt?

Nimisha Asthagiri: There have been times when I’ve thought about being an educator, maybe going back to school and getting a PhD, really going deep into a subject matter, and living that life. I’m married to a professor, an academic, so I see his life and think, “Oh, yes”. But you know how it is; the grass is greener on the other side. Regardless, I do teach on the side. I teach a philosophy class on Sundays to high schoolers, and I’ve always done that in some way, shape, or form, just not as a profession. And when you are teaching, you learn a lot from it as well. It’s that symbiotic relationship you create with your students that is very nice.

Michael Stiefel: Do you ever see yourself not being an architect anymore?

Nimisha Asthagiri: No, I don’t. And I’m using the word “architect” very liberally, right? When you started this podcast and asked me how I became an architect, I went back to before I even had that title. And even before that, I could look at childhood: in some ways I was architecting my little brother.

Michael Stiefel: Yes.

Nimisha Asthagiri: Somehow he might learn mathematics or something. So when I use that word very liberally, I would say, “No. I think that’s part of humanity”. Is us architecting our every single moment and our life and who we are around us. So I hope that’s okay to use that word liberally that way.

Michael Stiefel: Sure, sure, sure. When a project is done, what do you like to hear from the clients or your team?

Nimisha Asthagiri: I love to hear that they learned something. Even better is if there was a mental model shift: “I never thought about it that way, and now I do”, or, “Oh, wow. Now I see the world differently, for the better”. When we do things like that, we feel like we’re leaving the world a better place than we found it. As an architect, or any technologist, or as a human, you feel like… I don’t think of it as a legacy so much as, “Yay. Okay, good. Something came out of this”.

Michael Stiefel: This was fascinating. I had a great time talking with you.

Nimisha Asthagiri: Likewise.

Michael Stiefel: Hopefully our listeners will find this very interesting because once again, this is another perspective on this profession that we call being an architect. Thank you very, very much.

Nimisha Asthagiri: Thank you so much, Michael. Really appreciate it.
