Key Takeaways
- Creating an effective architecture for an MVP takes time that teams seldom have; AI helps buy them time to deliver better results.
- AI will enhance rather than replace software architects, better informing their decisions and automating mundane tasks so that architects are free to discover more creative solutions to architectural challenges.
- AI cannot create an architecture because it cannot make decisions, but it can suggest alternatives when provided with sufficient context in the query prompt.
- AI can help teams who are less experienced with software architecture learn about architecting by exposing them to possible alternatives they may not have considered.
- While AI makes some architectural tasks easier, it amplifies the importance of architects who can make sound, empirically-based decisions about trade-offs.
Architecting an MVP is always conducted under extreme time constraints. AI can partially relieve these constraints by suggesting alternatives based on others’ experiences. While it can’t make decisions, it can help teams be better informed when making them. Teams still need to validate their decisions experimentally, but AI can help here, too, by generating some of the supporting code needed to run the experiments.
Potential Benefits of AI for Software Architecture
In an earlier article, we defined what “software architecture” means to us:
“Software architecture is about capturing decisions, not describing structure.[…] The key activity of architecting is forming hypotheses about how the system will meet quality attribute goals, and then using empiricism to test whether the system meets them, and then repeating this loop until the system meets its quality goals”.
And the most important architectural decisions are about trade-offs between Quality Attribute Requirements (QARs).
How do software architects make these trade-offs, and how might AI help? The following table summarizes the ways that AI may help teams with tasks that we feel are important to architecting software. The shaded portion of the “Can AI Help” column indicates the relative degree to which we think AI can help with each task. While AI cannot completely perform any of the tasks, it can support some of them more than others.
Each of these is discussed in detail in the sections below.
Understand the QARs that the solution must satisfy
A key aspect of architecting an MVP is forming and testing hypotheses about how the system will meet its QARs. Understanding and prioritizing these QARs is not an easy task, especially for teams without a lot of architecture experience.
AI can help when teams provide context by describing the QARs that the system must satisfy in a prompt and asking the large language model (LLM) to suggest related requirements. The LLM may suggest additional QARs that the team has overlooked. For example, if performance, security, and usability are the top three QARs a team is considering, an LLM may suggest looking at scalability and resilience as well. This can be especially helpful for people who are new to software architecture.
For example, meeting scalability requirements is an important but often overlooked challenge when implementing an MVP. There really are scalability requirements; they are just hard to see. Every system has a business case, and the business case has implicit scalability requirements. Using an LLM may help a team to consider scalability requirements that they may have overlooked, especially when building an MVP.
Drive architectural decisions
An early architectural challenge is quickly narrowing down the search for suitable frameworks and technologies. Although an LLM cannot make a decision about which technologies to use, it can narrow the search for alternatives by identifying potential options and collecting reported positives and negatives of those potential options.
Understanding that AI cannot make decisions about trade-offs is important: it can suggest alternatives that help inform decisions, but balancing trade-offs is up to the development team. To get the most from an LLM, the team must provide specific details about the context when asking it questions.
For example, if performance is one of the QARs the system must meet, asking for recommendations to “make the system fast” in a prompt isn’t likely to result in useful recommendations. Being specific about requirements for response time/latency, turnaround time, or throughput will elicit a much better set of recommendations if key system characteristics are also provided to the LLM. It also may force the team to think more deeply about the domain to be able to provide a concise explanation of the requirements.
In a way, the act of prompt engineering and providing important context can be as useful as the results provided by the LLM. Getting good results requires specific requirements, clear communication, and an understanding of the desired outcome. This is true whether or not you are using an LLM. This is a case of “going slower to go faster” because taking time to think more about the problem domain will allow you to benefit from the fast results provided by an LLM.
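As an illustration, the contrast between a vague request and a context-rich prompt can be sketched in code. Everything here, from the QAR fields to the wording of the template, is an illustrative assumption rather than a prescribed format:

```python
# A minimal sketch of turning a vague performance QAR ("make the system
# fast") into a context-rich LLM prompt. The QAR fields, numbers, and
# wording below are illustrative assumptions, not a standard template.

def build_performance_prompt(qar: dict) -> str:
    """Assemble an LLM prompt that states measurable targets and context."""
    return (
        f"Our system is {qar['system_description']}.\n"
        f"Peak load: {qar['peak_load']}.\n"
        f"Target p95 response time: {qar['p95_response_ms']} ms.\n"
        f"Target throughput: {qar['throughput_rps']} requests/second.\n"
        "Suggest architectural approaches that could meet these targets, "
        "and list the trade-offs of each."
    )

prompt = build_performance_prompt({
    "system_description": "an order-processing API for a retail MVP",
    "peak_load": "2,000 concurrent users during sales events",
    "p95_response_ms": 300,
    "throughput_rps": 500,
})
print(prompt)
```

Assembling the prompt forces the team to quantify "fast" before the LLM ever sees the question, which is where much of the value lies.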
Use experiments to obtain empirical results that support their decisions
AI can be very powerful by helping a developer create code to solve a particular problem, but developers have to validate the AI-generated results, as the following quote illustrates:
“While AI can generate code, human expertise is still needed to ensure it is correct, efficient and maintainable”.
Joanna Parke, Chief Talent and Operations Officer, Thoughtworks
Sometimes validating the AI’s results may require more skill than creating the solution from scratch would, just as sometimes happens when reading someone else’s code and realizing that it’s better than what you would have written on your own. This can be an effective way to improve developers’ skills, provided that the code is good. AI can also help you find and fix bugs in your code that you might otherwise miss.
Beyond simple code inspection, experimentation provides a means of validating the results produced by AI. In fact, experimentation is the only real way to validate it, as some researchers have discovered.
In prior articles, we have described the need to use experimentation to test and validate architecture. The fact that parts of an architecture have been generated by AI does not change this fundamental truth: architectures can only be evaluated empirically; they can’t be generated from a set of principles.
In addition, an LLM may help a team rapidly create simple but sufficient user interfaces (UIs) to test business and architectural hypotheses. Early MVPs don’t need highly polished UIs; in fact, investing too soon in a sophisticated UI can be wasteful. A better strategy could be to use an LLM to quickly generate UI code that is good enough to explore critical issues as early as possible.
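A minimal sketch of such a throwaway UI, here rendered with nothing but the standard library; the hypothesis being tested, the page content, and the endpoint name are all illustrative assumptions:

```python
# A sketch of the kind of disposable UI an LLM might generate to test a
# business hypothesis ("will users ask to be notified about restocks?").
# Product, copy, and the /notify-me endpoint are illustrative assumptions.

def render_hypothesis_page(product: str) -> str:
    """Return a minimal HTML page with one call-to-action to measure."""
    return f"""<!DOCTYPE html>
<html><body>
  <h1>{product} is out of stock</h1>
  <!-- Submissions of this form are the metric for the hypothesis test -->
  <form action="/notify-me" method="post">
    <input type="email" name="email" required>
    <button type="submit">Notify me when available</button>
  </form>
</body></html>"""

page = render_hypothesis_page("Trail Runner 2 shoes")
```

The point is not the code itself but the turnaround time: a page like this can be generated, deployed, and discarded within a single experiment cycle.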
Finally, AI can also help with some of the more mundane but important coding tasks, such as creating a set of unit tests. Many teams struggle to find time to create adequate unit test coverage, and the relative simplicity of unit testing makes it a good candidate for AI assistance.
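As a sketch of what this looks like in practice, the tests below are the kind an LLM can draft in seconds; the team’s job is to review them, for instance confirming that the boundary case matches the real business rule. The discount function and its rules are illustrative assumptions:

```python
# A sketch of AI-drafted unit tests. The volume_discount function and
# its 10%-at-100-units rule are illustrative assumptions.

def volume_discount(quantity: int, unit_price: float) -> float:
    """Total price with a 10% discount on orders of 100 units or more."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

# Typical AI-drafted coverage: happy path plus boundary cases. Reviewers
# should confirm the boundary (exactly 100 units) matches the real rule.
assert volume_discount(10, 5.0) == 50.0    # below threshold: full price
assert volume_discount(100, 5.0) == 450.0  # at threshold: discounted
assert volume_discount(0, 5.0) == 0.0      # empty order
```

Even here the human reviewer adds value: an LLM may assert the wrong behavior at the boundary with complete confidence, and only someone who knows the domain will catch it.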
Document and communicate the architecture
Documenting and communicating architectural decisions and their implementation is a significant part of what architects must do. If the teams who support the system in the future don’t understand the decisions the team developing it made, their changes will cause the system to degrade. AI can help with this while the system is being developed, improving its sustainability over time. Some examples include:
- Using voice recording and transcription tools when the team is discussing architectural decisions, and summarizing these discussions. This can provide context in the future when architectural decisions are being challenged and may need to be updated or even reversed.
- Turning text into architectural diagrams and documenting existing diagrams.
- Producing standard code documentation, such as API descriptions.
- Updating design documentation to reflect what got implemented, and flagging areas of the implementation that deviate from the design. Sometimes the implementation corrects flaws in the design, but sometimes it drifts away from the design. Both are useful to know.
Understand interfaces to other systems
MVP implementations nearly always rely on other systems; no team ever starts completely from scratch. Because of this, teams can spend considerable time understanding the interfaces provided by those systems, especially when those interfaces are poorly documented. If those systems were created decades ago in an old language, such as COBOL, that few people still understand and use, this task may be almost impossible.
When developing an MVP, teams rarely have the time to do this. AI can help by scanning code and producing documentation, such as a description of the APIs of existing systems, including old and poorly maintained ones. AI can also help document potentially overlooked dependencies between systems, such as when one system calls another through an undocumented interface or implicitly through shared data. This improves the quality of the minimum viable architecture (MVA) and the chances of success of the MVP.
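A simple mechanical version of this kind of scan can be sketched with Python’s `inspect` module; an AI tool goes further by turning such raw signatures into readable prose. The legacy function here is an illustrative stand-in:

```python
# A sketch of mechanically extracting an API description from existing
# code -- the kind of scan AI tooling automates and then explains in
# prose. transfer_funds is an illustrative stand-in for a legacy API.

import inspect

def transfer_funds(from_account: str, to_account: str, amount_cents: int) -> bool:
    """Move amount_cents between accounts; returns True on success."""
    return amount_cents > 0  # placeholder logic

def describe_api(funcs) -> list:
    """Produce one signature-plus-summary line per public function."""
    return [
        f"{f.__name__}{inspect.signature(f)} -- {inspect.getdoc(f)}"
        for f in funcs
    ]

doc = describe_api([transfer_funds])
print(doc[0])
```

Signatures and docstrings are the easy part; the value an LLM adds is describing undocumented behavior and call relationships that no mechanical scan surfaces.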
Understand and manage the technical debt of the system
Effectively managing technical debt is critical for a sustainable system. Reducing technical debt itself is an inherently challenging trade-off and one that many organizations fail to manage. They are under constant pressure to increase the functionality in each successive, incremental MVP, but in doing so they also usually compound the technical debt related to the associated MVAs.
AI may help identify areas of the code that could increase technical debt. But because technical debt results from trade-off decisions made in the past, AI can only identify the most obvious instances of poor-quality code that may need improvement; it can’t make the trade-off decisions needed to effectively reduce or eliminate the major sources of technical debt.
Conversely, when making a new decision, AI can help summarize the technical debt associated with the decision and how it differs from the other options considered. If teams get better at documenting the technical debt they incur, through architectural decision records (ADRs), they can use that documentation to help drive future decisions about “paying down” the debt. That documentation can also include the examples provided to the LLM for future decisions.
Again, providing as much context as possible in the prompt will ensure that the LLM delivers relevant information. For example, the team could provide known examples of technical debt, such as using a synchronous API as a temporary solution instead of using an asynchronous interface as specified.
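One way to make such debt visible, both to future maintainers and to an LLM asked about it later, is to record it where it is incurred. Below is a minimal sketch of the synchronous-instead-of-asynchronous example above; the function names, placeholder data, and ADR identifier are all illustrative assumptions:

```python
# A sketch of recording incurred technical debt in code: a synchronous
# call standing in for a specified asynchronous interface. Names, the
# placeholder data source, and ADR-0042 are illustrative assumptions.

import asyncio

def fetch_inventory_sync(sku: str) -> int:
    """TECH DEBT (ADR-0042): temporary synchronous lookup.

    The architecture specifies an asynchronous, queue-based inventory
    interface; this blocking stand-in was accepted to hit the MVP date.
    Replace with fetch_inventory_async before scaling beyond one region.
    """
    return {"SKU-1": 12}.get(sku, 0)  # placeholder data source

async def fetch_inventory_async(sku: str) -> int:
    """Intended interface: non-blocking lookup."""
    await asyncio.sleep(0)  # stands in for real async I/O
    return fetch_inventory_sync(sku)

count = asyncio.run(fetch_inventory_async("SKU-1"))
```

Because the debt is documented next to the code and cross-referenced to an ADR, it can be pasted directly into a prompt when the team later asks an LLM to help weigh the cost of paying it down.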
Implement feedback loops
Experimentation, as already discussed, provides one kind of feedback that teams can use to improve the architecture of their systems. AI can also help them evaluate their decisions in other ways, through testing, architecture reviews, and automated code reviews. For example, AI may generate automated tests, or perform automated code reviews that flag issues for the team to evaluate.
Risk identification and mitigation
The architectural risks for a system are unique to its context. As with understanding QARs, AI can provide a generic checklist of risks and mitigations, but it can’t predict whether your system will experience them. Such a checklist can still be useful for brainstorming potential risks, provided that you give the AI relevant details about the system. Doing so may prompt the team to have a conversation about risks that leads to better decisions.
Conclusion
While software architects won’t be replaced by AI, they do need to learn how and where AI can help them make better decisions and achieve better trade-offs. Looking at the work that software architects perform provides insights into where and how AI can help. Focusing on the work involved in producing an MVP sharpens the picture: AI enables teams to relax some of the constraints they face in developing a sustainable architecture while still meeting the MVP’s time constraints. This frees teams to discover more creative solutions to architectural challenges.