Transcript
Thomas Betts: Hello and welcome to another episode of the InfoQ Podcast. Today I’m joined by Vlad Khononov. Vlad is a software engineer with extensive industry experience working for companies large and small in roles ranging from webmaster to chief architect. His core areas of expertise include software architecture, distributed systems, and domain-driven design. He’s a consultant, trainer, speaker, and the author of Learning Domain-Driven Design. But today we’re going to be talking about the ideas in Vlad’s latest book, Balancing Coupling in Software Design. Vlad, welcome to the InfoQ Podcast.
Vlad Khononov: Hey Thomas. Thank you so much for having me.
Balanced coupling is the goal, not no coupling [01:07]
Thomas Betts: So the title of your book, Balancing Coupling, and I think a lot of architects and engineers are familiar with the idea of wanting low coupling, we want to have our systems loosely coupled. But as your book points out, that’s really an oversimplification that we don’t want to have no coupling, we need to have a balanced coupling. So can you explain why that’s an oversimplified idea to say, we just want loose coupling everywhere?
Vlad Khononov: Yes. So by the way, loose coupling is okay. What I’m really afraid of is people saying, let’s decouple things. Let’s have completely independent components in our system, which is problematic because if you ask yourself, what is a system? What makes a system? Then the answer is: a system is a set of components working together to achieve some overarching goal. Now, in order to achieve that goal, it’s not enough to have those components, they have to work together. Those interactions are what make the value of the whole system greater than the sum of its components, the sum of its parts. And those interactions are what we usually call coupling. If you look that word up in a dictionary, coupled means connected.
So to make the system work, we need coupling. Now, of course, too much of a good thing is going to be bad. We need water, any living organism that we know of on this planet needs water to survive. However, if you’re going to drink too much water, well guess what’s going to happen? Nothing good is going to happen. Same with coupling. We cannot eliminate it because just as in the case of water, you’re not going to survive, a system is not going to survive. So we need to find that “just right” amount of coupling that will make the system alive. It will allow it to achieve that overarching goal.
Thomas Betts: I like the idea that if we add too much water, maybe that’s how we get to the big ball of mud, where everything is completely connected. And we can’t see where there should be good separations between those coupled components, you can’t see the modules that should be there that make the system understandable. And part of it is, we want to get to small enough modules that we can understand and work with and evolve over time without having to handle the entire big ball of mud, if you will.
If the outcome can only be discovered by action and observation, it indicates a complex system [03:35]
Thomas Betts: So that coupling itself, that’s not the problem. The problem really is the complexity. And I think people sometimes correlate the two that if I have a highly coupled system that everything’s talking to each other that’s causing the complexity. Can you distinguish where coupling and complexity are not always the same thing, one isn’t always the bad?
Vlad Khononov: Yes. That’s a great point. And the thing is, when we are designing the system, we need to find that “just right” amount of coupling to make it work. And if you go overboard, as you said, we’ll end up with that monster that we usually call “big ball of mud”. And that pretty much describes what we are afraid of, complexity. I guess anyone with a few years of experience in software engineering has that experience of working on a big ball of mud project that maybe it works, but nobody has the courage to modify it because you don’t know what’s going to happen following that change. Whether it’s going to break now or it’s going to break a week later after it was deployed to production. And what is going to break? And that relationship between an action and its outcome is my preferred way of describing complexity.
If you’re working on a system and you want to do something, and you know exactly what’s going to happen, that’s not complexity. If you can ask someone, and some other external expert knows what’s going to happen, that’s not complexity either. However, if the only way to find out the outcome of the thing you want to do is to do it and then observe what happens, then you’re dealing with a system that is complex, and that means that the design of that system makes those interactions much harder than we as people can fathom. We have our cognitive limits, our cognitive abilities, if you look at studies, they’re not looking good by the way. And it means that the design of that system exceeds our cognitive abilities, it’s hard for us to understand what’s going on there. Of course, it has something to do with coupling. However, it’s not because of coupling, but because of misdesigned coupling.
Thomas Betts: Yes. And then I think your book talks about the idea of sharing too much knowledge, that coupling is where knowledge is being transferred. And so the idea of cognitive load being exceeded, the knowledge that I have to have in order to troubleshoot this bug is, I have to understand everything. Well, I can’t understand everything and remember it all, so I’m just going to try and recreate it. And in order for me to try and recreate it, I have to have the full integration stack, right? I have to have everything running, be able to debug all the way through. And the flip side of that is somebody wants to be able to have that experience because they’re used to having the big monolith, the big ball of mud. They’re like, “I don’t understand it, so I’m going to just see what happens”.
Once they’re working in microservices, then they get to, “Well, I can’t actually step through the code once I send the request to the other call, how do I know what happens?” How do you help get people into that mindset of you’re making it better, but it’s a different shift of the paradigm that you can’t just run everything, but the benefit is you don’t have to know about it once it goes past that boundary.
Three dimensions of coupling [07:23]
Vlad Khononov: Yes. And that’s the thing about coupling, we are way too used to oversimplifying it. As in, “Hey, coupling is bad. Let’s eliminate all the coupling, that’s how we get modular software systems”. However, if you look at what happens when you connect any two components, when you couple any two components in a system, what happens beneath the surface? Then you’ll see that coupling is not that simple, it’s not one-dimensional. Actually, it manifests itself in three dimensions. As you mentioned, first of all, we have that knowledge sharing. You have two components working together. How are they going to work together? How are they going to communicate with each other? How are they going to understand each other? They need to exchange, to share, that knowledge.
Then we have the dimension of distance. If you have two objects in the same file, then the distance between the source code of the two objects is short. However, if those two objects belong to different microservices, then you have different code bases, different projects, different repositories, maybe even different teams. Suddenly the distance grows much bigger. Why is that important? Well, the longer the distance that is traveled by the knowledge, the sooner it’ll cause that cognitive overload. And we’ll say, “Hey, that’s complexity. We need to decouple things”. So distance is a very important factor when designing coupling.
And the third dimension is a dimension of time, of volatility because oh, why do we care? We want to be able to change the system. We wanted to change its components, their behavior. Maybe we will modify existing functionalities, maybe we’ll add new ones. For that, we want to make sure that the coupling is just right. However, if that is not going to happen, maybe because the component is a part of a legacy system, or maybe the business is not interested in investing any effort in that specific area, then the effect of coupling is going to be much lower. So we better prioritize our efforts on other parts with higher volatilities.
Distance and knowledge sharing are intertwined [09:49]
Thomas Betts: So I want to talk about that distance part first. I think that’s a new way of thinking of the problem because I think we can relate to, I’m going to separate this into microservices and that’ll solve my problem. And if you go back to the combination of how much knowledge is being shared, and how far away it is. Well, if I have all the code in my monolith, then the distance between the code is pretty low, right? I can change all the code all at once, but that also leads to a lot of complexity because I might not be able to easily see what code I need to change because there’s too much of it.
Now, if I take it into the microservices approach, I can say, I only need to change this. There’s only so much code to look at, I can understand it. But if I say, if I make a change here, I also need to make a change in this upstream or downstream service, that they have to know that I’m making a change. Then you’re saying that, that’s where the knowledge comes in, the knowledge being shared is tightly coupled. Is that a good explanation of what you’re trying to say?
Vlad Khononov: Yes, yes. That’s where complexity gets complex. Essentially, we have two types of complexities when working on any system. First, let’s say that you’re working on one of its components, and it is a small big ball of mud, let’s call it a small ball of mud. Then we could say that the local complexity of that component is high. We don’t understand how it works, and if we want to change something, we don’t know what’s going to happen. Now, there is another type of complexity and that’s global complexity, and this one is about the interactions on a higher level of abstraction. Say we have our component and other components of that system, and they’re integrated in a way that makes it hard to predict what changing one of the components is going to do, whether it’s going to require simultaneous changes in other components. So that’s global complexity.
The difference between the two, as you mentioned, is distance. And way back when the microservices hype just started, people wanted to decouple things by increasing the distance because previously we had all the knowledge concentrated in a monolith, let’s call it the old-school monolith. Everything in one physical boundary. Now, back then decoupling involved extracting functionalities into microservices, so we increased the distance. However, way too many projects focused just on that, on increasing the distance. They were not focused enough on, “Hey, what is that knowledge that is going to travel that increased distance?” And that’s how many companies ended up transforming their old-school monoliths into new shiny distributed monoliths. So they kind of traded local complexity for global complexity.
Coupling is only a problem if a component is volatile [13:04]
Thomas Betts: And that only becomes a problem when that third element, that third dimension of volatility rears its head. Because as long as those two things don’t change, the fact that they share knowledge over a long distance shouldn’t matter. But if one of those has to make a change and it has to affect the other one, now you’ve got the distributed ball of mud problem, that everything in two different services has to change. You actually made the problem worse by going to microservices. So that’s where all three factors have to be considered, correct?
Vlad Khononov: Yes, exactly. And that’s funny because all those companies that tried doing that, of course, they didn’t decompose their whole systems on the very first day of that microservices endeavor. No, they started with a small proof of concept, and that proof of concept was successful. So they said, “Hey, let’s go on. Let’s proceed and apply the same decomposition logic everywhere else”. Now, the difference is that a POC is usually done on something that is not business critical, its volatility is low. So you are kind of safe introducing complexity there. So the mistake was taking those less business critical components, extracting them, and thinking that they would achieve the same result with other components of the system. And of course, once you step into that distributed big ball of mud area, well, suddenly microservices became evil and people started praising monoliths.
Thomas Betts: Right. We didn’t understand what we were doing, we didn’t understand why we were trying to accomplish it. We thought the problem was “everything’s too close, we’ll solve it by just moving it apart”. But if you don’t factor in, how is the knowledge changing? How is the volatility affected? Because yes, that first one might work, it doesn’t matter if they’re close together in one monolith or separate. If there’s no volatility, if things aren’t changing, it doesn’t matter where it lives.
But once you get to, this is something that we’re going to be making changes to really quickly. Because that was the other thing that people said: if we go to microservices, we can make changes really quickly. And then maybe they make even more changes faster, but they run into all these issues where separate teams in separate modules and separate microservices are trying to change things all at once, and then they lead back to, we still have to have all this communication, or we have this major integration step that you just weren’t ready for because you did the thing wrong. When you make the move to microservices, you have to consider all three factors. What is changing? And if I know it’s going to change, what do I do differently then? Because obviously we still want to break those things up, but how do I say this is going to be a volatile module, it’s going to have core business, it’s going to be evolving? What’s the solution then? Because I want to be able to change it.
Distance affects where code lives as well as the lifecycle to maintain related components [16:22]
Vlad Khononov: Yes. That dimension of space, distance, is very tricky, and what makes it even trickier is that it has, let’s call it, sub-dimensions. So first we have that physical distance between source code. The greater that distance gets, the harder it is going to be to modify the two components simultaneously. So that’s one thing. We have another force that works in the opposing direction, and that’s lifecycle coupling. The closer things are, the more related their lifecycles. So they will be developed, tested, deployed together, if you have components implemented in the same physical boundary, for example.
As you go toward the other end, then you are reducing those lifecycle dependencies. And then we have sociotechnical factors: are those two components implemented by the same team, or do we have to coordinate the change with multiple teams? And suddenly the distance can grow even larger, and the lifecycle coupling will be reduced even further. So distance is super important, but as you mentioned, what makes it all, let’s call it painful, is that knowledge that is going to travel that distance.
Thomas Betts: Right. So if I know that this thing is going to be changing, in some ways those changes affect the knowledge that is being shared, right? If I’m adding new features and functionality, that means there’s more knowledge in this module. And if I have to communicate those changes, that’s the challenge. So the trade-off is, if I’m going to have more volatility in this module, I have to reduce the knowledge that’s being shared, reduce that integration strength of how tightly those two things are coupled. Is that a matter of defining good API boundaries, for example?
Vlad Khononov: Yes. So we have to manage that knowledge that we are sharing across the boundaries, we have to make it explicit. Now, the thing about knowledge is, as you said, the more knowledge we’re sharing, the more cascading changes will follow, because the more knowledge we share, the higher the chances that a piece of that shared knowledge will change, and then we’ll have to communicate that change to the other component, to the coupled component.
Four levels for measuring coupling [19:10]
Vlad Khononov: Now, how do we evaluate knowledge? What units should be used to measure knowledge? That’s a tricky question. It’s tricky, and I’m not sure we have an answer for that. However, what we do have is a methodology from the ’70s called structured design. And in it there was a model for measuring, or for evaluating, interdependencies between components of a system called module coupling. That model had six levels, and they were focused on the needs of systems that were written in those days. But essentially these levels describe different types of knowledge that can be exchanged across the boundaries of components.
In my model, in the balanced coupling model, I adapted module coupling and changed its name to integration strength. I had to change its name because the levels of the model are completely different because again, they have to be accessible to people working on modern systems. I reduced the levels to four basic types of knowledge to make it easier to remember them. And if you need finer-grained details, then you can use a different model from a different era called connascence to measure the degrees of those types of knowledge.
Intrusive coupling [20:47]
Vlad Khononov: So here are the four basic types of knowledge, from highest to lowest. First of all is intrusive coupling. Say you have a component with a public interface that should be used for integration; however, you say, “Okay, that’s fine. I have a better way. I will go to your database directly, pick whatever I need, maybe modify it”. In other words, intrusive coupling is all about using private interfaces for integration.
Once you introduce that dependency on private interfaces, you basically have a dependency on implementation details. So any change can potentially break the integration. So with intrusive coupling, you have to assume that all knowledge is shared.
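As a minimal sketch of the difference (hypothetical names, not an example from the book), here is the same lookup done intrusively versus through the public interface:

```python
class OrderService:
    """Owns its own data store; _db is an implementation detail."""
    def __init__(self):
        self._db = {"order-1": {"status": "paid", "total_cents": 4200}}

    # Public interface: the knowledge deliberately exposed for integration.
    def get_order_status(self, order_id: str) -> str:
        return self._db[order_id]["status"]

orders = OrderService()

# Intrusive coupling: reaching past the public interface into private state.
# Any internal change (renaming "status", swapping the storage) breaks this.
status_intrusive = orders._db["order-1"]["status"]

# Integration through the public interface: only published knowledge is shared.
status_public = orders.get_order_status("order-1")

assert status_intrusive == status_public == "paid"
```

Both calls return the same value today, but the intrusive one depends on every detail of how `OrderService` stores its data.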
Thomas Betts: Right. That’s the classic, if you have a microservice, you own your own database. And no one else is allowed to go there, they have to go through this boundary. And I like that you’re calling back to, these are papers written 50 years ago. And no one was talking about microservices there, no one was talking about having several databases, but it’s still the same idea; if I can structure this so that in order for this to go through, it has to go through this module. That’s why C++ evolved to have object-oriented design to say, “I have this class and it has behavior, and here’s public and private data”. And that’s what you’re talking about, if you can just get all the way through, there’s no point in having that public versus private interface.
Vlad Khononov: Yes. Yes. It’s funny, if you look at one of the books from that period, one that I particularly like is called Composite/Structured Design by Glenford Myers. And if you ignore the publishing date, it sounds like he is talking about the problems we’re facing today. It’s crazy. It’s crazy.
Thomas Betts: What’s the next level after that intrusive coupling?
Functional coupling [22:45]
Vlad Khononov: Yes. So after intrusive coupling, we have functional coupling. And here we’re sharing the knowledge of functional requirements. We’re shifting from how the component is implemented, to what that component implements, what is that business functionality? Again, that’s quite a high amount of knowledge that is shared by this type because if you share that kind of knowledge, then probably any change in the business requirements is going to affect both of the coupled components, so they will change together.
Model coupling [23:22]
Vlad Khononov: Next, we have model coupling, which means we have two components that are using the same model of the business domain. Now, DDD people will get it right away. But the idea is when we are developing a software system, we cannot encode all the knowledge about its business domain, it’s not possible. If you are building a medical system, you’re not going to become a doctor, right? Instead, what we are doing is we’re building a model of that business domain that focuses only on the areas that are relevant for that actual system. Now, once you have two components based on the same model, then if you have an insight into that business domain and you want to improve your model, then guess what? Both of them will have to change simultaneously. So that’s model coupling.
Contract coupling [24:17]
Vlad Khononov: And the lowest level is contract coupling. Here we have an integration contract, you can think about it as a model of a model that encapsulates all other types of knowledge. It doesn’t let any knowledge of the implementation model outside of the boundary, which means you can evolve the model without affecting the integration contract. You’re not letting any knowledge of functional requirements across the boundaries, and of course, you want to protect your implementation details.
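A small sketch of a contract as a “model of a model” (illustrative names and fields, not from the book): the internal model can keep evolving while the published contract stays stable.

```python
from dataclasses import dataclass

# Internal model: rich, free to evolve as the team's domain insight grows.
@dataclass
class Invoice:
    id: str
    total_cents: int
    risk_score: float  # implementation detail, never published

# Integration contract: exposes only the agreed-upon knowledge.
@dataclass(frozen=True)
class InvoiceDto:
    id: str
    total: str  # formatted amount; hides how totals are stored internally

def to_contract(invoice: Invoice) -> InvoiceDto:
    # Translation at the boundary: the internal model can change as long
    # as this mapping keeps producing the same contract.
    return InvoiceDto(id=invoice.id, total=f"{invoice.total_cents / 100:.2f}")

dto = to_contract(Invoice(id="inv-7", total_cents=4999, risk_score=0.2))
assert dto.total == "49.99"
assert not hasattr(dto, "risk_score")  # internal knowledge stays inside
```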
Examples of the four types of coupling [24:51]
Thomas Betts: Right. So just to echo that back. If you’re talking about, you said DDD people will get this right away. If I have a new invoice coming in that I want to pay, maybe I have an expense management system where somebody says, “Here’s a new thing to pay, I’m going to submit it to the expense management system”, and it has to go through an approval process to say, yes, it’s approved. Then all the way at the end we have our accounts payable person who’s going to log in and say, “Oh, I need to go pay this invoice, I have to pay the vendor”, right? There’s an invoice that flows all the way through the system, but if you say, “I need to know how is it going to get paid at the end, all the accounting details upfront”, it’s tightly coupled.
If you think about it from who’s doing the work, you might have the invoice request that starts in expense management, and then the paid invoice. And those ideas of, I have one model, but the words sound the same, but ubiquitous language says in this domain, that’s what this means. And I work on accounting systems, so the invoice, whether you’re in accounts payable or accounts receivable, we both have invoices, but they’re exactly the opposite. Am I going to pay someone or is someone going to pay me? And so ubiquitous language helps us reduce the cognitive load because I know in this space, I’m only talking about this part of the workflow because it’s satisfying this person, this role, they’re doing their job.
And so that’s going to the levels of coupling you’re talking about. The contract coupling says, I’m going to hand off from here, to the next, to the next, and I don’t have to know what’s going to happen a week from now with this because once it exceeds my boundary, I’m done with it. And the intrusive coupling is, they’re all editing the same database record and everybody knows about all the details. And somewhere above that is, I have to know that there’s this next workflow of pay the invoice versus submit the invoice, and everybody knows about those things. Is that a good example of how to see those different layers in there?
Vlad Khononov: Yes, absolutely. Absolutely. There are so many creative ways to introduce intrusive coupling. There are such interesting death-defying stunts we can pull. For example, maybe you’re not introducing a dependency, but you rely on some undocumented behavior, that’s intrusive coupling. Or maybe you’re working in, let’s say, an object-oriented code base, and a component that you are interacting with returns you an array or a list of objects, and then you can go ahead and modify it. And because it’s a reference type, it’s going to affect the internals of that component. So that’s another creative example of intrusive coupling. By the way, a reader of the book sent it to me. And I was like, “Oh, why haven’t I thought about it when I was writing the book? It’s such a great example”.
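That leaked-reference stunt can be shown in a few lines (a hypothetical `Cart`, not an example from the book): returning the internal list hands out a reference to private state, while a defensive copy keeps the boundary intact.

```python
class Cart:
    def __init__(self):
        self._items = ["book"]

    # Leaky: returns a reference to internal state, so a caller can mutate
    # the component's internals from outside (a form of intrusive coupling).
    def items_leaky(self):
        return self._items

    # Safer: a defensive copy keeps the internal list private.
    def items_copy(self):
        return list(self._items)

cart = Cart()
cart.items_leaky().append("oops")     # silently mutates Cart's internals
assert cart.items_copy() == ["book", "oops"]

cart.items_copy().append("harmless")  # mutates only the caller's copy
assert cart.items_copy() == ["book", "oops"]
```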
Modularity is the opposite of complexity [28:01]
Thomas Betts: Yes. Well, I think what you’re describing is, that’s the difference between the local and the global complexity, right? We think about these as microservices, I’m going to separate big modules out. But the same problems occur within our code base because even if you’re working in a monolith, you can structure… This is where the book talked about modular monoliths. You can set up your code, so even if it’s stored in one repository, you can make it easier to understand. And that gets to, this class doesn’t have to know about the 900 other classes that are in the project, I only know about the 10 that are close to me.
Vlad Khononov: Yes. Exactly. And by the way, it brings us back to the topic of complexity, or rather the opposite of complexity. So if complexity is, if we’re going to define it as the relationship between an action and its outcome, then modularity is the opposite. It’s a very strong relationship between an action and its outcome. So if we want to design a modular system, we want to be able to know what we have to change, that’s one thing. And the second thing is, once we make the change, what’s going to happen? That I would say is the idea of modularity.
Modular monoliths can reduce complexity [29:19]
Vlad Khononov: Now, how can we do it? How can we achieve what you described? Let’s say that you have a monolith that can be a big ball of mud, but it also can be a modular monolith. The thing is, the core ideas are the same. You can increase the distance, but you don’t have to step across its physical boundary. You can introduce distance in the form of modules within that monolith. You can put related things together, because let’s say you have one boundary with lots of unrelated things. And how can we define unrelated things? Things that are not sharing knowledge between them.
So if they’re located close to each other, then it will increase the cognitive load to find what we have to change there, right? So we can reduce the cognitive load by grouping related things, those components that have to share knowledge, into logical groups, logical modules. And that’s how we can achieve modular monoliths, which is, by the way, in my opinion, the first step towards decomposing a system into microservices, because it’s way easier to fix a mistake once you are in the same physical boundary.
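A toy sketch of that grouping (invented module names): related knowledge stays behind each logical module’s facade, so the distance stays short without crossing a physical boundary.

```python
# One codebase, two logical modules, each with an explicit public surface.
# In a real modular monolith these would be packages/namespaces.

# --- billing module -------------------------------------------------
class _TaxCalculator:  # private: this knowledge never leaves billing
    RATE = 0.2
    def tax(self, amount: float) -> float:
        return amount * self.RATE

def billing_total(amount: float) -> float:
    """Billing's public facade: the only entry point other modules use."""
    return amount + _TaxCalculator().tax(amount)

# --- shipping module ------------------------------------------------
def shipping_quote(weight_kg: float) -> float:
    """Shipping shares no knowledge with billing beyond plain numbers."""
    return 4.0 + 1.5 * weight_kg

# A caller composes the modules only through their facades: distance is
# short (same process), and the shared knowledge is kept minimal.
order_cost = billing_total(100.0) + shipping_quote(2.0)
assert order_cost == 120.0 + 7.0
```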
Thomas Betts: Right. You’re keeping the distance a little bit closer, you’re separating it logically into separate namespaces, different directory structures, but you’re not making a network call, right?
Vlad Khononov: Exactly.
Thomas Betts: That’s definitely increasing the distance. You’re not necessarily handing over to another team. You might be, but maybe it is still the same team just saying, “Hey, I want to be able to think about this problem right now, and I don’t want to have to think about these other problems”, and so let me just split the code. But that causes you as an architect designing this to say, “What makes sense? What do I move around? Where am I having the problem understanding it because there’s too much going on, there’s too much local complexity? Let’s look for that and figure out how do I increase the distance a little bit so that the knowledge that’s being shared stays within the things that are close”. And you start looking for, have I introduced distance while not reducing the knowledge, right? Because what you’re trying to do is have the knowledge transfer, that integration strength, go down when you’re adding distance, right?
If shared knowledge is appropriately high, then balance it with distance [31:45]
Vlad Khononov: Yes. Yes, absolutely. We always want to reduce integration strength; we always want to minimize the knowledge. But if you’re familiar with the business domain, you kind of know that, hey, here I need to use the same model of the business domain, here we have closely related business functionalities. So no matter how much you want to reduce it to the minimum, you can’t. You have to remain on that level of, let’s say for example, functional coupling. Once you observe that level of knowledge being shared, then you have to take it into consideration, and balance it with another dimension, which is distance. Don’t spread those things apart, because otherwise that’s going to create cognitive load, and as a result, complexity.
Thomas Betts: Right. And again, this is where the volatility comes into place. So if I’m focused on, let’s go from our big ball of mud to having a more organized modular monolith. Then I can look at, oh, where are we seeing lots of changes? Where’s the business evolving a lot and where is it not? And so I can now focus on, if we’re going to pull one service out, because let’s say we actually have scaling needs, we need to make sure that this part of the system can grow up to 10 times the size, but the rest of it, we don’t need to scale up as big. Those types of things you can look at, well, what’s volatile? And then if you pull it out of that monolith, you say, “I’m adding the distance, have I reduced the knowledge to a safer coupling level?” I haven’t kept that high integration strength, that you still know about my private methods and how to call my database even though I pulled you out because you haven’t actually done anything to solve the volatility problem, right?
Evaluating volatility requires understanding the business domain [33:35]
Vlad Khononov: And volatility, initially it sounds like something simple, the simplest dimension of the three. Oh my god, it’s not. It’s tricky, because to truly predict the rate of change of a component, it’s not enough to look at your experience, or at the source code, because we can differentiate between essential volatility, accidental volatility, and accidental involatility. Accidental volatility can be caused by the design of the system: things are changing just because that’s the way the system is designed. And accidental involatility can happen too. Let’s say that you have an area of the system that the business wants to optimize, but it is designed in such a way that people are afraid to touch it. And as a result, the business is afraid to touch it, to modify it, as well. So to truly, truly evaluate volatility, you have to understand the business domain. You have to analyze the business strategy, what differentiates that system from its competitors. Again, DDD people are thinking about core subdomains right now.
Thomas Betts: Yes.
Vlad Khononov: And once you identify those areas based on their strategic value to the company, then you can really start thinking about the volatility levels desired by the business.
Thomas Betts: You mentioned things happen internal and external, so the business might have, we want to pursue this new business venture, or this was an MVP, and the MVP has taken off, we want to make sure it’s a product we can sell to more people, but we need to make changes to it. So there are business drivers that can change the code, but there’s also internal things. Like I just need to make sure my code is on the latest version of whatever so that it’s not sitting there getting obsolete, and hasn’t gotten security patches or whatever. So some of those, the system’s just going to evolve over time because you need to keep, even the legacy code, you need to keep up to date to some standards. And then there’s the, no, we want to make big changes because the business is asking us to, right? So the architect has to factor in all of those things, as well as I think you mentioned the socio-technical aspects, right? Who is going to do the work? All of this comes into play, it’s not always just one simple solution. You can’t just go to loose coupling, right?
Balancing the three dimensions of coupling [36:13]
Vlad Khononov: Yes. It’s complicated. I’m not going to say that it’s complex, but it’s complicated. But the good news is that once you truly understand the dynamics of system design, it doesn’t really matter what level of abstraction you’re working on. The underlying rules are going to be the same, whether it’s methods within an object or microservices in a distributed system, the underlying ideas are the same. If you have a large amount of knowledge being shared, balance it by minimizing the distance. If you’re not sharing much knowledge, you can increase the distance. So it’s one of the two: either knowledge is high and the distance is low, or vice versa, distance is high but knowledge is low. Or things are not going to change at all, which means volatility is low, which can balance the other two altogether.
Thomas Betts: Right. So if you just looked at strength and distance, a lot of knowledge being shared over too long a distance looks bad. But if it’s never going to change, you don’t care. If it does change, then it’s not balanced. On the flip side, if it’s going to change a lot, then you need to think about the relationship between the integration strength and the distance. So if there’s not much knowledge being shared over a long distance, that’s okay, or if there’s a lot of knowledge shared over a small distance, that’s okay. So you can have one but not both, if things are changing. But if things aren’t changing, you don’t care.
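The trade-off described here can be sketched in code. This is a minimal, hypothetical illustration, not an example from the book: all names (Order, OrderValidator, publish_order_placed) are invented. The first pair shares a lot of knowledge, so it belongs at a short distance (same module); the second shares almost none, so a long distance (a remote consumer) is fine.

```python
# High integration strength, low distance: OrderValidator depends on
# Order's internal representation (the exact 'status' strings). That is
# a lot of shared knowledge, tolerable only because both classes live in
# the same module and will change together.
class Order:
    def __init__(self, status: str = "NEW"):
        self.status = status  # internal detail, freely shared nearby

class OrderValidator:
    def validate(self, order: Order) -> bool:
        # Knows Order's private vocabulary of status codes.
        return order.status in ("NEW", "PENDING")

# Low integration strength, high distance: a remote consumer only learns
# that "an order was placed", via a minimal, stable event contract that
# exposes no internals.
def publish_order_placed(order_id: str) -> dict:
    return {"event": "order_placed", "order_id": order_id}

is_valid = OrderValidator().validate(Order())
event = publish_order_placed("ord-42")
```

If either relationship is also unlikely to change (low volatility), even an imbalance between knowledge and distance costs little, which is the third dimension doing its balancing work.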
Vlad Khononov: Yes. And of course, things that are not changing today, maybe something is going to change on the business side tomorrow. And as an architect you have to be aware of that change and its implications on the design. The classical example here is, I am integrating a legacy system, nobody is going to change it, and I can just go ahead and grab whatever I need from its database, that’s fine. Another classic example is, again, DDD influence: some functionality that is not business critical, but you have to implement it, which in the DDD lexicon is usually called a supporting subdomain. Usually they’re going to be much less volatile than core subdomains. However, business strategy might change, and suddenly that supporting subdomain will evolve into a core one. Suddenly there is that big strategy change that should be reflected in the design of the system. So it’s three dimensions working together, and whether you end up with modularity or complexity depends on how you’re balancing those forces.
Thomas Betts: Right. And I think you got to the last point I wanted to get to, which is, we can design this for today based on what we know, but six months or six years from now, those things might shift because of things we can’t predict right now. And if you try to design for that future state, you’re always going to make some mistakes, but you want to set yourself up for success. So do the small things first. If it’s reorganizing your code so it’s a little easier to understand, that seems like a benefit, but don’t jump to, I have to have all microservices.
And I liked how you talked about how this can be applied at the system level, or the component level, or the code level. I think you described this as the fractal approach: no matter how closely you look, the same problem exists at all these different layers of the system. So that coupling and balance is something you have to look at in different parts of your system, whether inside a microservice or at the entire system level, and what are you trying to solve for at different times, right?
Vlad Khononov: Yes. And that’s, by the way, why I’m saying that if you pick up a book from the ’70s, like that book I mentioned, Composite/Structured Design, it looks way too familiar. The problems they’re facing, the problems they’re describing, the solutions they’re applying are also going to be quite familiar once you get past the terms used there, because those terms are based on languages like FORTRAN and COBOL. Yes, you need some time, some cognitive effort, to understand what they mean. But the underlying ideas are the same, it’s just a different level of abstraction that was popular back then. Not popular, that’s all they had back then.
Wrapping up [40:57]
Thomas Betts: So for listeners who want to follow up with you or learn more about your balanced coupling model, any recommendations of where they can go next?
Vlad Khononov: Yes. So on the social media front, I am most active on LinkedIn at the moment. I have accounts on other social networks like Bluesky, Twitter, et cetera, but right now LinkedIn is my preferred network. At the moment I’m working on a website called Coupling.dev, so if you’re listening to this, I hope that it is already live and you can go there and learn some stuff about coupling.
Thomas Betts: Well, Vlad Khononov, I want to thank you again for being on the InfoQ Podcast.
Vlad Khononov: Thank you so much, Thomas. It’s an honor and a pleasure being here.
Thomas Betts: And listeners, we hope you’ll join us again soon for a future episode.