Transcript
Michael Stiefel: Today’s guest is Lizzie Matusov, who is the co-founder and CEO of Quotient, a developer tool that surfaces the friction slowing down engineering teams and resolves it directly. Her team also co-authors Research-Driven Engineering Leadership, a newsletter that uses research to answer big questions on engineering leadership and strategy.
She previously worked in various engineering roles at Red Hat and has an MS in engineering sciences and an MBA from Harvard. And when she’s not thinking about building productive engineering teams, you can find Lizzie spending time around the parks of San Francisco.
Welcome to the podcast. I’m very pleased that you decided to join us today.
Lizzie Matusov: Thank you for having me.
Getting Interested in Building Productive Engineering Teams [01:39]
Michael Stiefel: I am very excited to have you here. And I would like to start out by asking you, how did your interest in building productive engineering teams get started, and what was the aha moment when you realized its relationship to software architecture?
Lizzie Matusov: Well, thank you so much for having me, and I love that question. So when I first started in software engineering, I worked at Red Hat, but I had a bit of an interesting role. I was a software engineer in their consulting arm, which basically meant that myself and a group of engineers would come into a company that has a problem they’re looking to solve, and then we would basically design the solution, the architecture, implement it, and then leave it with them and move on to the next project.
So I got to do that in biotech, in financial services, in an innovation lab. And the really incredible thing about that job was that every few months we would bring a team together, and we were not just building the architecture that we wanted to then execute on; the team dynamics were also new every single time. And then we would complete our work and go off onto the next project. So it taught me a lot about not just the technical challenges when you come into a company, but also the human challenges and how those are very important inputs into your ability to build software.
And then when I went to Invitae, which was the second company I worked for, I had a different experience. We were in-house, all working together, really getting to know our team dynamics, and we were finding that there were various areas where we were getting slowed down, and it wasn’t immediately clear to us that these were system challenges.
And then what we started realizing is that the impacts that were beyond software were the human elements, so how the teams collaborate with one another, their ability to prioritize deep work with collaborative work, the ability to document effectively and how that has downstream impacts on the next project that gets built. And so seeing those two different experiences was what planted that initial seed to me that thinking about productive engineering teams as more than the tools they use is actually the way to think about building the highest performing, happiest engineering teams.
Human Behavior and System Architecture [03:59]
And so I think that humans are a critical input into system architectures. When you think about those incredible architectural diagrams that you build and all of the considerations and latency and caching and all those things that we always come to think of first in software engineering, if you just zoom out one layer you realize that there’s an entire complex system of who are the humans involved, how are they able to execute, how do they change with the systems, how do we think about their own inputs and outputs? And that to me is just such an interesting lens on software development that affects all of us, but we don’t always take that layer upwards and think of it there.
Michael Stiefel: I think that’s a very interesting insight because I spent most of my career as a software consultant, and like you, I saw a lot of different places. And one of the things that I found, and I’m sure this will resonate with you directly, is that software projects, when they failed, rarely failed because of the technology. I mean, in a couple of cases, yes, there was a bridge too far and it didn’t work, but the overwhelming majority of the time it was like the Pogo cartoon: we have met the enemy and he is us. It’s that the humans just couldn’t do it.
So if we start to adopt that lens and look at architecture that way, what architectural processes or ideas would drive improvement in software delivery? Which implies the question: what is success, and who judges that? You talked about the humans, but humans are all the way through the process, from the end user to the actual people building the system. So how do you develop those architectural practices that drive success, however you want to define success?
What is Software Productivity? [06:01]
Lizzie Matusov: When you think about the definition of productivity, just the most basic definition, it’s outcomes over effort. And outcomes is, did we deliver the right thing for our customer? Did we do the thing that our customers needed in order to get value? And there’s all of the variables that play into that. And then you think about effort, which is how difficult was it for us to get there? And sometimes that math doesn’t work out and oftentimes it does, but that is actually the true definition of productivity.
And so when I think about the systems that play into this concept of productivity, I’m thinking about things like on the outcome side, are we pointed in the right direction? Do we have the right understanding of who our customers actually are? Did we validate that we’re solving the right pain points? Did we build the right thing in that direction? And then we think about all of these aspects that impact the effort. So what were our developer systems like? What was our tool set and were we able to work through it? And how are the humans working together to achieve that?
And I think there are the common things that we think of with productivity, like what is your deployment frequency, or how quickly code goes from development to production. Those things are important too, but actually many of those human factors play into both the outcomes and the effort in a much more dramatic way than we often even realize.
So that’s what I think about when I think of the core definition of productivity. I also think of the numerous frameworks that different companies and research institutions use. There’s a common one called the SPACE framework, which was developed by the team at Microsoft, and that takes a very holistic view of productivity. It stands for satisfaction, performance, activity, communication and collaboration, and efficiency.
And that’s a great way of looking at it, but an even simpler one is actually one that was developed by Google and it’s just three things: velocity, quality, and ease. How fast are we moving, how high are we keeping the quality, and how easy is it for us to work? So these are different frameworks you can look at that help you answer, are we getting more productive, are we building the right thing, and are we doing it at a level of effort that works for us?
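To make those dimensions concrete, here is a minimal sketch, assuming a simple Python representation; the metric names and thresholds are illustrative inventions, not anything prescribed by Google or the SPACE paper:

```python
from dataclasses import dataclass

@dataclass
class ProductivitySnapshot:
    """One team's reading across velocity, quality, and ease."""
    deploys_per_week: float      # velocity signal (system metric)
    change_failure_rate: float   # quality signal (system metric)
    ease_score: float            # ease signal (1-5 Likert, perception metric)

def flag_concerns(s: ProductivitySnapshot) -> list[str]:
    """Illustrative thresholds only; each team would set its own."""
    concerns = []
    if s.deploys_per_week < 1.0:
        concerns.append("velocity: shipping less than weekly")
    if s.change_failure_rate > 0.15:
        concerns.append("quality: more than 15% of changes fail")
    if s.ease_score < 3.0:
        concerns.append("ease: team reports friction in daily work")
    return concerns

print(flag_concerns(ProductivitySnapshot(0.5, 0.20, 2.8)))
```

The point of a sketch like this is that no single field answers the productivity question; it is the combination of system and perception signals that does.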
Michael Stiefel: And I presume when you talked about ease and efficiency, that also factors in: are you burning your people out, are you treating them like human beings or not? Because I’m sure you’ve seen it, and I’ve been there too, where you have management that treats developers as slaves or interchangeable parts, and that’s not a very pleasant place to work. You can hit the customer perfectly, but are you a decent human being as you’re trying to develop this software?
Team Morale: Is Your Engineering Organization a Profit or Cost Center? [08:58]
Lizzie Matusov: Yes, absolutely. And there’s this saying of how do you think of your engineering organization, do you align it as a profit center or a cost center?
Michael Stiefel: Right.
Lizzie Matusov: Exactly. I think the way that you align engineering will dictate how you think of the humans within that organization. So if they’re a cost center and you’re like, “Gosh, we got to decrease costs and get the most value out of this thing”, that’s where you often find these cases of really high burnout, really short-term-minded thinking, often not aligning engineers to the product work, because they’re just interchangeable parts and we’re just trying to get the cheapest and most cost-effective version.
Then you think of the profit center version, where engineers are really seen as driving revenue for the business. And the better you can align them on that, the more revenue you’ll achieve.
Michael Stiefel: When you say that, I was thinking of something I knew about early in my programming career. Fortunately I wasn’t at that place, but I don’t know if you remember or have heard of Ken Olsen and the Digital Equipment Corporation. But what they used to do, Olsen would set two teams to compete against each other to build some product, and the team that won got the bonuses, got the stock options, but the team that lost got nothing.
And you could imagine what the morale was in the team that lost. Because what you’re talking about is very often a projection of the leadership, and the leadership’s attitudes towards things get translated into engineering. It’s amazing to me how much people’s views of technology, people’s attitudes, rest basically on their outlook on the world. And that’s something you just sometimes can’t change.
Lizzie Matusov: I think about this concept a lot, and I often ask myself, why do we look at engineering with such a different lens than we might look at product or sales or marketing or even operations? Those organizations tend to value the more human-centric aspects of teamwork and thinking about the right goals and achieving them much more than we see in engineering.
And I think part of it has to do with the fact that it’s easy for us sometimes to get really caught up in the very, very deep weeds. And those weeds are things that a CEO probably does not know much about. I don’t expect them to know how to provision software or Kubernetes clusters, or to think about latency, or even to know what-
Michael Stiefel: And you hope they don’t think about those things.
Lizzie Matusov: Right. Yes, leave that to the experts in engineering. But I think sometimes we get really caught up in those things. And as engineering leaders and as an organization, we don’t always do our part in helping people translate that into the impact for customers.
Architecture and Satisfying Customer Values [11:57]
And I think that’s changing. I think there’s been a lot more conversation in the last few years about making sure that engineers can communicate their value to the customers, both from the perspective of, again, helping the company achieve their revenue goals and their overall company goals, but also to help build a little more empathy for what it looks like within the engineering organization, so that we can start prioritizing all of the factors that build into our ability to deliver that software, whether it’s our tools, the stability of our software, or the way that our teams are organized and how we’re set up to achieve those goals from a human side.
Michael Stiefel: I often think that the responsibility for that really lies with the architects. I mean, yes, it’s good if engineers understand business value, but someone very often has to explain that to them. And one of the goals or abilities of a good architect is to talk to the business people, understand them, and explain to them enough technology so they understand, “No, you can’t have this in three days”, or, “If you have it in three days, what you’re going to ask for in three months is not going to be possible”.
On the other hand, the architect also has to go to the engineering staff or the DevOps staff or whatever’s … and explain to them, “This looks stupid”. But actually, if you look at it from the broad point of view of business value or long-term success, it actually does make sense. Because I think sometimes it’s too much to expect that from an engineer. It’s so hard these days for an engineer just to master what they have to master to produce software. So yes, it’s good if they understand the business value, and I would encourage every engineer to talk to customers, but in my experience, if there’s no responsibility someplace, it doesn’t happen.
That’s what I like to think. The architect is the one that talks to the management, not only about the business value, but all these things about software delivery and teamwork and things like that, because they’re in the unique position to see the implications of all these things that maybe an engineer can’t see because they’re worried about, “What am I going to say at the daily stand-up?”
Lizzie Matusov: on my feature.
There is No One Measure of Productivity [14:41]
Michael Stiefel: Right, on my feature. Or I believe you’ve talked about how velocity sometimes can be very misleading in terms of what you’re actually producing, so it’s the architect that has this … And this is why I was interested in talking to you, because it’s the architect who can see this from a disinterested point of view. They’re responsible for things like security, for the user interaction at the highest level, because they’re very often the only ones who will see all the capabilities. And I don’t know if that corresponds to your experience or not.
Lizzie Matusov: I think that’s correct. I think it’s definitely very difficult at an individual contributor level to be able to understand all of the outside forces or the other areas that play into their ability to, again, achieve business outcomes at the right level of effort.
I think that the support staff, the folks that are really in charge of thinking about how to support the organization, of which the architects are such a critical piece, and engineering management or leadership has some role in that, those are the people that are in a really great position to understand all of those forces and to understand how to translate that into the changes that need to be made to support the organization. Now, what we often find is that people will start trying to look for the one data point or the one metric, let’s say, that matters-
Michael Stiefel: Silver bullet we used to call it.
Lizzie Matusov: The silver bullet. And we also know Goodhart’s Law. And we are all engineers at heart, so we know that if we pick a single number, we can all rally around changing that number while the core problem is still happening right in front of our eyes.
Michael Stiefel: When I was a junior programmer, the emphasis used to be how many lines of code you produced. And I guarantee, if that’s what you’re measuring, you are going to get lots of lines of code.
Lizzie Matusov: Oh, yes. If you’d had AI tools, you could have gotten tons more lines of code written for you.
Productivity and Performance Are Not the Same Thing [16:48]
Michael Stiefel: Which actually brings me to a point that I would like to explore, because I’m thinking of several ways this conversation can go. One thing is that I’ve always believed that the metric you use to measure productivity should not be the metric that you use to evaluate engineers’ performance, because that set up … Actually, I think this is a general rule in any part of society, but I think particularly in the software area, it sets people up for failure and sets organizations up for failure.
Lizzie Matusov: I think that you’re absolutely right. The research also validates that perspective. Unfortunately, looking at productivity and performance in the same light is very difficult to do.
Now, what you can think about is how do we understand the performance of the team and how do we align ourselves on achieving those performance goals? But you often find, particularly when it comes to IC engineers, the work of software development is so multidisciplinary that it is just impossible to pick a single number.
And I had this experience too, where I remember once being told that the number of code reviews I was doing was far below my teammates. And I was thinking at that time like, “Gosh, I’m tech leading a new project. I’m sitting over here with all of these other teams, working on basically what’s the system architecture so that I can make sure that they’re set up for success for this new key project. Should I stop working on that and just start working on approving people’s PRs to change the perception of my own performance?” And the answer is no. That’s not in service of the overall goals of the organization or the business, but unfortunately that’s the single metric that gets picked.
And so what we often tell people, and the research shows this, is, one, think about the team as the atomic unit and think about the ways that individuals are in service of that team. There’s a great analogy about soccer teams. You would never judge a goalie based on how many goals they scored, because the goalie is not supposed to … If the goalie is over there scoring goals, you’ve got a big problem. You think about how many goals the team has scored and what the roles of all of the players within that team were in achieving that overall outcome. And that’s how you should be thinking about software development teams as well: how are they, as a team, working together to achieve that overall outcome?
Michael Stiefel: And that’s another interesting analogy from another point of view, because sometimes if the goalie fails it’s because there are too many shots on goal, and that’s the result of the defenders not doing their job, and the goalie is just overwhelmed. So if you looked at the goalie and said, “This is a lousy goalie”, no, you really have to look at the team as a whole.
Real Software Engineering Research [19:36]
We have a tendency to look at the proximate cause of things and not the ultimate cause of things. You talk about research, and I think people should get a little insight into the fact that what you’re talking about is not just case studies or something like, “This is what I’ve seen in my experience and I’m extrapolating”. There’s actually solid research behind these findings.
And at the same time, before we get to some of the more detailed things: how can architects and software practitioners in general find out about this research, and understand that what you are trying to do is something people have attempted from day one, with maturity models and all kinds of things that we can talk about in a little more detail? But you, I think, are beginning to actually succeed in being able to ask interesting questions and in many cases actually answer them.
Lizzie Matusov: Yes, I think that research … It’s interesting, because when we think of research, we often think just of what’s happening in a very academic setting. And as practitioners we wonder, “Does that actually apply to us?” And it’s true, in a very academic setting it’s very difficult to recreate all of the real-world variables that make up a practitioner’s job in a software development organization.
But the research has expanded and evolved so much, and particularly in this frontier of productivity and achieving outcomes and the efforts involved with it, the research has really exploded. So you don’t just have the university lens, you have researchers at universities working with researchers at companies like Microsoft and Google and Atlassian and so many other organizations, to basically understand what are the factors that make the highest performing, happiest, most productive engineering teams, and what are the outcomes that come from making those changes?
So what we try to do in our work, and our company is very heavily rooted in the research, we work with researchers directly, we ingest all of their findings to make our own product better, but we also just think that fundamentally engineering leaders should have better access to that research. Now, I fully understand that it’s not always easy to ask engineering leaders to be sifting through Google Scholar, looking for the relevant paper, and then reading a 50-page analysis of the paper and the context and the findings.
And so for our part, we try using Research-Driven Engineering Leadership to make it much more easily accessible and digestible so that an engineering leader could say, “Hey, I’m thinking about introducing LLMs into my system. What are some of the considerations I should think about? Is it possible that I might be concerned about security?” And they can go and see, “Oh, actually there is a paper that looked at the most common types of bugs introduced by LLMs. Let me take that finding. Now, let me do the research and dig into the paper and figure out how to apply this with my team”. Instead of doing what we often do, which is to just figure out the problem as it’s happening to us in real time.
So we try our best to bring that research into light, and also to credit the incredible researchers who are out making these developments and making these findings that can actually help software development move much faster.
The Difficulty in Researching Complicated Societal Problems [23:07]
Michael Stiefel: We can actually make an analogy here, and let me see if this resonates with you. Because there is an element in our society, very important to us, that has this exact same problem. It’s the relationship of the practice of medicine to medical research. The human body is a highly nonlinear, not always in equilibrium, for lack of a better word, mechanism. I don’t want to get into metaphysical debates now, but just from this narrow point of view, a mechanism.
So you have people doing medical research in very well controlled … or even statistical research in epidemiology, which is sort of analogous to it. Because you should also make clear to people that there’s strong statistical evidence that you apply to this. In other words, there’s a certain amount of rigor. Because too many of us, when we hear about engineering research, we say, “Ah, more BS from the …”. But there is a strong basis to what you do.
But this is the exact same problem you have in medical research. You have the clinician who has a much more complicated patient. In fact, I can even drill down even further on this. Medical trials show that drug X can improve condition Y. But with medical trials, when they test drug X, they make sure that the person who they’re running the trials on, generally, has no other conditions besides condition Y.
Lizzie Matusov: Correct.
Michael Stiefel: But when you come to the clinician, they have people who have conditions, A, B, C, Y, E, F, so they have a much more complicated problem than the one of the researchers …
And this is in some sense, I think, if you like this analogy, the same problem that software development research has, and that you’re beginning to come to grips with: taking theoretical research done under isolated conditions and dealing with it in the very messy world of software engineering.
Lizzie Matusov: Yes, that’s exactly right. And I think there are many branches of research now that are starting to really get at that. And so those are the studies that we often love to find, where they acknowledge and embrace the complicated nature of it instead of just isolating for a single condition that will never be the case in the real world.
Michael Stiefel: It took medical science a long time to develop the way this interaction … And it’s still not perfect. As we saw during the pandemic, all the pieces don’t always fit together. But perhaps, I’m just throwing this idea out there, that the people who are doing the kind of research you do, can look and see how the medical world addressed and solved these types of problems. Obviously human beings are more important than software, but that doesn’t mean that the abstractions and the problems can’t shed light on each other.
Lizzie Matusov: Yes, that’s true. And I think also one of the interesting things I’ve noticed, ingesting all of this research and getting to know some of the various styles, is that we’re definitely moving more into a world where, again, we embrace those imperfections and we still allow for those findings to drive practical applications.
It’s a very fair argument to say that sometimes … For example, only relying on perceptual data to help form a trend or a finding about how software engineering teams should work, maybe it’s imperfect because maybe there’s other data that suggests something different. But it still has practical value, and oftentimes what we’ve actually found is that that perception data is a stronger signal than what you’re seeing in some of these system metrics.
And so I think what I’m noticing much more over the years is that we are allowing for those imperfections to still be there and to acknowledge them, and we have limitation sections for a reason, but to still be able to extract value from the findings, as opposed to being so hung up on, again, the messiness that might create a confounding variable. We acknowledge, we address it, but we move forward.
And a great example of this actually is that there was a study recently that was done trying to model the types of interruptions that happen in software engineering teams, and what are the different types of interruptions and the complexity of them. So if it’s a virtual interruption, like a Slack ping pops up while we’re talking to one another, versus a coworker opens the door and has a quick question for you, versus a boss that comes in and asks if you’re free for five minutes, and they basically did all of these studies. And one of our thoughts was they only studied probably about 20 to 25 people in this, so there’s definitely a fair argument that maybe the exact percentages aren’t correct because it’s a small sample size.
But what was so interesting is that we actually published the findings of that study and gave people the links. This one actually blew up on Hacker News because people read it and said, “Yes, that’s my experience. That is exactly what happens to me”. And so you just get pages and pages of validation from engineers who have lived that experience. And so it is a little bit imperfect, but it represents people’s experiences and you can extract those findings to help improve your team dynamics as a result.
Finding Good Enough Research [28:44]
Michael Stiefel: And again, I think what the software world has to accept is the fact that this research is a story in progress. We’re not discovering Newton’s laws of gravitation, which incidentally were shown to be an approximation by Einstein’s general theory of relativity, but you are building useful models.
My favorite example of this is Ptolemy’s model of how the planets and the Sun went around the Earth, which worked until it became too complicated. Then the Copernican model is the one that we use today, and it’s very useful, but in point of fact it’s actually untrue, because according to the general theory of relativity spacetime itself is curved. But the Copernican model is good enough for everything we need to do.
So what we’re looking for, and this is also … good enough models, good enough research, and not to critique it from the point of view of, well, maybe against some absolute truth it’s not right. As you experienced with the interruptions study, that was good enough research.
Lizzie Matusov: It’s true. And again, there’s value to be had in it and we should absolutely be looking for the outliers in cases where these models don’t perform, but that doesn’t mean that we should discredit all the work. And so we are starting to see much more of a broad spectrum of research.
Expecting More of Software Engineering Research Than Other Areas of Research [30:20]
Now, I will say, another interesting finding from spending more time with the research is that there are different bars depending on the different types of institutions and publications. For example, if you want to be published in Nature, it’s a very, very structured process with a lot of peer review, a lot of data. And I think that bar is really important, especially for the nature of the publication. Sometimes you get into these cases where there’s research where, again, they look at 300 engineers or they only look within one company. And you can say that that’s an obvious limitation of the study, but there’s still value to be had from those insights. And I think those are important.
I do sometimes see cases in which there are papers that are published that just don’t even meet that bar for good enough, but they’re still out there and they’re still being used, and sometimes they’re antithetical to the myriad of other findings that you might see. And so it’s a spectrum. I think you do have to apply a layer of judgment. We try our best to understand the researcher’s perspective on when research should be considered relevant versus when the confounding variables may be too great to consider the finding. But that’s something that you do have to build up a muscle for. And if you see a study with only eight engineers, all of one specific demographic, that claims to prove a generalized point, you might want to ask yourself a little bit more about that.
Michael Stiefel: Right. But the point is this is a general problem in research. This is not just a problem in software engineering.
Lizzie Matusov: Absolutely.
Michael Stiefel: You see this across the board. And sometimes that study, even though N equals 15, might be worth doing because it shows something. So now it’s worth doing the N equals 200 or the N equals 500 study.
Lizzie Matusov: It’s the signal.
Michael Stiefel: Where you can’t do the N equals 500 study, unless you have the N equals 20 study that shows maybe there’s something here.
Lizzie Matusov: Absolutely. I agree.
Michael Stiefel: So I think in some respects we’ve put too big a burden on software engineering research and asked it to be something that we don’t even ask engineering research or medical research to be, because these are all messy things.
What are the important findings for an architect or an engineer to look at? I mean, I’ll start it off by saying that when I was first studying software engineering, which I actually taught in academia for a while, there was a great deal of emphasis on maturity models, which to me, as I think about it, is problematic.
Let’s say you have a child. That child’s going to evolve and develop. Well, you don’t point to them and say, “Well, this is what a mature adult does”. Let the child evolve. And you have an idea what maturity is in the back of your mind, but you have to look at the particular child, the particular environment, the analogy here being the particular engineer, and the particular company in the particular industry, and decide what is important for that. And not some abstract model that says you are on level two now and to get to level three, you have to do X, Y, and Z.
Structures and Frameworks for Explainability [33:48]
Lizzie Matusov: That’s a good question. Thinking about this audience, what I really love and gravitate towards is structure and frameworks that can create some sort of explainability around humans as a piece of the software architecture puzzle. And so in 2018, with the Accelerate book, that’s Dr. Nicole Forsgren’s book, this framework of DORA really became quite popularized. DORA actually stands for the DevOps Research and Assessment group. They’re now part of Google. So when we talk about the research that’s coming out of these giant companies like Microsoft and Google, a lot of that comes from the DORA research group that’s now embedded there.
And basically what this book popularized were four metrics that help you understand the overall health of your systems, the four DORA metrics. Those four metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service. And that was a really great way of thinking about the health of your overall systems.
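As a rough sketch of how those four metrics fall out of ordinary delivery data, here is a small Python example; the event records and field names are hypothetical, and a real pipeline would pull them from CI/CD and incident tooling:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment and incident records for a two-week window.
deployments = [
    {"merged": datetime(2024, 5, 1, 9),  "deployed": datetime(2024, 5, 1, 15), "failed": False},
    {"merged": datetime(2024, 5, 3, 10), "deployed": datetime(2024, 5, 4, 11), "failed": True},
    {"merged": datetime(2024, 5, 8, 14), "deployed": datetime(2024, 5, 9, 9),  "failed": False},
]
incidents = [
    {"opened": datetime(2024, 5, 4, 12), "restored": datetime(2024, 5, 4, 15)},
]
weeks = 2

deployment_frequency = len(deployments) / weeks
lead_time_for_changes = median(d["deployed"] - d["merged"] for d in deployments)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
time_to_restore = median(i["restored"] - i["opened"] for i in incidents)

print(f"{deployment_frequency:.1f} deploys/week")
print(f"median lead time for changes: {lead_time_for_changes}")
print(f"change failure rate: {change_failure_rate:.0%}")
print(f"median time to restore: {time_to_restore}")
```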
What’s actually interesting is that in that book, Dr. Forsgren also talks a lot about the social dynamics that play into the overall performance of an engineering team, things like psychological safety, for example. But those were never placed into that framework in a way that was easy to digest. And so what we’ve seen is that over the years, the four DORA metrics have exploded in popularity, but they haven’t really addressed that human element and the role that humans play in the performance and productivity of the products that they build and the work that they do.
Using the SPACE Framework – Relating Architecture and Team Performance [35:33]
Then in 2021, again Dr. Forsgren and other researchers at Microsoft came out and put forth a new research-backed framework, the one that I mentioned earlier: the SPACE framework. Again, that stands for satisfaction, performance, activity, communication and collaboration, and efficiency. And the idea is that these five categories are meant to help you understand and create a framework for understanding the overall productivity of your engineering team. And going back to my earlier definition of productivity, it helps you understand: are you delivering value to your customers, and what’s the effort of doing so?
And so I think as a research paper and a framework, this is a brilliant one for architects, for engineering leaders to really think about, because it embeds the complexity between the performance of the systems and the collaboration of the teammates, or the efficiency of your workflows and the satisfaction and ease of how your teams do their work. And the more you spend time with it, the more you realize that this framework does a fantastic job of combining all the pieces that we intrinsically know impact software engineering, but maybe haven’t been able to put forth in a systems-like way.
Michael Stiefel: You know, a lot of developers and architects are very detail oriented. And when they listen to some description of a framework, and they’ve heard about frameworks before, could you give an example of how you would use this framework to come to a very specific, results-oriented point of view? Because you’re dealing with some people who are very anal-retentive, and once they hear “frameworks”, their minds start to glaze over.
Lizzie Matusov: Yes. It’s a great question. You can really think about it in a number of ways. One of the interesting details of the SPACE framework is that it is specific enough to help you see the different interventions, but it is also vague enough that each team can find their own way to apply it in a way that makes sense for them. So I’ll give you an example.
Let’s say we’re trying to understand the overall productivity of an engineering team. We have some suspicion that things are not working well, that there’s some friction. The natural tendency is to look at a single metric, let’s say velocity. Well, velocity is not necessarily a great metric to look at. You can think of it quite frankly as an activity metric, like how many check-ins did we get in a single week? And so what the SPACE framework would ask you to do is, okay, consider velocity as a single metric in one category, but then start looking at other things. Look at the overall performance of the system. Is the system highly stable? And when you quickly deliver features to customers, are they able to use those features because the software is well-designed for that?
And then you can think about, okay, what about satisfaction? Maybe we can look at the satisfaction with the code review process. Perhaps we’re moving really quickly, but the team is like, “We keep skipping all these checks and I’m miserable, and it’s going to come back to me, and our team is going to be finding all these bugs because we’re skipping all these important steps”. So then you would want to look at a metric like the team’s satisfaction with code review. And then you might want to look at something else, like the team’s ability to collaborate. How distributed are the code reviews? Are we moving really quickly because there’s actually only one engineer reviewing all of the code? The other six engineers never get that opportunity for knowledge transfer, and so if that one engineer quits, suddenly the team’s overall performance just sinks into the ground.
And so those are examples of ways that you can use those five dimensions of SPACE, and start asking questions about what the factors are that are impacting your team and start applying that to getting to the root cause of how to improve your team’s productivity.
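As one concrete illustration of that collaboration question, here is a minimal sketch, with a made-up review log, of checking how concentrated code reviews are on a single engineer:

```python
from collections import Counter

# Hypothetical log: who reviewed each of the team's last 40 pull requests.
reviewers = ["ana"] * 28 + ["ben"] * 5 + ["chen"] * 4 + ["devi"] * 3

counts = Counter(reviewers)
top_reviewer, top_count = counts.most_common(1)[0]
top_share = top_count / len(reviewers)

print(counts)
print(f"{top_reviewer} handles {top_share:.0%} of all reviews")
if top_share > 0.5:
    print("collaboration risk: review load and knowledge concentrated on one engineer")
```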
Michael Stiefel: You use a Likert scale to quantify these things?
Lizzie Matusov: Yes, highly recommend a one to five point Likert scale for those perception metrics. And also highly recommend making sure perception metrics are a part of your analysis just as much as system metrics are.
Michael Stiefel: In other words, what you would do is you’d come up with a set of questions or a set of dimensions, you’d assign a Likert scale to each one of those, and then you’d look at the system metrics and do some statistical analysis to relate all those.
Lizzie Matusov: Exactly. And you might find that in this overall picture there’s a clear challenge or issue. For example, maybe the team is moving really fast, but what we’ve identified is actually that the quality of the output is causing more problems on the other side. So they’re not able to achieve that performance for their customers because they’re moving too fast and breaking things.
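A minimal sketch of that kind of analysis, assuming per-team averages of a one-to-five Likert question alongside a system metric, and using a rank correlation from SciPy; the numbers are invented for illustration:

```python
# Requires: pip install scipy
from scipy.stats import spearmanr

# Hypothetical per-team data: mean Likert score (1-5) for "satisfaction
# with code review" and median review turnaround time in hours.
satisfaction = [4.2, 3.8, 2.1, 4.5, 2.9, 3.3, 1.8, 4.0]
review_hours = [5, 8, 30, 4, 22, 14, 41, 6]

rho, p_value = spearmanr(satisfaction, review_hours)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A strongly negative rho would suggest slower reviews go hand in hand
# with lower satisfaction; with samples this small, treat it as a signal
# to investigate, not proof.
```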
Michael Stiefel: And presumably, after you have that finding, you might want to do another one to drill down. See, what I want to get people to understand is that this can be quantitative.
Lizzie Matusov: Absolutely.
Michael Stiefel: This is not just qualitative.
Lizzie Matusov: Absolutely.
Michael Stiefel: Because in engineers’ minds, qualitative equals BS.
Lizzie Matusov: Absolutely.
Software Engineering Research Uses Tools Developed in Other Areas of Research [41:04]
Michael Stiefel: So what I’m trying to get across to the listeners is that well-understood statistical techniques that people use in other areas of research are now coming to bear on the problems of software engineering and software development. So this is not something that you’ve invented-
Lizzie Matusov: No.
Michael Stiefel: … but you’ve borrowed.
Lizzie Matusov: Absolutely. And it’s, again, as you said, applied in so many different industries. I think we, as systems thinkers, often focus maybe a little bit too much on looking at just the telemetry data. But as engineers, we ourselves might know that that can give us signals on the what, or give us certain symptoms, but we need to get to a root cause analysis. And when we’re doing a root cause analysis, we need the more complex factors that might play into, let’s say, the software development life cycle.
I also really loved what you said earlier about coming into it with a set of questions and then thinking about which dimensions you should be looking at in order to answer those questions. That is exactly how productivity should be looked at: what question are we trying to answer, or what goal are we trying to achieve? Do we want to increase the stability of our software? Do we want to help our engineers deliver more under easier circumstances? Do we want to reduce burnout? Do we want to keep our retention high? Then, what questions should we be asking ourselves? And then how do we look at both telemetry data and perception data to create that overall picture and get those findings?
Michael Stiefel: And actually, what you’re describing in some sense is the scientific method, because you have a hypothesis and you think of a way of confirming or denying the hypothesis. But the step before that a lot of people don’t understand is that you actually build a model.
Lizzie Matusov: Correct.
Michael Stiefel: You can’t ask a hypothetical question or have a hypothesis without having a model of, let’s say, in this case, what the software development process is. And sometimes what you find out, as you examine these questions, is that your model is wrong.
Lizzie Matusov: Yes.
Michael Stiefel: And this is the problem I have with maturity models or any kind of abstract model that people have imposed, whether it’s agile, whether it’s waterfall, for example. And I don’t know if you realize that no one ever really advocated for waterfall; the original paper presented it as a devil’s advocate approach. But whatever approach you want to have, they all embody a model, and this model has come about because you generalize people’s experience.
For example, I’ve heard, and I don’t want to go down this rat hole but just to give an example, people have criticized Agile on the grounds that the original people who signed the Agile Manifesto were world-class programmers, and they could probably work on any system and make it work. So in other words, you develop a model, you develop hypotheses and questions, you run experiments, you do analysis. And not only do you have to look at whether your hypothesis is right or wrong, but you have to look at whether your model of the software development process is correct.
And then once you do that, you can start to get at some of these gnarly questions about where agile is appropriate, where Scrum is useful, where … whatever these things are.
Lizzie Matusov: Yes. Or what’s our process and where are we being suboptimal, where are the edits that we can make? Exactly.
Michael Stiefel: This is just the early days as we like to say, but I think there’s a great deal of promise in what you’re trying to do, and people should not look at it through the eyes of this trick never works.
It’s like, for example, people criticizing engineers … People should read, and I’ve done this, scientific papers from the 16th and 17th centuries, or engineering papers from the 18th century, and see how they tried to calculate the trajectories of artillery projectiles when they didn’t have the differential calculus. So in some sense, this research is only going to get better.
Lizzie Matusov: Absolutely.
Michael Stiefel: And you are in the early days, and it’s like a plant that you have to water and flower in order to get where you want to go.
Lizzie Matusov: As you said, we’re finding the models as we learn more, as we ask more questions, as we study more.
Michael Stiefel: And hopefully in the end we’ll be better for it.
Research and The Architectural Process [45:48]
What I want to do now is look at some of the findings that you’ve found and relate them to the architectural process. One of the findings that I believe I read about was the idea that you should define your architecture first and then worry about how you organize your teams. I know what Conway’s law is. I mean, it’s drilled into all of us. I remember reading it when I read The Mythical Man-Month way back early in my career.
Lizzie Matusov: Yes.
Michael Stiefel: The book Accelerate talks about the Inverse Conway Maneuver, where you actually define your architecture and then you evolve and define your team.
Lizzie Matusov: I mean, the Accelerate book does a fantastic job of talking about all of these concepts and really asking these deep questions. Accelerate broadly brought forth this way of thinking about the complex systems of software development, and that curiosity mindset that allows us to challenge things: yes, what is the right way to think about Conway’s law, or, not necessarily in Accelerate, how do we think about Scrum versus agile, how do we think about systems health versus team health? I think that Accelerate did a really phenomenal job of that.
The Limits of Current Software Engineering Research [47:12]
Michael Stiefel: The question then becomes what kind of teams, what kind of companies are ripe to use this type of research? Do they have to evolve to a certain point, or can any company … I mean, for example, if you are dealing with a system that’s what we call the big ball of mud, how do you start to think your way into this productivity or performance point of view?
Lizzie Matusov: It’s a great question, and I actually think it’s not for every company. There are definitely cases in which I wouldn’t recommend using this type of framework to think about your systems and the people that are organized around them.
One example of that is when you’re a really early stage startup and you are just thinking, “How do I deliver value?” It is not about optimization at this point, because you have not yet earned the privilege of optimizing your systems. So if you’re a team with, say, your first 30 employees, focus on getting to product-market fit, for example. Another example where it’s not a great fit is when you have these gigantic organizations where technology is maybe a supporting part of their work, but that technology is not really evolving or advancing and it’s really just in maintenance mode. There isn’t a desire internally to develop that technology further, so it’s kind of in that static area. Not a great fit for this type of systems thinking.
Where it is a great fit is when you have this mindset and desire to iterate, to improve the core system that you’re working on. Maybe you have a system architecture and you have a team architecture and you’re trying to deliver 15% more value to your customers next year. And you’re starting to think about, “What are the levers that I can pull so that we can achieve more value for our customers now?” If you’re in that type of mindset, I think this is a really important thing to think of.
And I think what’s more interesting is that up until about 2022, the common way of resolving the question that I just posed was to add more headcount. Just get 10 more engineers. All of us engineers know that there’s a cost to that as well, a huge cost, but it is a very attractive thing to consider. Nowadays, we think a little more carefully about it. We’re in a different market environment and we need to think about, quite frankly, efficiency. And so what we’re actually finding now is that organizations that are looking to increase the amount of value they deliver to their customers, maybe through a more performant system, maybe through more features, maybe through a more highly available system or more R&D on the side, those companies are starting to look at the whole picture, which is not just, “How many humans do we have to do the work?”, but also, “What are the processes in place, and which of those processes are suboptimal for us right now?”
And for those types of companies, I think this is a really great fit. But I’d say if you’re on a highly stable system that is not likely to change, or if you’re in that zero-to-0.5 stage where you’re still just trying to mature the product, this is not really where you should look.
Team Effectiveness Is Built In, Not Added On [50:42]
Michael Stiefel: Interesting, interesting. What I’d like to look at right now is where architects and engineers can find this research. Because one of the things that the research talks about is, for example, how loose coupling not only allows a more flexible system, but actually contributes to team productivity, because teams can get more done. So the architect has to think … And very often a lot of architects don’t do this. There’s an old saying, I forget, I think it was Zenith, the television manufacturer, whose slogan was, “Quality is built in, not added on”.
So for the architect, team happiness, team effectiveness, things like security, have to be built in; they can’t be added on. And in fact, this goes back to what I’ve believed for a long time, that one of the responsibilities of the architect is to deal with the things you can’t write a use case for: security, scalability, the total user experience. And this is another one, team effectiveness, team productivity, because if you don’t think about this from the get-go it’s not going to work.
Let’s put out an example of … You talked about how one of the things that contributes to team productivity is the ability to put stuff out relatively frequently. They see satisfaction, they see things being accepted, there’s a positive emotional feedback loop. You feel like you’re checking things off and you’re going someplace. But if the architect doesn’t build the system with the appropriate set of coupling, and with the proper integration environments or the lack thereof, because requiring a shared integration environment means you have a bottleneck that gets in the way of doing things, you then have to redesign this from scratch.
Lizzie Matusov: It’s true.
Michael Stiefel: You have to write interfaces to allow the separation of concerns that allows the teams to operate independently and let the teams do their own things, whether it’s picking their own tools or hiring the people they need.
Lizzie Matusov: I think that there’s a very important point there, and I’ll slightly modify what you were saying. I believe it is so critical, and I think architects do a really great job of zooming out and thinking, as they should be, about how the system is going to be impacted in five years, 10 years, 15 years, and all of that.
But we’ve also probably all been in situations where we walk into a system that isn’t optimal for the modern time. I even think back to when I was working at Red Hat, and the thing of the moment was microservices. It had blown up, it had just blown up. And it was like, “Okay, well what do we do with all these monolithic applications that have been developed for the many, many years prior?” And even now we’ve evolved away from microservices into right-sized services. Not everything has to be a single microservice.
And I think those evolutions in how we understand technology, they happen and we see sometimes that happens on a five-year time horizon, sometimes that happens on a 10-year time horizon. What I think is really powerful about this framework is if you walk into an organization or a team or a system architecture that is suboptimal for the world you live in today, you can actually use this framework to help you iterate in the right direction towards making that change.
So the best case is always to have something in place when you’re building from the ground up, to have that architect that has that five, 10, 20-year time horizon and can anticipate those changes. But you can also have a case where you’re walking into a suboptimal system that you want to improve upon, and you can actually use the same type of principles to understand, “Am I making my investments in the right areas of the system, or should I be looking elsewhere based on what the data tells me?”
Where To Find The Research [55:13]
Michael Stiefel: Where would people find some of this research on a continual basis to see how findings come out? Where would they look?
Lizzie Matusov: Great question. There’s a couple of things that I can offer. One, for the very curious mind who wants to really just dive right in over a whole afternoon, honestly, the best place to start is Google Scholar. You can actually use scholar.google.com, start looking at the topics that matter to you, whether it’s software engineering productivity, system design, anything of the like, and you can start looking at the relevant research that’s been coming out. There are waves of time where more or fewer papers come out. For example, there’s an organization called IEEE, and they have annual conferences. And so you’ll actually see a wave of new papers come out right before the deadlines for those conferences. That’s one place you can look.
If you maybe don’t have the time but are looking for that continual stream of research to be delivered to you, my plug is to check out Research-Driven Engineering Leadership. The idea is that every week we cover relevant recent papers that answer tough questions in engineering leadership. And that could be anything from what the overall performance improvements are that various LLMs bring to your system, to what considerations of the hybrid work environment software engineering teams should be thinking about that are specific to how they work. So it covers the full spectrum in thinking about the impacts on software engineering and answering some of those thorny questions that don’t really have a single clear answer yet.
Michael Stiefel: We can put where to find these things in the show notes so people can get at them. This has been fascinating to me, but we only … I only have so much time for comments. I could talk about this for hours.
Lizzie Matusov: Me as well.
The Architect’s Questionnaire [57:04]
Michael Stiefel: But I like to emphasize the human element of things. I like to ask all my guests a certain set of questions. And I would also like to ask you, how did you realize the relationship between architecture, teams, and leadership?
Lizzie Matusov: Good question. I think that many of us who spend time in engineering start to see that inkling where those human aspects, like leadership and these social drivers like autonomy and dependability and purpose, start to play into our work more than we expected when we first walked in. You first walk into your software development job and you’re like, “I got my tools. I’m going to develop software, and that’s the job I came here to do”.
And then you start seeing, “Oh gosh, my work is really dependent on getting clear requirements”. Or, “Gosh, when I have a teammate that I just can’t collaborate with, I can’t get my work done”. And then you start realizing it’s about so much more than just the code that you’re writing. And so I had that natural evolution where I went from thinking very independently about the work that I was doing to thinking about the team and the system around me, to then realizing how impactful leadership, the team around me, the constructs of our system are in helping us achieve the work we set out to do for our customers. So that was my evolution.
Michael Stiefel: In other words, reality hits you in the face.
Lizzie Matusov: Yes. Slowly, slowly, all at once.
Michael Stiefel: Yes, yes. I think it was F. Scott Fitzgerald, or it was Hemingway who said, “Bankruptcy happens gradually and then all at once”.
Lizzie Matusov: Exactly.
Michael Stiefel: Even though you do what you like, is there any part of your job that’s your least favorite?
Lizzie Matusov: I think that one of the challenging bits of my work, because I sit at the crossroads of software development, thinking about teams, and also just running a company, is just how often I have to dive into a situation where I am learning something new. And I think that that is the good and the bad. You stretch your brain all the time, and I think that there’s something so wonderful and interesting about constantly learning new things about the depths of software engineering dynamics and how teams work and productivity. But also things like how to do payroll, when I learned that for the first time, setting up benefits, making sure that we have all of our ducks in a row, thinking about marketing.
And so it has stretched my brain in so many ways. And I will say that most of the time I am just elated about the pace of learning. And some of the time I’m a little bit exhausted and just thinking about the joys of when you are working on a system that you know super well, and sometimes you could just rest your brain for a moment and know that things are working in steady state. So I do miss that feeling sometimes.
Michael Stiefel: Yes, making sure that the State Department of Labor or the IRS doesn’t get in your way is not exactly the favorite thing you have to do.
Lizzie Matusov: But it’s very important work.
Michael Stiefel: Yes, yes, yes. Is there anything creatively, spiritually, or emotionally about your research or what you do that appeals to you?
Lizzie Matusov: Oh, so much. I think as I’ve come into this problem and this desire to solve it, I have just been so fascinated at seeing the different schools of thought in thinking about engineering teams and what makes them tick, and how we think about the humans and the systems.
And I just love thinking about software engineering as a system beyond just the code that we write. And I think that now is actually an incredible time to be thinking about it because so many of our processes are being tested with new tools and technologies, like all of this AI tooling that begs these philosophical questions. We don’t have to get into it, but the key question of will AI augment or replace engineering? I have a strong opinion about this, but I love to see how our society is grappling with these big questions and that my job actually puts me at the forefront of these questions and how we think about that.
Michael Stiefel: I mean, it forces certain questions to be answered that perhaps we would rather not answer. But as you say, this goes straight to what it means to be a human being and what the role of software is. And software is not an insignificant part of the world now, and-
Lizzie Matusov: It’s everything.
Michael Stiefel: Yes, it is. Unfortunately or fortunately, it is everything. What don’t you like about doing research?
Lizzie Matusov: A little bit of what I mentioned earlier: you need to apply a bit of a filter to understand when a finding is just a signal and when it is something more definitive that you can act on. And I sometimes see papers that make crazy claims, and then when you dig into the research and how they reached their findings, you realize that this might not be something we want to rely on too heavily.
Unfortunately, research is a really strong tool for proving a point. And so when studies that don't necessarily meet that threshold support a controversial point, they can still get picked up and used to argue something that is antithetical to reality, just for the sake of proving that point.
Michael Stiefel: People have Bayesian priors and there’s no way to get around them.
Lizzie Matusov: Absolutely.
Michael Stiefel: Do you have any favorite technologies?
Lizzie Matusov: Oh gosh, they are changing so much. I think my answer would be very different depending on what day you find me. But because of my work these days, I'm actually spending a little less time in the weeds of software development and a little more time thinking about engineers and engineering leaders as customers.
And so what I’ve actually been really enjoying, and this is a little bit dated in a sense, but I think that AI transcription tools that allow you to transcribe a conversation and then look for key insights, it’s just been so powerful for my work to be able to revisit a conversation and revisit the key findings to think a little bit more about what did these engineering leaders say, what did they mean. What did they not say, in a way that I haven’t been able to analyze before.
Michael Stiefel: This reminds me of … I recently found out that there's a feature on Zoom which, when enabled, will summarize what has happened so far if you come late to a meeting and tell you if anybody assigned you any to-dos.
Lizzie Matusov: It's so cool. It allows you to catch up so quickly and to recap so much more effectively. It's great.
Michael Stiefel: It's a little scary, because we all wonder what the models actually understand and what they're making up.
Lizzie Matusov: Yes, absolutely.
Michael Stiefel: I’m just waiting for the first office argument, if they haven’t happened already, where, “Well, you said this”. “No, I didn’t say that”. “But the LLM said”.
Lizzie Matusov: Yes, you do have to apply a bit of a layer of reasoning on top of some of its own beliefs.
Michael Stiefel: Very often people are ready to believe whatever because it fits into their-
Lizzie Matusov: Yes. I’d like to keep the human in the loop a little bit longer.
Michael Stiefel: What do you love about doing research? What do you hate about it?
Lizzie Matusov: What I love about the research is that there are so many brilliant people thinking about the hypotheses they're trying to test, going out with that curious mindset, coming back with findings, and then sharing those findings with the broader industry. I love that mindset. I love that experimental thinking. It's really been such a joy to work with researchers to evangelize their work and to find ways to get it into the practitioner's toolkit. So that's something that I really love.
And then something that I don't love as much is, again, trying to work out which research applies based on the bar it sets: the sample size it looks at, which variables it does or doesn't include, and whether it can serve as a signal or as an actual finding. And I worry sometimes when we look at research that is meant to be a signal and treat it as a panacea or the definitive finding, when really there's more work to be done.
Michael Stiefel: Or sends people in the wrong direction.
Lizzie Matusov: Exactly.
Michael Stiefel: What profession, other than the one you are doing now, would you like to attempt?
Lizzie Matusov: I believe I am a very creative person. That's actually what draws me to software development and being in this space, but it also manifests in very different ways. A hobby that I really enjoy is interior design, which seems very random to some people, but I think that creating a physical manifestation of what makes you comfortable and what expresses your identity, across different rooms or an entire space, is just such a unique expression of a human being. And I really love thinking about that to help people.
I love that interior design also brings out some of those subconscious elements of how we operate. For example, the rule of three: for some reason, when you see three things grouped together, it brings your mind peace, whereas when you see two things of equal height next to one another, there's something about it that invokes chaos in your brain.
And so I think if I weren’t doing this or maybe in a future chapter of my life, I’d love to dive into some of those elements of creativity and think about a much more physical representation of my creativity.
Michael Stiefel: When you sell your company and become rich, you’ll be an interior designer.
Lizzie Matusov: It’s funny, some of my friends now, as they move apartments and are thinking about their space, I now have various conversations going where they’re sending me things or asking me, “How do I achieve this vision in my head?” And it brings me so much joy. It’s definitely a very fun hobby.
Michael Stiefel: Do you ever see yourself not doing your current job anymore?
Lizzie Matusov: I think that if we do everything right, I'd like to achieve our scale of impact on a time horizon that is shorter than my entire life. And so I would like to achieve that impact, see it out in the world, and then allow myself to focus my energy on something else.
Now, that time horizon might be 10 years, 15, 20, maybe five if we do everything right quickly, but I definitely imagine that I will one day be doing something different. What's also great about my job is that it evolves over time, so in a way I do a new job every single year. But I would like to keep focusing on this domain and supporting software engineering teams until we feel we've achieved the impact that we set out to achieve.
Michael Stiefel: And as a final question, when a project is done, what do you like to hear from your clients or your team or your peers?
Lizzie Matusov: I love stories of impact, particularly when it relates to engineering teams being able to achieve more in a sustainable, effective, high value way. So I love to hear the stories, for example, from our customers when they tell us, “Your software helped us unlock this key area of friction that we were able to lift, and that effort allowed us to suddenly move faster and to build this project on time. And then our customers got this value. And then we were able to be a more sustainable, effective engineering team”.
And it's just a win-win for everyone. The customer wins, the engineering team wins, the business wins. When those areas converge, I get so much energy from those stories. And in many ways, those stories are what keep driving me toward continuous improvement, toward continuously talking to engineering leaders and looking for the ways that we can actually help them achieve those goals.
Michael Stiefel: Well, thank you very, very much. I found this fascinating, and a lot of fun to do.
Lizzie Matusov: I did as well. Thanks for having me.
Michael Stiefel: And hopefully we’ll get a chance to maybe do this again in the future.
Lizzie Matusov: I would love that.