Copyright © All Rights Reserved. World of Software.

[Video Podcast] Frictionless DevEx with Nicole Forsgren

News Room
Published 2 March 2026 (last updated 2 March 2026, 6:40 AM)

Watch the video:

Transcript

Thomas Betts: Hello and welcome to the InfoQ podcast. I’m Thomas Betts. And today, I have the privilege of speaking with one of the most prominent and important minds in DevOps and developer productivity, Dr. Nicole Forsgren. She has led productivity efforts at companies like Microsoft, GitHub, and Google, and is the author of two bestselling books, Accelerate and the second edition of The DevOps Handbook. Her newest book, Frictionless, talks about identifying and removing developer friction. Back in November, she spoke about that subject during a keynote at QCon San Francisco, and that’s what we’re going to be discussing today. Nicole, welcome to the InfoQ podcast.

Nicole Forsgren: Thanks so much for having me.

Why Friction is a Useful Lens for DevEx [01:07]

Thomas Betts: Now at QCon, one of the things I remember you talked about was this emphasis that developer productivity is about more than just faster builds or better tooling. We need to look at those friction points that constantly are slowing us down. Can you explain why talking about friction is a useful way to frame the conversation about DevEx and maybe give us some examples of what you mean by friction?

Nicole Forsgren: Yes, absolutely. So, I think friction can really help us think about how to improve and look for the best areas we can improve development, right? Traditionally, you could look at processes that were very manual or required a lot of process because many times, those are very fragile and they’re going to break. The same holds true right now with AI, right? It can surface in different ways, right?

Some things just show up like magic: all of the code generation and completion that we’re seeing right now. But that can also end up creating additional friction points as work builds up, whether that’s through launch processes or code review or things that used to be manual around release and deploy, which is common in many companies. Now we have this massive backlog, right? And so, things that are friction are often good indications of where things are brittle and possibly about to break as we start to increase load and speed.

Who Should Care About DevEx? [02:16]

Thomas Betts: When you talk about things that are brittle and about to break, that’s not just on the developer’s machine or in that one service; it can be a company-wide problem. How does it affect everything? Who needs to care about DevEx? One of the hardest problems in software is naming things, and “developer experience” sounds like, “Oh, it’s just what that coder has to deal with”, but it’s a bigger problem. So how do we make other people see this problem, and how does it affect us as a company more broadly?

Nicole Forsgren: That’s a really good point, and I think DevEx can be a little challenging there, right? It can be development friction, it can be value delivery. But the reason people should care is that executives and leadership keep reading all of these bombastic headlines saying that we can spin up entire new products and features in an hour and deploy them. That’s exciting.

It’s not entirely real, but I think this is where it can be really insightful and informative to leadership: if we want to get on this fast path, if we want to be able to leverage AI to do all of these incredible things, we need to look at some of our systems and processes, because right now a lot of that is centered around developers, that development pipeline, that development experience. But like you point out, it’s not always developers. It could be a security and compliance review where historically there was a big back and forth, maybe a little bit of security theater, where you said no and you negotiated for a few weeks.

That’s really, really tough now, and it’s overwhelming for security and compliance teams that are often already understaffed. Or what does release and launch look like, right? I know many large companies right now that manually select a candidate build and then run it through test and run it through canary. And anytime you have a bunch of manual work and handoffs and unique decision points, the more you increase load and friction, the more something like that can break. Like you said, it’s not just a build. It could be an approval process that is brittle and breaks, and then the business cares, because we’re seeing things change so rapidly.

I mean, some of the latest models this morning are doing things that they couldn’t do three months ago, two months ago, and so if we’re trying to keep up with the pace of competition and the pace of the industry, even if it’s just accelerating some of our feature development, any additional friction highlights things that are slowing down our work, but also these kinds of potentially fragile breakpoints.

Exposing Friction Through Faster Feedback [04:42]

Thomas Betts: Yes, I think it goes to the idea of things that move at the speed of humans versus the speed of computers, and we’ve had a lot of processes developed over the years that work fine at the speed of humans. It takes a few days to go through the email process and that’s fine, and I can concentrate on other faster things over here while I’m waiting for that to go in the background. As AI and other automation is permeating everything, it’s not just I can write code faster, but I can create user stories faster.

All of that gets compressed and now the whole process starts working at the speed of computers, if you will, and that’s where that friction, you rub against it faster and faster and faster and that starts a fire.

Nicole Forsgren: Exactly, and especially now that we’re seeing so much interesting and exciting development in agentic workflows, right? It’s sort of like builds used to be slow, so we sped up the builds and then we parallelized the builds so that they could kind of be running concurrently. We’re seeing the same thing, right? It’s no longer one person and an agent or one person and a couple of agents. It can be a developer orchestrating a bunch of agents that are then kind of orchestrating their own work and solving their own problems, and so any friction point really is amplified.

Metrics for DevEx [05:52]

Thomas Betts: Yes. How do we start measuring this stuff? What are some of the metrics that we’ve had in the past and do they still work with all this changing to go to an age of AI where there’s just more automation? Do we have the same metrics, or do they change?

Nicole Forsgren: This is the big question, right? I think some of the metrics will remain the same. Some of the metrics will definitely change, right? Lines of code is a good example of a metric that was never good.

Thomas Betts: And it was awful.

Nicole Forsgren: But it was always brought up. Now, we’re in a space where on the one hand lines of code is a complete nonsense metric, right? Because with a reasonable prompt, I can generate hundreds of lines of code. On the other hand, it might be useful, but only in very certain contexts such as what does our code base look like and how is that evolving over time and what does that mean for current and future model tuning and model training, right?

Because if we’re tuning models on historical code bases that were architected and designed and coded a certain way, or we have assumptions about that, what does that mean when, maybe soon, a much bigger proportion of our code bases is machine-written, right? What does that mean for feeding that loop? And for a bunch of folks in ML and AI, we know that the data you put in is just amplified, and so something we might not catch in our initial data and training and inference set can turn into something kind of unexpected. So, that’s one example of a metric, and I think there’s no one metric that matters, right?

A lot of the frameworks that we have from before are still quite relevant today. So, if we look at DORA, DORA focused on the pipeline. It was speed: lead time, how long it takes to get through. It was deployment frequency, which is kind of the volume for a set period of time. We’re looking at change fail rate, which is a quality metric, and MTTR, which is now time to recover. Back in the day when I started a bunch of that work, one of the best things we could do was really instrument and engineer the pipeline, the software development pipeline.

We could reduce variability, increase predictability, but many of these frameworks still apply across all of our product development work, our software delivery and feature work because we can go back up into ideation, we can go into implementation and coding, we can go into production and speed matters and throughput matters and quality matters, and that can help us evaluate our process.
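The four DORA measures she lists can be computed from fairly simple delivery records once the pipeline is instrumented. Here is a minimal Python sketch; the record fields (`committed`, `deployed`, `failed`, `recovery`) are illustrative, not taken from any particular tool:

```python
from datetime import datetime, timedelta

# Illustrative deployment records: commit time, deploy time, whether the
# deploy caused a failure, and how long recovery took if it did.
deploys = [
    {"committed": datetime(2026, 3, 1, 9), "deployed": datetime(2026, 3, 1, 15),
     "failed": False, "recovery": None},
    {"committed": datetime(2026, 3, 1, 10), "deployed": datetime(2026, 3, 2, 11),
     "failed": True, "recovery": timedelta(hours=2)},
    {"committed": datetime(2026, 3, 2, 8), "deployed": datetime(2026, 3, 2, 17),
     "failed": False, "recovery": None},
]

# Lead time: how long a change takes to get from commit to production.
lead_times = [d["deployed"] - d["committed"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: volume of deploys over the observed window.
window_days = (max(d["deployed"] for d in deploys)
               - min(d["deployed"] for d in deploys)).days or 1
deploys_per_day = len(deploys) / window_days

# Change fail rate: share of deploys that caused a failure (quality).
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

# MTTR: mean time to recover across the failed deploys.
failures = [d["recovery"] for d in deploys if d["failed"]]
mttr = sum(failures, timedelta()) / len(failures)

print(avg_lead_time, deploys_per_day, change_fail_rate, mttr)
```

In practice these records would come from the CI/CD system and the incident tracker rather than being written out by hand.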

If we’re looking at developers or developer productivity, SPACE is a good framework here as well, right? We can look at satisfaction: is the developer satisfied with the tools and the pipelines that they have? Performance, that’s our quality outcomes again, right? What’s the outcome of a process? A is activity: what are all of our counts? Lines of code (it’s not great), number of commits, number of PRs, number of reviews in a time period, right? C is communication and collaboration: where are people getting most of their information from, or how can our systems communicate? Are APIs holding up fairly well? And then efficiency and flow: how long does it take?
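One way to operationalize the SPACE dimensions is a per-team snapshot that mixes survey answers with system counts. The following Python sketch uses purely illustrative field names and thresholds that a real team would have to tune for its own context:

```python
from dataclasses import dataclass

@dataclass
class SpaceSnapshot:
    # Each field maps to a SPACE dimension; the concrete signals chosen
    # here are examples, not prescribed by the framework itself.
    satisfaction: float      # S: survey score, e.g. 1-5 tool satisfaction
    performance: float       # P: quality outcome, e.g. change success rate
    activity: int            # A: raw counts, e.g. PRs merged this period
    collaboration: float     # C: e.g. median review turnaround in hours
    efficiency: float        # E: flow, e.g. hours from PR open to merge

    def flags(self) -> list[str]:
        """Surface dimensions that look like friction, using
        illustrative thresholds a real team would tune."""
        out = []
        if self.satisfaction < 3.0:
            out.append("low satisfaction")
        if self.performance < 0.9:
            out.append("quality outcomes slipping")
        if self.collaboration > 24:
            out.append("slow review turnaround")
        return out

snap = SpaceSnapshot(satisfaction=2.5, performance=0.95,
                     activity=42, collaboration=30, efficiency=18)
print(snap.flags())  # ['low satisfaction', 'slow review turnaround']
```

The point of the structure is that no single field is read alone: a high activity count with low satisfaction and slow reviews tells a very different story than the count by itself.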

I think there’s probably an opportunity to extend a couple of these. One is some of this may be true for agents as well, right? How easy is it to use for an agent? What’s an agent’s satisfaction? We might not ask them, although we might, but that might surface through different types of friction, right?

We may also want to add a couple dimensions. Trust comes to mind here, right? Trust is…it’s always been important. I know in the sysadmin community, we have always centered on trust, right? Sysadmins and SREs. Now developers are thinking about it more, because they used to be working in fairly deterministic systems. And then cost, which cost has always been a thing, but now we’re having to make real explicit trade-offs between what’s our compute cost? What’s our capacity? Do we want to deploy a thousand agents to do a hundred things to replace a person if then all of our systems fall down?

There are a lot of metrics, and I guess this is a long version of saying I don’t have an answer for a specific metric, because it will really depend on your environment and your context and what data you have available, but also what questions you’re trying to answer.

DevEx and Systems Thinking [09:46]

Thomas Betts: Yes, I think what you’re getting at is we’re trying to measure productivity. Again, we go back to, we call it developer experience, and if we have better developer experience… I think your keynote was called “From Friction to Flow: How Great DevEx Makes Everything Awesome”. Okay, but with the developer and the developer experience in the name, you keep thinking that’s the thing we’re trying to improve, and it’s always been about the bigger picture. It’s always been about software delivery, and that’s what we’re trying to do.

Nicole Forsgren: And I think that’s a good point, right? The systems view gives us better insight into what’s happening. Even if when we start, we might focus on developers because that’s where we can get some early signal, but it’s always about a system. A developer is always working within a system, whether that’s a manual system, a purely manual system, a system built on mainframes, a system that’s highly automated in the cloud or now a system that uses possibly a bunch of AI agents and AI tooling.

Thomas Betts: Yes. Well, going back to your example of lines of code being a horrible metric, it was sometimes used as that stand-in proxy for productivity. Like, “Oh, he’s able to write a hundred lines of code a day versus 10 lines of code”. I’d rather have 10 perfect lines of code that we actually use versus a hundred lines of crap.

Nicole Forsgren: Right. And sometimes, the right thing to do is to delete code, right? How do you measure that?

AI Agents, Code Quality, and Developer Workflows [11:08]

Thomas Betts: But if you’re looking at the goal, the system got delivered, it remained stable and didn’t have downtime, and we were able to make lots of little changes very quickly, then being able to measure that comes back to, well, it’s still called developer experience, but it’s all of these other factors. And I want to poke at the idea I think you just kind of sussed out: asking the agent about its developer experience. If you have a coding agent, do you ask it how it thinks it’s going, or are you asking it to help summarize your metrics, summarize DORA or whatever else?

Nicole Forsgren: Well, I will say right now, I’m not asking agents, but there might be a world where we do that quickly, right? By the time this gets released, we may be in that world. But I do think there’s an opportunity to ask, whether in words or by taking a look at some of the data points that speak to how agents are developing and delivering code, to look for the things we look for when we talk to people, right? What’s really difficult with what you do? What do you swear at all the time? Do we see a particular process or something that agents keep having to retry several times? Do we see friction showing up somewhere? Or we can ask them, right? We can find out what the outcomes are.

We can see the downstream impacts of their work, or what they’re able to deliver. And so, I think there’s an opportunity to maybe interrogate the context of agents and AI in which they work, which will speak to both how well the agents can do things and how well we’re equipping them in our systems and also let us ask and think about better ways to equip and enable devs to answer these hard, creative, challenging problems that computers aren’t ready for yet.
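The retry signal she mentions can be pulled from agent logs without asking the agent anything. A small Python sketch, using a hypothetical event-log shape of (agent id, step, outcome) tuples:

```python
from collections import Counter

# Hypothetical agent event log. Steps that agents repeatedly fail and
# retry are candidate friction points, just like the things humans
# "swear at all the time".
events = [
    ("agent-1", "run_tests", "fail"),
    ("agent-1", "run_tests", "fail"),
    ("agent-1", "run_tests", "ok"),
    ("agent-2", "deploy_canary", "ok"),
    ("agent-2", "run_tests", "fail"),
    ("agent-2", "run_tests", "ok"),
]

# Count failures per step across all agents.
retries = Counter(step for _, step, outcome in events if outcome == "fail")

# Flag steps that fail repeatedly as likely friction points.
friction = [step for step, n in retries.most_common() if n >= 2]
print(friction)  # ['run_tests']
```

The threshold of two failures and the step names are illustrative; a real pipeline would feed this from its own telemetry.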

Thomas Betts: Yes. And I think that gets back to something I wanted to ask about: how workflows are evolving. We tend to focus on the AI coding agent. That’s the easiest thing. I’ve got GitHub Copilot, there’s Claude Code. Like, this writes code for me, and it moved from “write this function” to I ask the agent to do the work and it thinks about it and then solves it. That agent model is moving throughout the software development life cycle. We’ve got product management agents that sit next to the PM to help do that, and the scrum masters or other people that are helping to manage the backlog or write the stories or break things down. Agents are showing up in all these different ways because fundamentally it’s all about language.

Large language models are good at coming up with language. I think it was the C in SPACE you talked about: communication and collaboration. That’s what I want to ask about. How do we take what we’ve been trying to do with getting humans to communicate better, and does that naturally carry over, so that if there’s an agent in the loop and you have a good communication pattern, that’s going to help you use these agent tools?

Communication, Documentation, and Conway’s Law in the AI Era [14:02]

Nicole Forsgren: I suspect it will. And here’s one example you touched on, which is breaking down work: providing clear, well-structured requirements for engineers to implement. We know there are differences between very clear requirements and design docs versus much more ambiguous docs, in terms of what level of engineer can handle them and who we would hand them to, or just the challenge of a lack of clarity in docs.

Well, now if we have very, very clear design docs and feature docs and feature specifications, that communication, just as it was higher fidelity with a person, is now also higher fidelity with an agent, because like you said, it comes down to language, right? So, the better we can communicate and specify. And that’s one example. It can also be having more clearly defined and documented APIs for when an agent needs to use them, or internal libraries that aren’t really externally documented, the kind of one-off things inside a company. Sure, a lot of devs were probably kind of fine with one of those if they had used it several times, if they knew its edge cases, if they knew where the dragons were.

But now you’ve got a junior level dev who can kind of do senior level things, but needs a lot of guidance, and so this can also speak to APIs and how our systems communicate.

Benefits and Risks of AI as First-Line Support [15:23]

Thomas Betts: Yes. Yes, I think if you have good system boundaries, how the systems talk to each other mirrors how the teams talk to each other, which mirrors how the organization is structured: the way your organization is structured is the way your software is structured. Conway’s law comes into this. I don’t think anyone’s written the AI corollary to Conway’s law yet, but it still applies if you break up and have a bunch of agents helping your teams, and that’s how you’re structuring your organization. It’s like, “Oh, we have these people and they each have this agent that sits down and helps their role”. That’s going to show up in your architecture, I think.

Nicole Forsgren: Yes. And for a couple of years now, we’ve been talking about how on the communication side, how AI is impacting communication within and among teams and how we want to think about that and what is good and what is not so good. There are a lot of cases where we see that a junior dev would historically go to the senior on the team and ask lots and lots of questions, especially when they’re onboarding or when they’re learning a new code base or picking up a new project, and that’s great, right? I think one of the best things a senior engineer can do is unblock people on the team. That’s also a challenge.

I’ve had senior engineers on my team where that’s all they do, and they feel a sense of pride in it, but they’re doing so much unblocking that they don’t get to do much of their own technical work. That’s kind of tough: they’re very proud of one thing and a little sad that they can’t do another. Well, now we’re seeing that a lot of engineers are going to AI first, and they’re asking questions. This can be good because it can free up some of our time, but it can also end up leading you down weird trails and rabbit holes, turtles all the way down, and you end up in a weird spot where you never thought you’d be, right?

And so, we can also think about whether there are good checkpoints: after asking an agent, or having a long enough conversation with an AI chatbot that knows your code base, when do you pop up and confirm and verify your plan and your findings with someone on your team who really knows the code base?

Thomas Betts: And I think that that new person on the team, new person at a company, it depends where you’re starting off. If you’re starting off at a startup and there’s no code, right? You’ve got to build everything from scratch. There’s no examples to lean on. If there are examples to lean on, you’re like, “Oh, well, here’s what you can do. You can ask the agent, well, help me explain the code that I’m reading”. But if you also have legacy code and you know this isn’t how you want to do stuff and you want to move to something better, but there isn’t that next example of here’s what better looks like, you might get the feedback like, well, this is how it’s always done. Just keep doing that.

Nicole Forsgren: This matches our pattern.

Thomas Betts: Yes. Whereas the senior is going to say, that’s how we did it. We want to stop that. We want to do something new, and that might not be captured yet in any kind of documentation that the agent’s going to pick up or the new dev’s going to pick up. So, you still need those people involved.

Documentation, RAG Models, and Sustainable Practices [18:06]

Nicole Forsgren: You do, you do. I think it also spins back a little bit to: we need to work in ways that are appropriate to our context, right? It’s very different being at a tiny, sprinty startup versus a larger company. There are also things that will, I’ll say universally (I may take this back in 10 years), remain true, like good docs, right? You can be at a sprinty startup, but without a README or basic docs, that is only sustainable for a certain amount of time. At some point, that is going to break. Enough time will pass from when the code base was started that you’ll need those docs.

And so, right now there are a lot of really interesting and promising AI tools that can take a look at your code and at least help document it. That’ll remain true moving forward, because as we prompt, or ask for feedback, or ask agents to do work for us, they will reference the code that we have and the documentation that we have. If we’re building a RAG model, what do we want to clarify for them so that when they do work, it’s clearer, either for a human or for an agent?
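To make the RAG idea concrete, here is a minimal Python sketch of the retrieval half: score each internal doc by word overlap with the question and hand the best matches to the model as context. Real systems use embeddings rather than word overlap, and the doc names and contents here are purely illustrative.

```python
# Tiny stand-in corpus of internal docs (names and text are invented).
docs = {
    "payments-api.md": "The payments API requires an idempotency key on every POST.",
    "readme.md": "Run make test before opening a pull request.",
    "billing-notes.md": "Billing retries failed charges three times with backoff.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k doc names whose text shares the most words with
    the question. A crude proxy for embedding similarity."""
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda name: -len(q & set(docs[name].lower().split())))
    return scored[:k]

# The retrieved chunks would be prepended to the agent's prompt.
print(retrieve("what does the payments API require?"))
```

The better and clearer the docs are, the better this retrieval step works, which is exactly why documentation quality keeps mattering in an agent-heavy workflow.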

Building Stakeholder Buy-In for DevEx [19:11]

Thomas Betts: I wanted to take us in a little bit of a different direction. This is some of the stuff you talked about in your keynote, and it was also in the book Frictionless: how do you get this to be something the whole company is going to buy into? Taking it from the idea of “I want to remove this friction” to getting the buy-in to say, we should focus on this, we should invest in this, we have to make it better. How has that stakeholder buy-in changed, if at all? Or maybe start with how it works without the AI: how do you go from “I have some metrics, or I can start capturing metrics” to “I can tell a story that my CEO and CTO are going to believe”?

Nicole Forsgren: There are a few things that are going to remain fairly true, which is we want to align with the priorities of our business and of our company and our organization. We want to align with and understand the problems that they have so we can solve those problems. And so, a lot of the time, it’s about contextualizing the data that we have and the things that we propose to solve those problems or to align to something that we know is definitely top of mind. Now, that strategy might look a little different depending on where you are in the org and where your org is.

So, if you’re in an org that this is not on the radar, they don’t care, there are other fires to put out and you’re an IC or you’re a manager, kind of a first line manager, then there are certain things that you can do locally to kind of have an impact and that you could start thinking about bubbling that up. At the other side of the spectrum, there’s also the case where the CTO or the VP of infrastructure has declared developing, delivering software faster and with greater quality and more reliability and stability is top of mind.

And then you’re hired to lead that effort, whether it’s a DevEx improvement or a measurement improvement. That is a little different in some ways, in that your audience will be slightly different. Like, some of your rollout strategies may be a little different. You’ll still partner with teams, but you can think about which teams you want to work with. You’ll still be communicating out. Your stakeholders might change a little bit; as a first-line manager or as an IC, I’m probably not going to have one of my first readouts be to the CTO.

But in both cases, we want to align with the business priorities, the business needs, the problems that we can solve because then it resonates and people, they’re in a meeting and they’re nodding their head saying, okay, I see this. I understand why this aligns the way that I need it to.

Using DevEx to Improve Product Quality [21:40]

Thomas Betts: And just to give an example, say you have a company where customers are reporting a lot of bugs. A lot of releases have gone out, issues have been detected by customers, and that’s affecting quality scores, so “we want to focus on that” becomes the top-level priority. And if you can say, “Well, instead of just ‘don’t write bugs’, one of the ways we can do this is to look at DevEx and how we improve that”, you can tie it back to, well, how often do we release? Do we have daily builds? Going to things that are in Accelerate and the DORA metrics: if you do it faster, you get better at it.

If you say, “Oh, we break stuff; we shouldn’t release for six months”, it’s just fundamentally not true. Right?

Nicole Forsgren: Right. Well, and that’s a good point. So, if you’re in a situation where your products and your features, they’re not very reliable or they have a lot of bugs, your customer engagement and your customer feedback is quite low, right? That is something where you could go to leadership wherever it is you are to that appropriate level of stakeholder and say, I understand that this is a big problem for us. I understand that there are some things being worked on. This is a high priority. We’re trying to figure out how to address this problem.

One of those ways we can do that is to look at the developer experience, in which case they’re like, “What?” We can say yes, because I can roughly look at, I understand traditionally what our software development lifecycle looks like, and I can identify some of the key areas where these should have been caught. Let me go take a look, because my hunch is that some of these, and you’ll probably find this right, are very manual processes or they’re very, very old test suites and flaky tests that have not been cleaned up because it hasn’t been a priority. And so, many times for an executive, linking those things as a priority isn’t obvious, right?

For those of us working in systems, we can see how a handful of processes or gates are directly linked to that, but if you’re not swimming in software all the time, it’s not very obvious, and so just kind of couching it in that way can be super helpful.

Security, Compliance, and Risk-Informed Approaches [23:40]

Thomas Betts: Yes, I think that goes to the idea that these are friction points, right? We’re not saying get rid of security and reviews and safety and all that stuff. Just figure out how to make them not block. And sometimes, recognizing that by adding that extra step that’s a security review, you might also be causing this other backlog to come up. And so…

Nicole Forsgren: Right, exactly.

Thomas Betts: Oh, well, we’re going to wait and only test every two weeks because we have to do a manual test. All of a sudden, a lot more stuff gets in there and you aren’t able to test every little change versus I changed two lines of code. Check that, I didn’t break that.

Nicole Forsgren: Yes, exactly. Exactly. And in terms of security and compliance review, we can also take, and we’ve seen this for years, what’s old is new again: a risk-informed approach. Things that qualify as low-risk changes can go through an attestation model and follow the regular, prescribed testing model, and that can be very lightweight and auditable. It’ll be tracked, it’ll be followed. But that should be very different (and surprisingly, in many large companies and small companies, it’s not) from a high-risk change that we need to make, which really requires eyes on it. It requires a discussion. It requires a lot of people involved.

But if you treat everything the same way, then either your high risk changes are zipping through a process that wasn’t designed for them, or your low risk changes that are probably fine with some automation are now getting eyeballs and discussions and hours of meetings that aren’t appropriate, and at some point, people are going to glaze over, so we will make lower quality decisions for those high risk releases.
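The split she describes can be expressed as a routing rule in the change pipeline. A hedged Python sketch, where the risk signals (`touches_auth`, `schema_migration`, a size threshold) are illustrative stand-ins for whatever a real compliance team would actually define:

```python
def route_change(change: dict) -> str:
    """Route a change to human review or the lightweight, auditable
    attestation path. The risk rules below are examples only."""
    high_risk = (
        change.get("touches_auth", False)          # security-sensitive code
        or change.get("schema_migration", False)   # irreversible data change
        or change.get("lines_changed", 0) > 500    # large blast radius
    )
    return "human-review" if high_risk else "automated-attestation"

# Low-risk change: small diff, no sensitive areas touched.
print(route_change({"lines_changed": 12}))                       # automated-attestation
# High-risk change: touches authentication, however small.
print(route_change({"touches_auth": True, "lines_changed": 5}))  # human-review
```

The value is in the asymmetry: reviewer attention is reserved for the changes that actually need it, while the attestation path still leaves an audit trail for everything else.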

Thomas Betts: Yes, yes. Again, the answer is always “it depends”, right? You need context-specific responses: some way to trip and say, “Hey, this is high risk, go through the extra process”, or “this hasn’t met the threshold, it’s easy, rubber-stamp it through”.

Nicole Forsgren: Yes, exactly.

Qualitative vs. Quantitative DevEx Data [25:27]

Thomas Betts: So, I think there’s still something to be said of how do you go in and say, we’ve captured the data, we’ve done surveys. I think your book talked about qualitative and quantitative measurements. You can get metrics out of the system sometimes, but they might not be useful. It might just be the things you can get. Lines of code, you can get lines of code. Does it tell you anything? Or you can get, how often do we release software because we’ve got those things hopefully are instrumented. And the surveys were to gather more of the qualitative feedback. If you ask developers what sucks, they will tell you what sucks.

Nicole Forsgren: They will tell you what sucks. And I think right now, especially as AI is changing so many things and it’s inviting so many people to rethink and reinvent what the SDLC looks like, right? In some cases, a couple years ago I said, there will be a world where we might collapse the outer loop, right? I don’t think we’re quite there yet, but we are probably getting close. In a world where we’re going straight from prototype to prod, it’s really hard to step back and say, “Well, I’m going to only trust system data because that’s what is reliable when for the most part, we’re creating our new tools and systems so quickly that we’re not instrumenting them”, right?

Instrumentation, telemetry, and observability are usually not highest on the list for a new developer or for someone building an internal (or even external) product. The other thing is, it’s all changing so rapidly that by the time you build the instrumentation, get it into the data warehouse or the data lake or the data fabric, and do all the calculations, it could be weeks. And when things are changing that rapidly, it can be very powerful, and at least better than nothing, to ask a handful of developers, or observe a handful of developers: what is your development workflow like now? What are the biggest blockers that you have right now? Sometimes it’ll be a process, sometimes it’ll be a system.

Sometimes, it’ll just be a mental model. They’re not used to thinking about things totally differently, but system data can’t surface that, right? It can’t tell us the why. So, sometimes we can skip, especially when things are rapidly, rapidly changing. If we have no data, surveys and interviews and observations are going to be your friend.

Thomas Betts: Yes. And this isn’t a do one thing or do all the things, it’s a matter of getting prioritization, right?

Nicole Forsgren: Yes, yes. It’s having a portfolio approach, right?

Where to Start: Quick Wins and Local Momentum [27:42]

Thomas Betts: Yes. How do you identify where should we get started if we’re going to actually make an effort, do you tackle a big thing? Do you take a small thing? Do you focus on something that you’re sure is going to work? Where do you go?

Nicole Forsgren: I would say some of this is, it depends, but there are some fairly durable guiding principles here, right? Where to start, talk to a bunch of developers always, right? Because even if you’re stepping into a fairly mature program that has metrics and data and numbers, chatting with a few developers will at minimum give you context. And many times, you’ll surface additional insights and additional pieces that don’t have instrumentation that you would’ve missed, right? So, that is super helpful. 

We also probably want to start with a quick win, right? There’s often low-hanging fruit. What can we do to test our understanding of the problem? Test one of our approaches, build momentum, get a shared win among the team. Historically, internal infrastructure work isn’t always celebrated. There are some company cultures that do celebrate it, and I love it there. Not everyone else is used to that because it’s not something that goes to a customer. So, if we can start building that momentum within the team, but also showcasing those wins in other places, and showing, even if we kind of squint, what it unlocked or what it will likely help us achieve by the end of the year that we thought was going to be impossible before, that’s good. So, once I’ve talked to people and gotten that quick win, what comes next?

I think it really depends on where you are in the org. I alluded to this earlier. If you’re doing this yourself and it’s a grassroots effort, for sure, start within your scope of control. Who can you talk to? Can you partner with your team? Can you do a hack week? Can you clear up some paper cuts that are not just mildly annoying, but where you have a line of sight to how they impact several developers? How do they create friction and confusion and slowness?

If you were hired in by the CEO to fix this, then your scope can look a little bit different, right? Your stakeholders are a little bit different. You’ll still be partnering with teams to pilot, very likely. But that’s also an opportunity: if the whole internal build and release system is broken, as an IC, that’s probably not the best place to start, right? As an executive, it’s still not the best place to start, but it could be a very real candidate earlier on in the project because you have that scope and control across much more of the org.

Scaling DevEx: Team, Department, and Org-Level Patterns [30:08]

Thomas Betts: Yes, I think, again, it depends. I’ve seen everything from one developer taking a hack Thursday, just taking Thursday off to do a little project on their own, to a whole team saying we’re going to take a day or a week, to an entire company that said, “We’re going to dedicate a week. We gave ChatGPT to everyone, we got an enterprise license, take the time to learn it”. And every department from programming to marketing was working on their own way to leverage these tools. It’s one of those things: if you figure out this worked for a small group, can we make it a slightly larger group? Can we make it company-wide? Is there that type of adoption? But again, your mileage may vary based on who you are and what you’re trying to do.

Nicole Forsgren: And the team and the company culture, right? So, again, it’s a tough one of those, “It depends”. But most people in the space, engineers, PMs, are very smart, right? We’re pretty good at taking the pulse of the org and what we think is going to fit. And so, that’s why sometimes I’ll reflect back, and we do in the book: here’s a handful of things you can do. Pick the one that is going to resonate the most within your company. Pick the one that won’t be a complete opposite of what the culture says, right? Find one that’s a good fit, because that will also help build that momentum.

Prioritizing Improvements with Quick RICE [31:23]

Thomas Betts: I know you like your acronyms, and one that I learned from the book was RICE or the quick RICE technique for how to prioritize improvements. Can you talk about that a little bit?

Nicole Forsgren: Yes, absolutely. So, RICE is a way to think about prioritization. PMs use it all the time in product. When we do RICE, we’ll do: Reach: What’s your total addressable market? So, how many developers inside the company are impacted by this particular project or friction point? Impact: How big do we think that impact will be, right? Will it be on the order of seconds or minutes or weeks? Confidence: How confident are we that we can achieve this thing that we’re trying to fix or remove? And Effort: How hard will this be? How long will it take?

Now, in traditional RICE, what we see is people using numbers, and we outline this later in the book as well. So, come up with numbers, do a calculation. We want R times I times C divided by E, right? Because you want the highest payoff for the lowest amount of effort. Quick RICE, though, can at least be helpful when we’re just getting started and looking for a quick win, and we can just hunch it: is it high, medium, or low? Because many times we probably have three or four or five candidate projects, right? I’m sure someone will say, “I know this will fix it.” Cool, right? Go ahead and throw it on the list. When you do your interviews, when you look at the data, you’ll probably have a couple of candidates.

Quick RICE can help with that because we can just say, “Okay, this one’s high, this one’s..”. If you’re high across R, I, and C and low on E, that is probably a pretty good candidate for starting. If you’re low on reach, low on impact, low on confidence, and super high on effort, it’s probably not the best place to start, unless, again, context. Maybe you’re in a research org, right? Maybe you’re doing R&D and you’re supposed to time-box something for a month or a quarter to try to do things that people think are impossible.
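To make the arithmetic concrete, here is a minimal sketch of the R times I times C divided by E calculation described above. The candidate projects, scales, and numbers are invented for illustration; real teams would pick their own units for reach (developers affected), impact and confidence (multipliers), and effort (person-weeks).

```python
# Illustrative RICE scoring: rank candidate DevEx projects by (R * I * C) / E.
# All names and values below are hypothetical examples, not from the book.

candidates = [
    # (name, reach, impact, confidence, effort)
    ("Speed up CI cache", 120, 0.8, 0.9, 2),
    ("Rewrite build system", 300, 1.0, 0.4, 26),
    ("Fix flaky test suite", 80, 0.6, 0.8, 3),
]

def rice_score(reach, impact, confidence, effort):
    # Highest payoff for the lowest effort: reach * impact * confidence / effort.
    return (reach * impact * confidence) / effort

# Sort best-first: the top entry is the strongest starting candidate.
ranked = sorted(candidates, key=lambda c: rice_score(*c[1:]), reverse=True)

for name, r, i, c, e in ranked:
    print(f"{name}: {rice_score(r, i, c, e):.1f}")
```

With these sample numbers, the small, high-confidence CI cache fix outranks the big build-system rewrite, which matches the "high across R, I, and C, low on E" heuristic for Quick RICE.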

Thomas Betts: Yes. Quick RICE is the gut check, but I remember getting to the algorithm: here’s the simple math, but you have to come up with numbers. And that did seem like, okay, I’ve got to do a lot more weighting of everything. I’m not going to finish that in five minutes; I’m going to spend a couple days on it.

Nicole Forsgren: Right, exactly. And again, very contextual. Some organizations need the number. They want to see the calculation, they want the math. Sometimes, though, you can say, these are the things that I wanted to evaluate. And later, when we do the more detailed RICE, we include a handful of additional factors you might want to consider: how is it aligned with the organizational goals? What handful of other DevEx concepts do we want to think of in addition to just reach and impact? Because that can also speak to the ability to get momentum, right?

Is it closely aligned with an ongoing, well-funded effort? That’s good to know, right? Because if it’s kind of on the borderline, that would probably nudge you over.

Using LLMs for Communication, Justification, and Blind Spots [34:09]

Thomas Betts: Have you had any good luck with using LLMs to help you write these justifications and write that communication to the various stakeholders, or help you ask the right questions?

Nicole Forsgren: Yes and no. I will say that I have found it best if I start with a pretty solid bullet point list, at least a rough draft, and then I try to nudge it to fit my voice and pick up things that it kind of missed or it said the thing and it used the word, but that’s not what I meant at all, right? And then I found it really useful to do a couple things, right? So, once I have the message that I know I like, maybe it can help me draft it, then I can ask it questions. What have I not considered? What are the holes? What are the criticisms I should be ready for? Who will be my likely supporters and who will be my likely detractors?

And then I can take it and ask it to help me with comms, and we are historically bad at comms. We need to treat this like a product, right? We can’t Field of Dreams it; we can’t just build it and expect everyone will come. We can’t force them to use it; developers aren’t going to use anything they’ve been forced to adopt, right? We tried that with CI/CD tools and everyone just spun up their own Jenkins, right? So, then what I’ll say is, “Okay, help me frame this for an executive audience. These are their key pain points; this is what they care about. Frame this for an engineering director audience. Reframe this for ICs and what they care about”.

And so, then it can help me think through how I want to communicate this to different audiences. Now, I have done this historically in the past. I love this as a tool because many times, it kind of aligns with what I’m thinking. But every once in a while, it’ll pick up something or it’ll suggest an idea that I probably hadn’t thought of. So, it helps me move a lot faster that way.

Thomas Betts: Yes, I do like it when the LLMs help identify my blind spots. If I keep things written down so that it keeps track of that, then it’s like, “Oh yes, I always forget to think about X, or about somebody’s viewpoint”. It’s really good at saying, “Remember, you need to talk about this too”.

Nicole Forsgren: Yes. I will say one thing here as well: if I go a few iterations down, it tends to go a little haywire, even if I tell it, “No, go back”, or, “No, that wasn’t what I asked for”. Or even if it’s going well, it can be really helpful to basically clear my cache, right? Stop that chat, start a new one. Many, many of our agents right now keep track of some of our context. But if we want to wipe context, then you’ve got a fresh pair of eyes, right?

Practical Starting Points for Immediate DevEx Impact [36:31]

Thomas Betts: Yes, it’s great to give it that amnesia and just start over. So, for listeners who want to make an immediate impact, what’s one small change, whether that’s for an IC or a team lead or a higher leader in the organization that you think they can start with that might lead to some compounding DevEx improvements?

Nicole Forsgren: Sure. I think the best thing that almost anyone can do is reach out to a developer. It can be a peer. It can be someone on one of the teams that’s high priority, and ask them: what’s it like? Or sometimes I like a slightly more targeted question: what’s the hardest part about your job? What do you swear at all the time? What’s the thing that is unlocking a bunch of stuff for you? Because that can at least give you an idea of what you’re seeing, and then you can ask two or three others and see if there’s any similarity at all. You can also reach out and see what work has been done, see if there are any existing docs. LLMs are also great for this.

You can feed them a whole bunch of information and ask them to synthesize, because that can surface what has been done or tried in the past, what worked, and what didn’t work. There can also be an implied “this is how it’s always been talked about and it’s never worked before, so maybe we don’t do that; it’s not going to work”. Or maybe that really is the problem, but we need to phrase it differently, because people have anchored on that prior performance.

The Future of DevEx: Evolving Developer Roles [37:47]

Thomas Betts: Now, I know this is a topic that you just get really excited about having written several books on it, but what excites you about the future of DevEx? How do you see the field evolving over the next few years?

Nicole Forsgren: Oh, there are so many good things here. Honestly, I think the thing that excites me the most is seeing the evolving role of engineers and developers. And we’ve seen it over the years: the earliest years when we were writing assembly code, then mainframes, then cloud. We’ve seen so many waves, and I think when we come up with a new technology, it automates away a lot of the work that we’ve done historically. But what it’s done is open new doors for us to think about new problems we can solve and new creative approaches we can take. And so, I’m excited to watch that shift happen again, and I feel like I’m pretty lucky to have seen at least a couple of those shifts in my professional lifetime.

Thomas Betts: Yes, I like that framing. I think I’m glad I don’t write assembly code or do punch cards, but I’m grateful for the people that did that and built the foundation that I get to work on at a much higher level, like I’m solving your business problem.

Nicole Forsgren: My first job was on mainframes, and I loved pieces of it. In one of the languages I worked in, the text layout was still constrained to fit punch cards, because that was how it started. And so, while sometimes I’ll joke longingly about missing some of that work (it was fun, and a green screen is always great), I’m really excited at all of the progress that we’ve made and where we’re headed.

Thomas Betts: Yes. Well, I think that’s going to wrap it up. So, your new book is called Frictionless: 7 Steps to Remove Barriers, Unlock Value, and Outpace Your Competition in the AI Era. Your co-writer was Abi Noda?

Nicole Forsgren: Yes. And I will mention, on my website and on his, there’s a link to the book site where we have about a hundred pages of workbooks linked, and those are free. They accompany the book. We tried to give everyone as many tools as possible. Or, if you can’t afford the book right now or don’t want to get it, jump in and start trying out the workbooks. There’s enough context there that you can do a lot with what’s already there.

Thomas Betts: Sounds great. Well, Nicole, thank you again for joining me today. It’s been a fantastic discussion.

Nicole Forsgren: Awesome. Thanks so much. Good to see you.

Thomas Betts: And listeners, we hope you’ll join us again soon for another episode of the InfoQ podcast.
