Copyright © All Rights Reserved. World of Software.

From Friction to Flow: How Great DevEx Makes Everything Awesome

News Room | Published 24 March 2026

Transcript

Nicole Forsgren: Who here has been playing with AI? Are we hearing that productivity is now solved? Everything is done? Perfect. If that's you, go grab me a Diet Coke and we'll meet outside in just a minute. What we're really seeing, though, is that AI is helping some things, but it's making a lot of things more challenging, or it's showing executives all the stuff that we've known all along. It's a very "I told you so" moment. Because sometimes when we have friction, or things are difficult, or there's a lot of toil and manual work, AI isn't solving it, or at least it hasn't solved it yet. How can we think about improving the way we write software so that it's good for the company, but also really good for us? I don't want to burn out; I've done it twice. We've got the T-shirts, would not recommend. How can we build things sustainably? How can we identify the correct friction points so that we can move faster? That's what we'll talk about today. Because we're stuck in a paradox now, where we can generate all kinds of code in minutes, even seconds.

People across any business unit and background can vibe code an app, push it, and probably use it for at least a little while. Deployment is often taking longer. I said days here; I was optimistic. It's usually months. Does anyone here have a friend who can code really quickly but whose deployments take more than three months? Just your friend. Who here is just waiting for the Diet Coke after the session and lied because they didn't want to get caught saying deployments take so long? That's ok, I feel you. A lot of the biggest companies right now are finding that their deployment times are still several months, because we can create code, but that doesn't fix everything else.

When we think about the deployment chain and that outer loop, what’s old is new again. At Knight Capital in 2012, an engineer deployed some code. It was very routine, very standard. The problem is that the deployment script reactivated an old feature flag. It was a manual deployment. There were no automated tests. In 45 minutes, $460 million was gone. When they went back through it and they did the retro, they realized that the daily developer experience really highlighted all of the risk there. What we’ve learned from that, that can never happen again. It would never happen again.

Earlier this year, Jason was using a Replit AI coding assistant to build a database. He explicitly told it, no changes to live data. He set a code freeze. The agent wiped the production database anyway. I mean, YOLO. It's fine. New tools, but a lot of the same problems. If anything, it's just faster now. AI is amplifying a lot of the same problems that we found before. Instead of asking, how do we improve productivity? We should probably ask questions like, what is it actually like to build and ship software here? How can I deliver value faster? Also, how can this be sustainable? Because this cliché is old but true: every company is a software company. In some ways, the companies that insist they're not software companies are the biggest opportunity, because they keep insisting that they don't write software. I don't know about you all; I haven't been to a bank holding all of my money in gold bars lately.

What Friction Looks Like

When we talk about friction, it can look like a lot of things. I assume we'll probably recognize ourselves or our teams in some of this. A new hire is still waiting for database access in week 3. A pull request just sits for days. It could be because it's been assigned to the wrong person, it's stuck in a queue, or someone's out on leave. The build pipeline crashes, again; time for more xkcd-style "compiling" sword fights in the hallway. Or a deploy requires manual coordination across several teams, several tools, and group decision-making about risk. This is not inexpensive. For anyone who's trying to convince a boss or a manager or a leader that this is important, here's your slide to take a good picture of. This is across a handful of different studies. McKinsey found that 40% of dev budgets are spent on avoidable rework. Another study said that developers feel about 68.5% productive, and then they calculated that the missing 31%, that's $300 billion in lost GDP.

Another study estimates that we're losing $1.52 trillion to technical debt. That's just some of the friction that we can see. It's no longer just about comfort; it really is about competitive survival. For any company that's tempted to say, "I don't really care if my developers are happy":

First of all, it matters, because happy developers make happy software. Also, if you want to retain your business and your customers, we have to be developing and deploying quickly. Here's just a handful of examples: onboarding, codebase, integration, process, review. All of our development friction, all of our deployment friction. We know some of the technical solutions, but we should also take a look, because sometimes it's actually process, or approvals, or "not my job" friction that really slows us down or causes bottlenecks and challenges and questions and problems. Now, again, we're seeing that AI is just amplifying this. It used to be writing code. Writing code is no longer the bottleneck. We can come up with all sorts of code. That's making the rest of our friction much more expensive, and it's making it stand out more. Can our test suites handle that load? Can our build pipelines handle it? Can I review enough PRs for all of the trash that is coming across my system? Not always.

The DevEx Framework

When we talk about improving this, I have found that talking about productivity is not great, because it doesn't always highlight the things that we want to talk about. We're not just trying to push more lines of code through the system. We want a developer experience that is easy and seamless and delightful, because then we can remove obstacles. There's a framework to help us think about developer experience. There are a handful of different ways to think about it; I like these. One is feedback loops. How long does it take me to get from a question to an answer? It could be searching an internal codebase. It could be getting a review. It could be getting input back from a build: did the build pass or not? Next is flow state. Can I focus and think about hard problems?

AI is really changing this, because we used to be able to block out hours of time to really dig in and get some code done. Is anyone working with an AI code assistant? Maybe not at work, but for fun? You don't just sit and write code anymore, because we're prompting and then immediately getting feedback. That flow looks very different: I have to accept code, review, rewrite. It's like Stack Overflow on steroids. There's a lot happening there, which feeds into cognitive load. How much mental capacity do I have for the work I'm focusing on? Is the process making it worse? Are the deployment tools making it worse? Are the test suites just throwing a failure with no answers? That can be really challenging. I would rather dedicate my limited cognitive capacity to the really hard problems. This is a human limitation. Studies by Gloria Mark put the maximum time a human can spend on really hard, deep work at about four hours a day. If I have four hours, what am I spending those four hours on?

We just talked about this. How long between action and outcome? With AI, faster code generation needs faster validation. Feedback loops, the impact here: if we have a 10-minute build, we stay in flow. We can experiment quickly. We can make decisions quickly. We can learn all of the time. That feedback loop is great, but if it takes a long time, we're context switching. I might have to provision a brand-new environment to work on a different task in a different repo. I have to delay my decisions. I can't run as many experiments. I'm going to learn much more slowly. This is just for a two-hour build. I've worked with folks that have 30-hour builds. What happens there? Even how do we space them, and how do we time them? Flow state, again: how long can we really focus? Because here, the interruptions aren't just about lost minutes. It's about losing the context and the depth of what we were doing.

Another thing that is really coming up for folks that I see now when I talk is that with AI, we just have more tools to master. We have more things we’re apparently pulling together. We’re working on prompting. We’re working on different AI coding agents. We’re working on MCP. We’re working on agentic workflows. How should we be thinking about this so that we can generate the code and answer the problems that we have the best way? They all reinforce each other. Fast feedback preserves our flow state and it reduces load. Reduced load lets us focus and improves our decision speed. Our protected flow lets us accelerate learning and compound over time. Then we can create new code, new ideas, which then takes us back to that fast feedback loop.

If we have slow builds, the problem is two-hour build times that delay our learning. The feedback: it's going to take me a couple of hours to know if my code worked, or if it does what I think it's supposed to do. We context-switch while we wait. We have to remember a lot of parallel tasks, and we're not very good at multitasking. One solution here is that we can do incremental or parallelized builds, with clear ownership and some automation to help.
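As a rough illustration of the parallelized-builds idea, independent pipeline steps can run concurrently instead of back to back. This is only a sketch, not a real pipeline; the step names and `echo` commands are placeholders for real build tasks:

```python
# Sketch: run independent build steps concurrently. The step names and
# commands here are placeholders, not a real build system.
from concurrent.futures import ThreadPoolExecutor
import subprocess

steps = {
    "lint":       ["echo", "lint ok"],
    "unit-tests": ["echo", "unit tests ok"],
    "docs":       ["echo", "docs ok"],
}

def run(name, cmd):
    # A real pipeline would topologically sort steps by dependency first;
    # here every step is assumed independent.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return name, result.returncode

with ThreadPoolExecutor() as pool:
    # results maps each step name to its exit code
    results = dict(pool.map(lambda kv: run(*kv), steps.items()))
```

The wall-clock time now approaches the slowest step rather than the sum of all steps, which is where the feedback-loop win comes from.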

What Good Looks Like

We have a few ideas of what good looks like. Has anyone here heard of DORA metrics? DORA is a framework of four metrics that we found to be very highly correlated with performance. We have two speed metrics and two stability metrics. The speed metrics are deployment frequency (how often can we deploy?) and lead time (how long does it take to go from code committed to code running in production?). Our stability metrics are change fail rate (of the changes introduced into production, how many require intervention?) and MTTR (how long does it take us to act on and mitigate those failures?). The high-performing DORA teams can deploy multiple times per day. I'll add a little asterisk here: or they can deploy whenever the business needs. Sometimes we work in an environment where we can't just deploy every hour, if we're shipping to the App Store or something. It's a business decision, not a technical constraint. They can often restore in under an hour. Their change fail rate is typically much less than the 15% I put here; it's often around 5%. The lead time is under one day. Now, how do they do this? Many times, when you ask people how they do this, they will talk about a technical solution. That is very important.

That technical solution enables and it provides low cognitive load. I can focus on the things that matter and not the things that should be engineered away. I’ve protected my flow state and I have fast feedback. These teams aren’t just lucky. They systematically removed friction. Anyone can do this. I often hear large companies come to me and say, “I hear you. I see these numbers. That can’t be real though, because that’s just a startup. They can move really quickly. They don’t have all of this regulation. They don’t have these approvals”. Then, I’ll turn around and a week later I’ll talk to a startup and they’ll say, “That can’t be me. That’s just really big companies because they have all the funding. They have all of the resources. I just have a tiny team”. We see that this is true across all sizes of companies, all industries.

If anything, a few years ago, we did see one significant difference for one industry and it was retail. They did better. Why? Because they had been in such a competitive environment for so long, they had to get better. Otherwise, they were probably out of business. There are too many retail companies and platforms and websites that if we don’t have everything together, it’s really difficult, particularly if you have a brick-and-mortar aspect to your business.

Once people are finally convinced that this is important, they'll say, but I don't have time for this. I have to build the latest app. I have to create the next best product. I have to improve something else. We see that about $260,000 can be wasted annually. Here's my back-of-the-napkin math. Let's say you have 20 devs. Each loses 30 minutes a day to just a single friction point. That's 10 hours a day, 2,600 hours a year, at $100 an hour. That's pretty straightforward. Sometimes it just takes a minute for us to do some quick, easy math, because we're already spending the time; we're just spending it on fighting the friction instead of removing it.
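That back-of-the-napkin math, written out. The inputs (20 devs, 30 minutes a day, 260 workdays, $100 an hour) are the illustrative assumptions from above, not fixed constants:

```python
def annual_friction_cost(devs=20, minutes_lost_per_day=30,
                         workdays=260, hourly_rate=100):
    """Back-of-the-napkin annual cost of a single friction point.

    20 devs * 0.5 h/day = 10 dev-hours/day; over 260 workdays that's
    2,600 hours, at $100/hour about $260,000 a year.
    """
    hours_per_day = devs * minutes_lost_per_day / 60
    return hours_per_day * workdays * hourly_rate

annual_friction_cost()  # 260000.0
```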

Three Strategies for Making the Business Case

Has anyone here tried to make a business case for tech transformation, DevOps, or making the tools better? Was it a rousing success? It's interesting, because I get pulled in to help large companies and it's usually about two questions: how do I make the business case, and how do I change the culture? We've got the tech figured out. We don't, but we know how to. We're a bunch of very smart people. How can I explain this to the business? There are three strategies that I've found work very well: visibility and accountability; simple data with clear action, making sure it's actionable; and dollar impact. We just walked through one example of that. Here are some examples that I love, starting with visibility and accountability.

Dave Anderson was at Amazon, and he was responsible for improving platform error rates. He also had zero direct authority over any roadmaps. That feels a little stuck. I've been in this position; I'm guessing some of you have too. He had this great strategy where he created an S-Team report, the S-Team being the executive team, the CEO's directs. It went up to the S-Team every single month. There were error rates by team, stack ranked, with the VP that owned each area identified and highlighted. Within about a week, directors were rushing to his office to get off the list. Sometimes we can just surface what is happening in ways that bring attention, because it's not that people don't want to improve systems, it's that they have a lot to do. If it's not a leader's or executive's priority, frankly, it's probably not your priority. How can we help create that priority?

Another one: Max Kanat-Alexander shared this example from LinkedIn. I assume we've all seen it: we have all the dashboards. All the dashboards. I love me a good metric. I love some data. But too much data can be really challenging for folks to get context from. What Max's team did is build out what they call the Developer Insights Hub. They had a ton of data points, but they started very simply.

For example, if you went to the platform and it highlighted that build times increased by 20%, you could then ask yourself, why did build times get worse? You could drill down and find, for example, that mobile devs in Singapore working on one particular repo saw a problem so outsized that it impacted everything. That's actionable. If I show every single build time across every single repo and every single geo without aggregating, just in case, that's a flood of data, and it's really hard to act on. The third example comes from Block. When they were making the business case, they really cared about developer satisfaction and developer experience. That can be really difficult to make resonate, so what they did is focus on the dollar impact of friction. They broke it down into two numbers.

One was how much friction they could easily identify and what that friction was costing; then they identified avoidable incidents and calculated that dollar amount too. The result was that in just 12 months, they had millions in documented savings and an increase in developer satisfaction numbers, because when your systems are easier to use, they're a lot nicer to work with. The nice thing here is that when we improve our platforms and our internal systems for developers, it's often a win-win. The challenge is making sure that we communicate it the right way.

The 7-Step Process to Start

Now we have buy-in. Congrats, everyone. We got buy-in. We were told, go forth. Now where do I start? I've been working with Abi Noda for the last year or two; he's the co-founder and CEO of DX. We found that there are about seven steps that companies walk through. This is based on conversations with hundreds of teams: large companies, small companies, tiny startups, several different industry verticals, and they all follow the same general pattern. Step one, just talk to someone. We want to collect some data, but often the best, fastest, and cheapest way to get good data is just to go talk to a handful of folks. Next is to start small and get a quick win. Step three is to use data.

At some point, you'll probably need more signal and more data. Then you'll decide strategy and priority, because you will probably walk away from your data collection efforts with a laundry list of things that need to be improved. We can think about prioritizing a couple of different ways; I'll talk about that. Then we need to sell the strategy. We need to convince people that this is the right move. We need to drive change. I say at your scale because it could be your job at your company. You could be a director or a VP. You could be an engineer who's just decided, "This sucks. I want things to be better. How can I create change where I am?" There's also some middle ground in between. After you've made some changes, evaluate and show value. The nice thing about this framework is you can start wherever you are now. If you're walking in and nothing's been done, you might want to start at step one. If you're walking into an ongoing initiative that's been happening for a couple of years, you might want to jump in at deciding strategy and priority.

We always want to start with listening. I recommend that whichever step you start at, go talk to a handful of folks. It'll give you a much better idea of what it's like for them, what their current challenges are, what they're facing. Before any surveys, before any metrics, just ask what's getting in their way. Ask what they swear at every day. They will tell you. Developers will tell you what sucks. Then we start collecting some patterns. We can map the workflow. We can take a look at the processes. Often, we see friction at handoff points, points between systems.

Then the next step is to pick a first win carefully. This is where we want a quick win. Here's my quick heuristic. First, it's visible: it matters enough that everyone can see it. A developer will notice it, and we can communicate it to an executive in a way that is meaningful. Next, it's achievable quickly. We want to be thinking about weeks here, not months; a quarter at the most. And it can benefit multiple teams. You will probably start with one team and do a deep dive, but the problem should be big enough that once you find a win in one or two teams, you can generalize it across many different groups. This is great for a quick win.

After that, I often see teams come back and say, now I don't know what to do next. I did the obvious low-hanging fruit. I did the obvious quick wins. Now we want to capture some data. There are lots of different kinds of data here. System data: any kind of telemetry that we have, any logs. We also want to look at outcomes: what do developers achieve? Productivity, lead time, defect rates. And then impact, which is really what matters to the business. Do we see revenue, satisfaction, time to value? Those are the trickier ones; I would at least start with the first two categories. Don't count out things like surveys. It can be a lot faster and a lot cheaper to get pretty reliable survey data across hundreds or thousands of engineers than to instrument all of the systems. Instrumentation is expensive. What if we've instrumented the wrong thing?

Once we have all that data, we should prioritize. I like to use the RICE framework. We're looking at reach: how many people will be affected by this? Impact: how much will it improve their work? It could be hours saved. Confidence: how certain are we that this is possible or doable? For those three, higher is better. Then effort, which we want to be low: how difficult is this? How much time will it take? How much headcount? How much engineering time? Here's one example. Let's say we have three possibilities: flaky test automation, streamlining the code review process, and monitoring dashboards. The reach on the first two is high. On monitoring dashboards, it's about medium.

The impact, similarly for the first two, is high. For monitoring dashboards, it’s a medium. Now when we get to flaky test automation, we’re fairly confident that we can do this. We know we can do the code review process. Monitoring dashboards, we also know we can figure this out. Now, effort. Flaky test automation is very high. A lot of times, this takes a lot of effort. AI is helping here. Streamline code review process, effort could be low. This could be super straightforward. Then monitoring dashboards is about a medium.
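A quick sketch of how those ratings combine into a single RICE score. Mapping high/medium/low onto a 1-3 scale is my assumption for illustration, not part of the framework as presented:

```python
# Sketch: Quick RICE scoring. The 1-3 numeric scale for the
# high/medium/low ratings is an assumption for illustration.
SCALE = {"low": 1, "medium": 2, "high": 3}

def rice_score(reach, impact, confidence, effort):
    # Higher reach, impact, and confidence raise the score; effort divides it.
    return SCALE[reach] * SCALE[impact] * SCALE[confidence] / SCALE[effort]

candidates = {
    "flaky test automation":  rice_score("high", "high", "high", "high"),
    "streamline code review": rice_score("high", "high", "high", "low"),
    "monitoring dashboards":  rice_score("medium", "medium", "high", "medium"),
}
# "streamline code review" scores highest (27 vs. 9 and 6),
# matching the walkthrough above.
```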

In this example, I would start with the code review process: broad impact, high confidence, quick wins, not too much effort. The other nice thing is that we can experiment locally, see what's working, and roll it out across a couple of other teams. It's a quick and easy way to jump in and find improvements. Beyond just RICE, it's really helpful to take a step back and consider a few other things that are specific to your context. I already mentioned reach, but also frequency: how often does it occur? And pain severity: is this a minor annoyance or a complete blocker? That's part of frequency and impact. Many times, people will absolutely hate something they only have to do monthly, versus something that's a slight annoyance but happens every single day.

Strategic alignment: does this support business priorities? Maybe even, is there already an ongoing effort in this area that we can catch a tailwind from? Can we align ourselves to something that's already ongoing so we're not starting from scratch? Dependencies: would it unlock other things? There have been a couple of times where I saw something that looked pretty obvious. There was a top two, but the second one would unlock everything else. It was a blocker for everything else that we wanted to do, so we started there.

There are some common mistakes that we often see in improvement efforts. The first is just starting with data, throwing metrics at the thing. This can be challenging for a couple of reasons. One is that data alone doesn't speak to people. Some people really love metrics, but even if you love a data point, you always want a story to go with it. The other challenge is that if you're starting with metrics and this is not a very mature measurement program, you're just taking whatever data points are already conveniently available, which are probably not meant to measure or communicate the thing we're trying to communicate.

The second mistake is trying to do absolutely everything at once. We can't do everything. It's much better if we can pick one thing and do it well. The third is optimizing only for execs. If a tree falls in the woods and no one hears it, does it make a sound? If developers don't notice the improvement, this is going to be really challenging. The fourth is treating it like a project, where we kick one thing out and we're done. We really want to be thinking about sustainable improvement here, engineering improvements into our system as we go.

Scaling DevEx: Three Patterns

I said we also want to be working at our scale. Local scope: any IC can do this. Any IC on a team can pick something small, something that's obvious to your team. Maybe you can tackle it on your own, or suggest a Friday hack day to your manager. If we can prove value and document learnings, that's really impactful, because then you can start sharing it across the company and across the organization. Next is the middle ground. If you're maybe an engineering manager or a second-line manager, again, look for those common challenges. Build a coalition. Get folks who care deeply about this. Find your champions. Start creating more reusable solutions that other folks can use: tooling, runbooks.

Then, finally, if this is your day job, if you're embedded in the org, you can think differently about the scope of projects that you tackle. You can think about resource allocation and flexing resources back and forth. We can measure a little more systematically. Again, at local scope, think about our own processes and tools. What does my team use now? What can I improve myself? Here is where you really are the example that creates the demand, because good news tends to travel pretty fast. If things are a lot better on a team or a couple of teams, that works really well. For the middle ground, this is where we want a few more proof points and slightly more scalable solutions. When it's just you and your team, you can start just by doing custom work. We still don't have mandates here. At global scope, this is now strategic infrastructure. We start thinking in terms of larger-scale infrastructure efforts.

The Impact of AI

How does all of this change with AI? Some things don’t change at all. We still have the fundamentals. The SPACE framework helps us think about the data points we want to measure. S is for satisfaction. Are developers satisfied with the tools or the processes they have? P is performance. What’s the outcome? Quality outcomes. What’s our test pass rate, or what’s our build fail rate? A is activity metrics. These are the ones that can be counted. This is what most people think about when we think about data. This is lines of code. This is the number of pull requests. Anything that can be counted. C is communication and collaboration. This could be PRs. This could be API calls. This could be how often developers are taking meetings.

Then E is efficiency and flow: how long does it take to get something done? These still apply with AI. The specific focus might be different; we might be asking about an AI tool instead of a generalized workflow. We have tons of examples here. DORA metrics, I mentioned, capture end to end. This is still super important with AI. If anything, we're seeing that a lot of folks are doubling down on the inner loop even more than before: just writing the code, submitting it, doing code review.

Everything past there was just not my problem. DORA focuses on that, because we're seeing that while the inner loop is speeding up, the outer loop is still just as slow, and maybe even struggling. We do see that workflows have changed. Now, instead of just measuring lines of code or the feedback loop on a PR, we should also be looking at prompting. How long does it take to get an answer to a prompt? How long does it take to make progress across a handful of prompts so that we can move forward? When we're looking at agentic workflows, how are developers thinking about steering and creating the agents? Then we have to review and validate AI-generated code. Now we probably want to ask questions like, do our review processes catch and improve AI-generated code as well as they did historically? What is the survivability of code that was written by AI? Does it still make it through the pipeline? Is security different? Is reliability different? What changes here? Lines of code was always a bad metric. Has anyone been in a meeting where someone wanted to measure lines of code?

If not, bless you, sweet summer child. This comes up all the time. I will say one thing I like is that AI has made it fairly obvious that lines of code is an absolute nonsense metric, because now we're getting way too many lines of code. It's just so verbose. We have so many comments. We know that sometimes the best thing you can do is delete code. How do we want to think about this now? Again, prompting efficiency: how long does it take to get a suggestion that lives? How hard is it to validate? Can we use AI to validate? Are people just spending all of their time in code reviews now because they're getting workslop? How do we want to think about trust calibration? This is super important, because if we trust too much, we're shipping bugs. If we trust too little, we might as well not use AI; we're wasting so much time triple-checking. Workflow delegation: what work goes to AI and what work goes to humans? What are the implications of that? How does it change our cognitive load? How does it change our understanding of the system and our mental models? How does it impact speed?

The most important thing here, I would say, is to use what in research we call mixed methods. We want data from systems, and we also want data from people. The system data can tell us what is happening, maybe what is going wrong; it often can't tell you why. This example came up with a team I was chatting with a couple of months ago. There was a huge rejection rate on code suggestions for an auth component, and the owning team just couldn't figure out why. They're like, this isn't working, is it awful? The answer happened to be that the AI was hallucinating a lot of security vulnerabilities. That gave them an action: they constrained AI on a lot of security-critical code, which ended up saving the team a lot of time.

Now, they're still exploring avenues and solutions in this space, but it freed up so much time. It still takes time to jump in, read code, and reject it, although I think some folks were just auto-rejecting, because we know there are code smells for AI code. Now we knew why. I would invite folks to think about doing some of this yourself. What are we waiting for? We have so much opportunity in the code and the codebases that we're writing. The organizations that are winning with AI right now are not just adopting new tools. They haven't just given everyone Gemini CLI, or GitHub Copilot, or Cursor; pick your coding tool of choice this week, there are at least a dozen good ones. What they're doing is really removing the friction so that developers can get that code in front of a customer, run an experiment, and actually move faster. All of the big companies that I'm seeing are actively trying to find ways to remove that friction now. I would suggest we all think about this in our own work.

Action Items

If you wanted to do just one thing, here’s one suggestion. For ICs, map your workflow for a week. What are the times when you lose 30 minutes to just nonsense that shouldn’t be a thing? Where is there a challenge? Then, as the slide says, interview one peer: chat with someone else on your team and see if they see the same friction points that you do. If you’re a team lead, do a quick 30-minute retro. What slowed us down this week that wasn’t the work? Or maybe, what slowed us down, again, that shouldn’t have taken that long? Because code review is work, but is something in that process really difficult? Was a legal review taking a long time, and does it always take a long time? Because again, a lot of times, what’s holding us back, and what the biggest points of friction are, is a process. For leaders, go talk to at least three people. Maybe do some napkin math for some of the friction points. Just get an idea and a feel for where the friction is in the organization. Remember that small changes really compound into transformation. Once you identify that one thing, then ask yourself, what is the smallest thing I can do? What is the easiest and the quickest thing I can do to remove this friction point?
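The napkin math mentioned above can be as simple as a few lines of arithmetic. Here is a minimal sketch; every number in it is an invented assumption, not data from the talk:

```python
# Napkin math for one friction point: recurring small losses compound.
# All inputs below are made-up assumptions for illustration.
minutes_lost_per_dev_per_day = 30   # e.g., waiting on a flaky pipeline
team_size = 8                       # developers affected
working_days_per_year = 220         # rough working days

hours_lost_per_year = (
    minutes_lost_per_dev_per_day * team_size * working_days_per_year / 60
)

print(f"Roughly {hours_lost_per_year:.0f} hours/year lost to this one friction point")
```

Even rough numbers like these are usually enough to decide whether a friction point is worth a conversation with leadership.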

Resources

I have a new book coming out. It covers a lot of these steps in detail if anyone is interested. This QR code will point you to a website where we have about 100 pages of free workbooks. We have templates and frameworks. I have the RICE prioritization framework with examples. I have Quick RICE, which is heuristics, and I have more detailed RICE. We have interview guides. I have example surveys and extra survey questions. Because sometimes, if we want to measure friction, there are about three or four questions you can ask. That’s in there. Tons of rubrics and spreadsheets.
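For readers unfamiliar with the RICE prioritization framework mentioned above, the standard score is Reach × Impact × Confidence ÷ Effort. This sketch shows the arithmetic for comparing two candidate friction fixes; the function name and every input value are hypothetical illustrations, not from the talk’s workbooks:

```python
# RICE scoring sketch (standard formula: Reach * Impact * Confidence / Effort).
def rice_score(reach, impact, confidence, effort):
    """reach: people affected per quarter; impact: rough 0.25-3 scale;
    confidence: 0.0-1.0; effort: person-weeks."""
    return (reach * impact * confidence) / effort

# Two made-up candidate fixes to compare:
flaky_ci = rice_score(reach=40, impact=2, confidence=0.8, effort=2)
slow_review = rice_score(reach=25, impact=1, confidence=0.5, effort=1)

print(f"Fix flaky CI:        {flaky_ci}")     # higher score = do first
print(f"Speed up reviews:    {slow_review}")
```

The point of the score isn’t precision; it’s forcing explicit, comparable assumptions so the team argues about inputs instead of gut feelings.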

 
