Transcript
Davis: I’m going to be talking about enabling developer productivity through an internal and intentional evolution of the platform. In case my description left you wondering or confused, this is not about Kubernetes, and it’s not about internal development platforms or internal developer platforms, but I do want to talk about platforms. I’m Jennifer Davis. I’m an engineering manager at Google. I am also an author. I’m very passionate about building communities and enabling people in different ways.
Developer Productivity
The first part of this, I want to talk about developer productivity. What is it? Is it lines of code, the number of artifacts you make, getting into that flow? No, not at all. It’s hard to describe. Let’s think about it from a business outcome perspective. Ultimately, the business wants us to build more value, faster, all the time. It’s not about an individual’s value. It’s about whatever your company is building, whether it’s a product or a service. If you’re building cars, it’s about building the whole car. If you focus on a single element of that car, the seats or the windows, you’re going to create artificial bottlenecks that impact the whole team, and you’ll be wasting your opportunities to improve. It’s not about the individual, it’s about the team and the whole.
If we look at the research coming out of the DevOps research folks at DORA, it’s been about nine years now that they’ve been doing this research, and there’s a new survey out right now that you can take and participate in. What they’ve done is identify a set of outcomes that are predicted by team performance and organizational performance, and all of these are driven by capabilities.
You can predict how performant a team is based on a number of things. When we think about it, it’s those components, those capabilities that define the developer experience. Those are things like, how hard is it to write code? How hard is it to maintain code? How quickly can you get feedback about your code that you’ve written? How much autonomy do you have in choosing things and selecting how you’re going to solve problems? Maybe you don’t get to say this is the problem, but you get to choose how you do those things. Those are the capabilities that can predict your performance. It’s about your developer experience.
There are hindrances to dev experience. Think about Conway’s Law: organizations design systems that mirror their communication structures, so the same communication challenges show up in anything they produce and deliver. One of my little hobbies is to look at different services out there and guess, from the flaws I see, what’s going on within that company based on its communication structures. In addition to the dev experience for me personally, or for you individually, it’s about thinking beyond ourselves. That’s something some people have innately. It’s like, I see a problem.
I can imagine a world in which it’s better, improved not just for one person but for a whole set of folks. It’s ok if this is not something that’s innate for you; an individual’s experience does not define the DevEx of a product or a service. It’s the set of experiences that people have. You might not care, or need to care, about DevEx. You might be the only one in a field. You might not have any competition. But I promise you, if your service or product is valuable, someone else is going to look at your terrible experience and say, that’s an opportunity for me to do something better. You can choose not to care about DevEx, but ultimately you’re going to want to care about it at some point.
How do we combat this? With this role called DevRel. DevRel is an interdisciplinary role that centers around people and relationships. Every company does DevRel a little differently, but it has the same components of engineering and advocacy and product. It’s a way to combat shipping your org chart with your products and services. Everyone should embrace a little bit of DevRel. I spoke at QCon London recently, and I was talking about DevRel, and I had totally made this assumption that everybody knows what DevRel is.
From some of the feedback I got, I realized I was doing everyone a disservice. I was doing the exact thing that I talk about, which is making assumptions about what everybody knows. We’re all vulnerable to this: making assumptions, and assuming this is the way. Ultimately, DevRel is going to help you increase adoption, increase engagement, increase enablement, change perceptions about what your company does, but also change your own perceptions about what your company does. It’s going to help you identify and drive change that better fits your products or services into the market.
I want to share some of my experience right now, what’s going on and what I do. I said I’m at Google. You might think Google is this one huge company, so what would we have to worry about? It’s not really one monolith; it’s more like a lot of us all working at the same company, enabled by different sets of tools and possibilities. My team within DevRel is an engineering team. We create samples. I get the meta of the meta in terms of DevEx: we build the platform that helps other contributors at our company and our partners to build samples. Those samples in themselves are a measure of the developer experience that our products and services have.
Once upon a time, we had a single product, App Engine, and so DevRel built samples for App Engine. Then, as Google Cloud grew, we had a split, a reorg to have focused capabilities per language, because we really care about the idiomatic language experience: what every developer coming into an environment is looking for, what they’re trying to solve. If we show them a Java sample that actually follows Node practices, it’s not going to sit well; people are going to have a bad experience. They’re going to feel friction. We reorged based on language. Then that was insufficient, we had so many products. Then we reorged based on product. A platform of shared capabilities for building samples emerged, but it was all fractured, and things kept getting bolted on based on what people needed.
What that meant is, if a product was doing well, they would staff the DevRel team to build the samples, to build the documentation. You end up with an uneven set of samples in the catalog supporting customers. Customers aren’t coming around going, I want sample A to do thing A. Sometimes that’s the case, but a lot of the time it’s about a journey. I’m trying to build a website, what do I need? I’m trying to build Pub/Sub-like message delivery, how do I do this? When platform management is handled on a volunteer basis, with people contacting different stakeholders and managing their expectations, and then a bunch of top-down driven initiatives on top of that, you’re not going to be able to accelerate as different parts of the org need different samples. You’re stuck. We were shipping our org chart.
We started thinking about what it is we’re actually trying to do. Step one, we want to increase value. What’s our value? It’s not just samples. It’s code that our users find valuable. Unless our users are able to be successful with these samples, and with their DevEx of using the platform, we’re not successful; we’re not doing something that’s meaningful. Ultimately, as humans, we want to build something that has meaning. We came up with a set of principles. What are the things that we want to minimize? What do we want to stop doing? What are we going to try to eliminate completely from the work that’s distracting us? We don’t want to build the wrong things. We want to minimize how much work we have in progress, so we can actually deliver samples to customers.
We want to stop context switching from the platform, to staffing, to our stakeholders, to individual samples, to friction logging. We don’t want to work on bugs. That feels weird. Why wouldn’t you fix bugs? Because how do you know that the samples that have bugs are actually the right set of samples? You’re selling yourself short by focusing on something you don’t even know has value yet. Which brings us to the final two really critical pieces: we weren’t learning from our mistakes, and we weren’t following established practices. Everyone would run into a particular problem. They’d share it with the product teams, but it didn’t go across all of DevRel, and so everyone kept experiencing the same sets of challenges. When it comes to samples, partly, samples are part of the individual products, but partly, samples are a product in themselves. That set of samples that you provide to people, that set of samples that supports your documentation: if you have bad samples, you’re going to hinder people’s trust in all of your samples. You have to think about that as a whole, and you want a consistent voice in those samples.
The first step in thinking about your platform is identifying the value that you’re creating and the shared capabilities of the platform. What thing are you taking care of for other folks, so they don’t all have to be specialists in it? Where are we going? This is specific to my team. There is no single platform that works for everybody and everybody’s use case. It’s a journey that we all have to discover, which is part of why I’m so passionate about this subject. This is where we’re going: my hope is to enable and empower lots of contributors, including ourselves, with a lightweight, flexible platform that enables the creation of a much broader set of samples, hence the bigger purple cloud.
Platforms
I said platform, but what is a platform? Kubernetes is a platform. Google Cloud is a platform. A platform is the combination of much more than just the tools and technology. It’s the tools and the technology. It’s the processes. It’s the workflows. It’s the collaboration, the communication. It’s learning and development. It’s the environment and culture that you create. The platform, of course, includes the tools and technologies. We can think of this as: how are you solving your particular problem? If you start out trying to solve the technology challenges and just implement random tools, you’re not delivering your underlying value.
You can’t just throw away what exists. If you go and architect something and just try to create the new thing, all of the things that were built in the past are not there, and people are going to get really frustrated and angry, and you’re not going to have adoption of your platform. You have to think about your constraints, what exists today. You also have to question your assumptions. I’m going to talk a little bit later about some of the assumptions we’ve made and how we’ve changed them.
Ultimately, platforms need to be lightweight and evolve. One of the constraints we have is that we do all our sample development on GitHub. That might seem strange, or it might seem obvious: of course that makes sense, you’re making it open and available to people. But internally, contributors want to use the tools they have available, that they know. Why can’t we just use the Google tools? They come with all these extra measurements. Why can’t we just do that? There are assumptions embedded in that, assumptions that we have as well. We had to question ourselves: are we using the right set of workflows? We determined that yes, we need to create these samples in the open, available for people to see, to be able to explore them and build trust.
We also need to build them here, in this external place, because that’s what our customers are doing. We are the zeroth customer for understanding how this platform works. If we’re leveraging internal things to build stuff, we’re not seeing all the friction. We’re not experiencing the friction. We’re not getting that feedback back to the products. If something is that bad, it needs to be improved. It’s being aware of your constraints when it comes to your tools and technology, so that you know when, what, and how to change.
You also want to minimize human toil. It’s not about doing a thing by hand just because it’s hard; there are a lot of things we can get machines to do. For example, we use GitHub Actions that label incoming pull requests with their context, and route that information to whoever is most apt and expert in that area to handle the review. We also set up linting, so that when something comes in, people get fast feedback: is there something they need to correct before it actually makes it into the system, or before someone reviews it? We establish sets of guidelines. In addition to Google style guides for all the different languages we support, we set explicit, tiered responsibilities and expectations for our samples, to encourage people to follow them and keep that single voice.
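As a rough sketch of that kind of triage automation (the paths, labels, and reviewer names below are invented for illustration, not our actual configuration), the core of it is just mapping changed files to product areas:

```python
# Hypothetical PR triage: map changed file paths to product-area labels
# and suggested reviewers. Paths, labels, and logins are made up.
AREA_RULES = {
    "pubsub/": ("api: pubsub", ["pubsub-reviewer"]),
    "storage/": ("api: storage", ["storage-reviewer"]),
    "run/": ("api: run", ["run-reviewer"]),
}

def triage(changed_files):
    """Return the labels to apply and the reviewers to request."""
    labels, reviewers = set(), set()
    for path in changed_files:
        for prefix, (label, owners) in AREA_RULES.items():
            if path.startswith(prefix):
                labels.add(label)
                reviewers.update(owners)
    return sorted(labels), sorted(reviewers)

print(triage(["pubsub/quickstart/main.py", "storage/list_buckets.py"]))
# (['api: pubsub', 'api: storage'], ['pubsub-reviewer', 'storage-reviewer'])
```

A workflow step would then apply the labels and request the reviews through the GitHub API.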
Platforms include the processes and workflows. Wherever you start from, that’s where you start, but you have to test your assumptions about those workflows. One of those I mentioned: using GitHub and working in the open. Within a team, you also want to establish a common work item vocabulary: basically, set how you’re going to handle issues that come in, have a single intake, and agree on how you assess the cost of building things. That way people understand and can communicate, and you build trust within the team. Trust is crucial for a high-performing team. If you have situations where people can question things because work isn’t out in the open, or there’s no transparency or consistency, you can have distrust.
That person isn’t working on the things you say are important, they’re just doing their own thing. That didn’t take much time, so where is the actual work and impact? By establishing a vocabulary for how we talk about work, it creates more confidence and understanding over time. It’s not about measuring the performance of individuals; it’s sharing across a team how the team is performing, so people can build trust.
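To make that common vocabulary concrete, here is a minimal sketch of the kind of shared intake schema a team might agree on; the categories and cost buckets are invented for illustration:

```python
# Hypothetical shared work-item vocabulary: one intake, agreed categories,
# and a coarse cost scale so estimates mean the same thing to everyone.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    NEW_SAMPLE = "new sample"
    UPDATE = "update existing sample"
    BUG = "bug report"
    PLATFORM = "platform/infrastructure"

class Cost(Enum):
    SMALL = "hours"
    MEDIUM = "days"
    LARGE = "weeks"

@dataclass
class WorkItem:
    title: str
    kind: Kind
    cost: Cost
    stakeholder: str  # who asked, so effort can be tied back to value

item = WorkItem("Add Pub/Sub quickstart", Kind.NEW_SAMPLE, Cost.MEDIUM, "Pub/Sub team")
```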
We also do something called friction logging, and it’s not just for products in development; it’s also an activity that helps us work together and share information. In DevRel, one of the concerns is: is there a half-life? Can you only be in DevRel for some amount of time? The truth is, I think everybody should be doing DevRel, and also trying things out and exploring them as a zeroth customer. You get the opportunity to take on the empathy, look at the DevEx of your product or your service, explore a journey, and provide meaningful feedback to the people building that particular product or service.
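A friction log is essentially a structured record of one journey. A minimal sketch of the shape it might take (the severity scale and fields here are illustrative, not our exact template):

```python
# Hypothetical friction-log structure: one entry per step of a journey,
# graded by severity, so feedback to product teams is concrete.
from dataclasses import dataclass, field

@dataclass
class FrictionEntry:
    step: str      # what you were trying to do
    severity: str  # e.g. "delight", "papercut", "blocker"
    notes: str

@dataclass
class FrictionLog:
    journey: str   # e.g. "deploy a hello-world service from scratch"
    entries: list = field(default_factory=list)

log = FrictionLog("deploy a hello-world service from scratch")
log.entries.append(FrictionEntry(
    "enable the API", "blocker",
    "the error didn't say which permission was missing"))
```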
It also helps when onboarding someone. We’d do a friction log as a team exercise to understand what we’re assuming that we shouldn’t actually assume. Maybe there’s missing knowledge that we haven’t documented. Maybe there are practices that have evolved in the community and we’ve missed them, and this helps us instill and grow our culture. Which brings us to: the platform is the collaboration and communication. When you’re trying to solve something, it’s more than just you. It starts within the team. You definitely need open communication and an ability to provide feedback to each other, because if everyone is just saying, yes, that’s great, you’re not getting the critical feedback to be better or to understand different perspectives.
You need to establish that trust and ensure there’s no contempt happening on the team, or stonewalling, not answering people’s questions. Key to this is not building an us-and-them across the org. Especially within DevRel, we have stakeholders across the whole organization. We can’t get into a mindset of, they just do this, and it causes us so many problems, because the minute we do, we harm our ability to work with people. We have to come from a place of yes, and. There’s also a component of thinking through how you do this not just within your organization, but across the industry as well. The industry frames and changes things, and people have choice.
People are going to take a selection of different possibilities and build things upon them. When you choose a service from here and a service from there, GitLab, Datadog, you don’t have the ability to say, I’m going to ignore everything else and not care. Thinking about how we build our relationships and how we solve problems, both between teams and across the industry, is really crucial to driving performance.
It’s about learning and development. One of the first things we did, as we formed our new team and started looking across the platform holistically rather than at a specific set of products, was to ask: we have to be able to scale, so how are we going to scale? We ran a set of training where we taught everybody: how do you update a sample? How do you submit a PR? How do you submit a change list for our documentation, so the change you made actually shows up in the documentation? All of this is part of a normal cadence.
Instead of just saying people can figure it out themselves, actually take the opportunity and make time for people to uplevel. Make explicit documentation about what you chose to do and why, because ultimately, the decisions you make about anything have to be available for evaluation, or you’re going to keep evaluating the same options over and over as the platform changes, or the needs of the platform change, and new technologies and tools become available.
One of the things we do with the samples we create, as well as with the platform itself, is to document why we chose to do a thing. Why are we coding in the open? Why do we use GitHub? That’s a decision record. We want to take note of that. We also encourage sharing; we call it show and tell. It doesn’t come with any expectations. Sometimes people feel vulnerable about calling something a demo when it’s half-baked. It’s ok when it’s show and tell: you’re just showing and telling about something you learned, something you accomplished, something you tested out.
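To make the decision-record idea concrete, here is a minimal sketch of the shape one can take, using the GitHub decision mentioned above as the example; the fields follow common architecture-decision-record practice rather than any specific internal template:

```python
# Hypothetical decision-record shape, modeled on common ADR practice.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    title: str
    context: str       # constraints and assumptions at the time
    decision: str
    consequences: str  # what we accept by choosing this

adr = DecisionRecord(
    title="Develop samples in the open on GitHub",
    context="Contributors know GitHub, and it's where our customers build",
    decision="Keep all sample development in public GitHub repositories",
    consequences="We give up some internal tooling and built-in measurement",
)
```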
I encourage people to contribute to open-source projects. That seems redundant, because we work in open source, but here’s the challenge: it becomes too insular and siloed, and you never have to work with people outside it. When you go into other open-source projects, you learn better how to evolve your platform, because you see the different practices across the industry and can adjust your expectations of how things could or should work. It’s easy to get caught up in the today: here’s the focus, here’s the energy.
Contributing to an open-source project lets you step away and think about things in a slightly different way. Which brings us to: the platform is the environment and culture. What is it that you want out of your platform? What is the space that you want to create for people? Think about the team rituals and seed them with things that you care about. One of the ones I’ve seeded within my team is that we start our meetings with music; it gives people a little time. Because we’re a distributed team, we want time for people to connect with each other. Maybe they’ve been busy doing something all week, and the music provides a little easing into that sharing, where we talk through, how are you doing? That’s the team temperature check, using zones of regulation.
It gives people, if they’re feeling safe, the space and time to talk about how they’re doing and feeling, showing that mutual care for each other. They don’t have to share if they don’t want to. We end the meeting with kudos, just a little bit of gratitude; it sets everyone up for a happy rest of their day, maybe fuels them for the rest of the week. When people leave the team, we celebrate it. We don’t just say goodbye. We take time to celebrate the things we’ve built and done with them, and that builds more trust into the actual building of the platform itself. We play. We create samples and demos that go beyond just, here’s this thing. We think about, how do we engage our active whimsy? That makes it approachable for other people. We built this train demo; it’s open source. The concept was a game: can you build a set of components into a working architecture? The logic behind it ran on the cloud, so it was this meta-on-meta situation where we’re using cloud to build a test of cloud and build education.
Intentionally Evolving the Platform
I’ve talked a little bit about the platform. I’ve talked a little bit about developer performance. Then, how do you evolve the platform? First is keeping the people in mind, because the people are core. A lot of what I just talked about with parts of the platform is about people. You want to establish an active communication plan. You want to be telling and informing people on a regular rhythm. You want to make sure you know who you’re talking to and why. You want to create a RACI, which is basically setting up a plan of clear roles and responsibilities, who is responsible, accountable, consulted, and informed, so everyone knows. It’s like a contract: if you do these things, we’ll do those things. Once you’ve identified and documented them, you can embed them into the planning of your technology. Who has capabilities? What are those capabilities? What are the contracts that you’re making and establishing around samples?
For us, if the person who owns or is accountable for their sample doesn’t update it, and I can’t automatically update it, then it gets marked as something that can be archived. We’ve agreed to that contract based on the RACI. It’s connecting the effort to the value so people understand: why am I working on this particular thing? Because it helps build value over here. And it’s making sure to celebrate the wins.
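A sketch of how that contract can become mechanically checkable (the staleness threshold and record fields here are hypothetical; the point is that the RACI turns into an automated policy):

```python
# Hypothetical staleness check backing the RACI contract: if a sample
# can't be auto-updated and its accountable owner hasn't touched it
# within the threshold, flag it as an archive candidate.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # illustrative threshold

def archive_candidates(samples, now=None):
    """samples: dicts with name, owner, last_updated, auto_updatable."""
    now = now or datetime.utcnow()
    return [
        (s["name"], s["owner"])
        for s in samples
        if not s["auto_updatable"] and now - s["last_updated"] > STALE_AFTER
    ]
```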
You can identify the set of metrics that matter within your org; a starting place might be DORA or SPACE. When it comes to DORA, there’s a set of metrics, which I mentioned earlier, that have been shown to predict software delivery performance. That’s a starting place. For us, when I look at our environment, and these are updated metrics since the last time I gave a talk that included metrics: we have 13,798 samples that we need to monitor, update, and maintain. There are another approximately 6,000 samples that are not actually in our docs yet; we’re trying to reduce that count so that all of our samples are available in our documentation. We have 8,352 distinct use cases, meaning specific journeys that we’re explaining to our developers. How do we think about measuring performance, or the experience?
Remember, our platform ultimately has a double set of requirements. Right now, we’re focusing on our contributor metrics; ultimately, we want to empower the developers who come to use Google Cloud. Right now, we’re trying to grow our catalog, so our problem is quantity and quality. Our metrics have evolved slightly from the DORA metrics; you can see the hints of them. We want to think about how costly it is to update a sample and to catch problems with it. What is the right amount of effort we should spend on updating our own samples? Areas that are easy to measure are things like time to ship: from the point that you start to submit a PR to the point that it actually gets into documentation. It’s been shown that high-performing teams are able to do this in hours. It might not surprise you to hear that, as a baseline, it takes days to weeks for some of our samples to ship. That’s improved.
Rollbacks, for us, are when a sample goes out into the wild and then we have to go and make changes to it. It’s not literally rolling back production; it means something leaked through that caused problems and did not help people. Then there’s how often we’re delivering samples. A hard one for us to measure is, is our system green? How much should we be spending on testing our samples? What is that quality? This is the first set of metrics we’ve established to define how productive and effective our developers can be. Based on these metrics, we can change and adjust what we’re doing.
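As an illustration of what computing these might look like over hypothetical shipment records (the record fields are invented; a real pipeline would pull them from PR and docs timestamps):

```python
# Illustrative metric calculations over hypothetical shipment records,
# each with pr_opened/published datetimes and a rolled_back flag.
from datetime import timedelta
from statistics import median

def time_to_ship(records):
    """Median time from opening a PR to the sample landing in docs."""
    return median(r["published"] - r["pr_opened"] for r in records)

def rollback_rate(records):
    """Share of shipped samples that later needed a corrective change."""
    return sum(r["rolled_back"] for r in records) / len(records)

def ship_frequency(records, window=timedelta(days=7)):
    """Average samples shipped per window over the records' span."""
    published = sorted(r["published"] for r in records)
    span = published[-1] - published[0]
    return len(records) / max(span / window, 1.0)
```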
The first thing we did was friction log the sample contribution experience. We’ve done this multiple times, and we’ve gained additional information each time. If you think about how large the company is, and who could possibly help with samples, we could have a large set of folks working on them, except we’ve always thought about it as: here’s the set of folks, the DPEs, the developer program engineers, they’re the ones working on this code. We want to make it self-service. We want anybody who is interested in contributing samples to be empowered to contribute samples. We need to take on that experience: talk to the tech writers, talk to the advocates, talk to the sales engineers, talk to the support engineers, and find out what is hard about doing this. We’ve uncovered a lot.
One of the challenges we identified is in our review capacity, thinking about how long it takes for code to get into production. Part of this is, is someone available to review the code? We realized over time we’d built up a set of patterns. We had a whole mentoring program, and it took months to get to reviews. We just did not have the ability anymore to have that long a lead time. We are taking a risk: we want to trust people to do the right thing, but we want to hold them accountable and make sure we’re measuring and seeing the impact of people’s reviews. Then, we want to recognize the quality behaviors. We have little badges to showcase when people are quality reviewers.
We decided to eliminate flaky testing. Originally, we did some research and found that 78% of our alerts were noise. But the tests do surface some issues, so maybe we’d progressively fix the problem. We determined that we weren’t going to get there anytime soon and we should quell the noise, so we’ve eliminated our flaky tests. We’re trying out AI. I want to say that, for us, core to samples is trust. We know that people copy and paste our sample code directly into their production environments. I’m not saying that’s what they should do, but we recognize it’s what they do. There are areas we’ve found where we’re exploring, with small experiments, whether we can improve the overall experience.
When people file issues across our 100-and-something-odd repos, what if we could assess them more quickly and consistently, training on our previous issues to get better results in responding to specific types of issues? We’ve also looked at metadata generation: of those roughly 19,000 samples, approximately 7,000 are not embedded in documentation, partly because there’s no metadata associated with them, meaning a title and description, the intent of the sample. Because the model is trained specifically on our samples, it can provide context and help us initiate a set of descriptions that gets our samples into the autogenerated pages.
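A toy sketch of the metadata-backfill idea; the `generate` callable stands in for a model trained on our existing samples and is not a real API:

```python
# Toy sketch of metadata backfill for samples that lack titles and
# descriptions. `generate` is a placeholder for a model call, and the
# draft is meant to be reviewed by a human before publishing.
def backfill_metadata(sample_source: str, generate) -> dict:
    prompt = (
        "Write a one-line title and a short description of the intent "
        "of this code sample, for a documentation index:\n\n" + sample_source
    )
    draft = generate(prompt)  # hypothetical model call
    return {"title": draft["title"], "description": draft["description"]}
```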
We’ve also found it helpful for giving feedback on PRs. We have that set of extensive style guides, and it takes a reviewer, and a contributor, knowing and understanding all of those components. If we train a model directly on our style guides, we’re able to get a specific, helpful set of feedback that says, here’s where you’re having this problem, and links to the specific style guide issue. That provides a better experience.
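A similarly toy sketch of that review flow; `find_rule` stands in for the model trained on the style guides, and the rule fields are invented:

```python
# Toy sketch of style-guide feedback on a PR: find the most relevant
# rule for a flagged diff hunk and draft a comment linking to it.
def review_comment(diff_hunk: str, find_rule) -> str:
    rule = find_rule(diff_hunk)  # hypothetical: {"summary": ..., "url": ...}
    return f"Style: {rule['summary']}\nSee {rule['url']} for the full guideline."
```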
Recap
I’ve talked a lot about different pieces of this: what developer productivity and platforms are, my thoughts on them, and intentionally evolving your platform to deliver the value you’re trying to build for your company or your service or your platform. Ultimately, it’s really important, when we think about what the dev experience is, to invest some amount of time in that dev experience to help each one of us solve problems in ways that are better for us as an industry.
Questions and Answers
Participant 1: Imagine that I am an IC, an individual contributor, in my company, and my company has small silos: each sub-project has its own APIs, its own SDK. When my customers use those sub-projects, each SDK looks different. Being an IC in one of those projects, how can I influence my peers to start discussing developer productivity and how to have a cohesive experience across the whole platform?
Davis: It goes to the whole culture of the environment. When you’re in a space where your company is very siloed, and that’s what happens, you have to have some kind of leadership change that supports and encourages it. Coming from the ground up, if you are in this situation, you can reach out and create a technical leads program. That’s one of the things someone started at Google, actually. It encourages and starts discussions across teams; when people find these common problems, people want to help. Another part of it is navigating how you discuss and share, or advocate for, the problem when you talk to leaders. Say you’re an IC going to your technical lead, or you’re not considered an official technical lead, but you see a problem.
You might frame it as: there’s this problem, and I need to fix it. In this case, we’re shipping all these APIs, and they’re all different, and the user experience is not good. It seems obvious: there’s a problem, we should fix this. Ultimately, though, it comes down to communicating in the language of whoever you’re talking to and knowing what’s important to them. In your case, it’s really challenging, because you can inspire and get everyone on board, yes, but then do you have the investment to make the change? Depending on how people are motivated, you can say: all our competitors are doing this, look at that. That’s one way.
Another way is, “I did a friction log. Here are the things that create a lot of friction”. Talk to support engineers. Get that support there. If you can reduce support costs, because those are very expensive, by the time something is problematic in the environment and a customer is reporting it, that’s costly. Or if you can improve people’s productivity, that’s the set of things that can change leaders’ minds. You don’t talk about it from the problem. You talk about it from the outcome and how it will support things.
Participant 2: You have 15 awesome examples of things to work on, if you were to pick one to start with, which one would it be?
Davis: My first step was figuring out what the problem was. I’ve described a lot of problems, but I didn’t describe the big one. The big one is fragmentation, which means we’re spending a lot of effort in the wrong places. The very first thing is to figure out where you’re spending that effort. Then navigate how you talk to people, your leadership, your peers, your reports, whatever the case is, and identify how you can help people change their minds. It’s not easy. Getting people to think about disabling the flaky bot was hard. It takes time to get to a decision where people are comfortable, because you’re making change. You don’t want to do wholesale big changes, so you have to identify: what is going on? What are the risks? What are people afraid of? What are people wanting? What are people valuing? Once you establish that, you can tackle whatever the next thing is, and be willing to fail.
Participant 3: I have a question related to one of the measurements that I look at when I try to measure the engagement and motivation of my developers and data scientists; I would just like to get your opinion on it. They are telling us that they would like to see the connection between their actual work and the mission and vision of the company. Sometimes for us as managers and leaders it’s easy to see that kind of connection, but it’s hard to break it down into actual projects, and most importantly, to show them the linkage between their everyday work and the vision and mission of the company or the organization. What would be your advice for this kind of work as a manager and leader?
Davis: I’m going to add a little more context. With samples, one of the things we want is trust. In open source, you really want people to trust you. Any time you talk about tracking and data collection, even though you want to collect data to improve, not to do anything bad, it causes problems. Then, how do you, as an IC at a company, identify what changes actually matter and what’s valuable? Take our samples as a whole: people can go to GitHub and copy and paste, but how do I know it’s actually helpful?
One thing is, depending on the tracking capabilities you have within your teams, you can identify and see how many things are deployed. We have something called Jumpstart solutions, where we can see direct impact: if someone deploys a solution, how long it stays deployed, how it evolves. Are they sticky? Are we enabling people? Are they learning more? What does that impact? When people can see those real numbers of, you’ve enabled this, you’ve engaged that, it’s great, but it’s tricky. You have to map things into a different framing.
Part of this is getting people to talk about what they’re working on, then tying it explicitly into the larger org’s goals, and repeating the message over again: “This is to do this, and it’s driving these sets of changes”. And recognize when they’re not seeing that value return, when the feedback loop isn’t coming back and changing their work. If you’re not doing retros or post-mortems or whatever you want to call them, and you’re not incorporating change, it just feels like they’re throwing stuff out into the void. It’s really important to incorporate practices that also enable the learning loop.
Participant 4: You were talking about developer productivity. We have several smart developers on the team who want to be more productive than they are right now, but they all have different definitions of what productive means. What’s your recommendation on reconciling those definitions, and how to do it in politically nice terms without offending anyone too much?
Davis: When we think about productivity, that’s why part of it is productivity of a team as a whole versus productivity of individuals. Individuals can be productive in whatever way they want; however they measure it, that’s great. For me, I will tell my boss, I need my cookies. They’re not real cookies; it’s just, good job, that’s my cookie. It’s not even quantitative; it’s just that occasionally I need that. I have my own set of metrics for my performance. It’s one of the reasons this is really crucial: what you measure is going to influence what you get. If you say I need lines of code, you’re going to get more lines of code.
If you say, I want clicks to a URL, I know how to write a nice little tool that will automate clicks to a URL, because I care about different things. To shift what you’re saying a little bit: it’s ok for everyone to have different measures of productivity, but clearly articulate, with a common set of work vocabulary, what the goal is. We’re building cars; your part is this cog. If you have too many cogs, this other piece needs focus, and this is creating a bottleneck. Encourage people to have mutual care and reciprocal trust, to engage and enable each other. It’s also really important as a manager to note when someone is not performing and to manage that in a kind way, because ignoring performance issues harms the team.