
Platform Engineering: Lessons from the Rise and Fall of eBay Velocity

News Room | Published 14 April 2026, last updated 7:31 AM

Transcript

Randy Shoup: I want to talk about lessons from the rise and fall of eBay Velocity. I'm going to tell you the story of what so far has been the biggest achievement of my professional life. Then I'm going to tell you how I got fired for it. We could also have called this talk "How we doubled engineering productivity at eBay (which we did) but still didn't save the company (which we didn't)". I don't know how many of you are old enough to remember the start of eBay. I'm old, so I actually do remember when eBay started.

This was back in 1995, the beginning of the web. It might be hard to remember now, but in the first 10 years of eBay’s existence, it was really a very pioneering technology company. Not just like pioneering in the business model, which it was, but also really pioneering in terms of technology. Here are some things that eBay invented or co-invented along with other places at the same time during the first 10 years of its existence. Database sharding. A real-time search engine, which had never been built on the planet before 2003. Eventual consistency at large scale as distinct from doing everything in these monster synchronous database transactions. Distributed tracing, we had that in 2000. Centralized logging, we had that in 2000.

Feature flags, we called it something different, but we had that in the early 2000s. Guaranteed messaging, so think Kafka with a transactional outbox and idempotent consumers and readback. It’s basically Kafka, but in 2006. SLO-driven configuration of system software. Circuit breakers, we called it something different. Graceful degradation. Staged cluster deployment. Then automated, coordinated multi-cluster rollout. That’s not too bad for the beginning part of the web. That was 30 years ago.

A Household Name with a Flat Business – 2007+

Here's what's happened since then. In 2007, eBay's gross merchandise volume, meaning the sum total of the goods and services transacted through eBay, was $50 billion USD. Today it is about $75 billion USD. That's one and a half times growth. This is what U.S. e-commerce has done over that period: U.S. e-commerce has grown 8x, while eBay GMV has grown 1.5x.

If you correct for inflation, which has been 56% since 2007, it looks like this. What happened? I've been there twice; the second part of my story starts in 2020, when the CTO I'd worked with before came to me and said, "Randy, I need you to come back as our chief architect. I need you to shake things up, and I need you to help me bring eBay into the modern world". That was a pretty exciting idea, and I was really excited about it because I actually really love eBay. Here's what eBay looked like in 2020. We had 3,000 engineers across roughly 400 teams in what we called the core product organization. That's basically product engineering, the customer-facing stuff and everything behind it.

Then 2,000 engineers in what we called the core technology division. That's all aspects of system software and infrastructure; eBay maintains its own proprietary data centers, so all of that fell to those 2,000 engineers. Also true at that time: eBay had multi-quarter initiatives spanning 50 teams, and sometimes more than 50 teams, plus 4,500 applications and services around the site that were all actively in use. The deployment frequency was about once or twice a month on average for each of those applications. The lead time for change, the time between a developer committing her code and that code actually showing up on the site, was 10 days. I had my work cut out for me.

Assessing eBay – Product Life Cycle

Of course, any time you want to do some kind of transformation or introduce platform engineering ideas, you really need to assess where you are. I started with what we would call in Lean a value stream map: I looked end-to-end at what the software life cycle looked like. These are my words for it; you can use any words you like.

To my mind, the software or the product life cycle has these phases. There’s planning, which is, how does an idea become a project? There’s software development, how does a project become committed code? There’s software delivery, how does committed code become a feature that customers can use on the site? Then post-release iteration is how we change and iterate on that feature in real time. I couldn’t talk to all 5,000 engineers, of course, but I surveyed a cross-section across the eBay engineering ecosystem. We found very consistently a bunch of problems that everybody was facing.

On the planning side, there was lots of inter-team coordination, tons of dependencies, and literally every team at eBay had too much work in progress. In terms of software development: lots of challenges with build and test time, and lots of context switching for individual developers. A very highly coupled architecture, which, as you can imagine with my chief architect hat on, was something I was concerned about. Really no service contracts between the individual services of those 4,500 applications and services, and tons of hidden work behind the features we were delivering.

In terms of software delivery, minimal pipelines. There were pipelines, but they weren’t great. Lots of issues with our common staging environment, still a bunch of manual testing in a lot of areas. No fully automated rollout all the way from commit to the site. No canary deployments.

Then even though we had co-invented feature flags 20 years earlier, they weren’t being used effectively or much at all. In terms of post-release iteration, we had lots of gaps in terms of our monitoring, tracking issues, and what I call dysfunctional experimentation, where some teams were not doing any experimentation at all, and other teams were experimenting to the nth degree.

Lots to fix. Should we do them all at once? Absolutely not. We decided to focus on the central area around software development and software delivery. Why? Because if we can improve software delivery, it makes everything else possible by enabling faster change and reducing the cost of change. Back to Nicole Forsgren's wonderful keynote: by shrinking the size of the changes you make, instead of iterating once a month and getting 12 bites at the apple during the year, what if we iterated every day and got 365? What we wanted to get was this.

We wanted, from a planning perspective, instead of big upfront planning that you did the year before, instead rolling planning with small cheap experiments, and then if and only if something is successful, only then do we double down with a big massive coordinated project. From software development, we would like to have small batch sizes. We would like to have very fast build and test iteration. We would like for every developer to check in his or her code every day, so daily merges and deploys. We would like to have a decoupled architecture.

Software delivery: we wanted a fully automated test and deployment pipeline. We would love it if it went one hour from commit to deploy rather than 10 days, and we wanted to iterate in production with feature flags. Then, finally, on the post-release iteration side, our goals were end-to-end monitoring of user behavior, tracking everywhere, small, cheap experiments again, and rapid feedback on the results. That's what we wanted to do.
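Iterating in production with feature flags, as described above, usually comes down to a percentage-based rollout check at the call site. Here is a minimal, hypothetical sketch; the flag names, storage, and bucketing scheme are illustrative assumptions, not eBay's actual flag system.

```python
# Minimal feature-flag sketch with deterministic percentage rollout.
# FLAGS and the flag names are purely illustrative.
import hashlib

FLAGS = {
    # flag name -> percentage of users who should see the new code path
    "new_checkout_flow": 25,
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a rollout percentage."""
    pct = FLAGS.get(flag, 0)
    # Hash flag+user so each flag rolls out to an independent user slice,
    # and the same user always gets the same answer for the same flag.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < pct

def checkout(user_id: str) -> str:
    # The old path stays in place as the fallback until the flag hits 100%.
    if is_enabled("new_checkout_flow", user_id):
        return "new"
    return "old"
```

The key property is that rollout percentage lives in data, not code: ramping from 25% to 100% (or back to 0% during an incident) is a config change, not a redeploy.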

Velocity Initiative (2020 – 2025) – Doubled Engineering Productivity

How did we approach it? We started what we called the Velocity Initiative; we wanted to go faster, so we called it velocity. I started this work in 2020. As I will tell you later, I was let go in 2022, but that work has continued, and most of my team is still there doing it very effectively. I'm going to talk about the outcomes, and then I'm going to talk about how we achieved them. The main outcome, which I feel super proud about, is that we doubled engineering productivity. When I was there, we were working with 25% of teams; now it is 100% of teams. For every one of the teams we worked with, we doubled the number of features and bug fixes they can deliver per unit time. This is something that's called flow velocity.

Essentially, it's how many things you can get out the door, whether features or bug fixes, with the same team, same team size, same team composition. In terms of the DORA metrics, which are super important, as we learned in the keynote: we improved deployment frequency 10x, from once every 10 days to once every 1 to 2 days. Lead time improved 5x, from 10 days to 2 days. Change failure rate, even though we weren't focusing on those areas, also improved 3x, and time to recover improved 3x.

In 2020, this is what eBay’s DORA metrics looked like. Really solid medium performer, between one week and once per month deployment frequency, between one week and one month lead time for change, time to restore service, change failure rate. This is what we did for all the teams after we did the first iteration of this work. We basically moved every team at eBay from 35th percentile, medium performers on the DORA metrics to 75th percentile, high performers. That feels really good.
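Two of the DORA metrics mentioned above are straightforward to compute once deploy events are recorded. A small sketch, with made-up data roughly matching the 2020 baseline (the record format is an assumption for illustration):

```python
# Sketch: computing lead time for change and deployment frequency
# from (commit time, deploy time) pairs. Data is illustrative.
from datetime import datetime
from statistics import median

deploys = [
    (datetime(2020, 3, 1), datetime(2020, 3, 11)),
    (datetime(2020, 3, 5), datetime(2020, 3, 15)),
    (datetime(2020, 3, 20), datetime(2020, 3, 30)),
]

# Lead time for change: commit -> running on the site.
lead_times = [(deployed - committed).days for committed, deployed in deploys]
print("median lead time (days):", median(lead_times))  # -> 10

# Deployment frequency: deploys normalized to a 30-day window.
window_days = (deploys[-1][1] - deploys[0][1]).days or 1
print("deploys per month:", len(deploys) * 30 / window_days)
```

Tracking these continuously per team, rather than as a one-off survey, is what made the weekly progress reviews described later possible.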

How did we do it? As I like to say, I had no original ideas through this entire time. It's really just executing the very standard DevOps Accelerate DORA playbook. Exactly as Nicole told us, we focused on identifying and removing bottlenecks. If there was a bottleneck affecting a bunch of teams, we focused on that and released it. Then there was the next bottleneck behind it; we worked on that, and so on. As somebody said, once you solve problem number one, problem number two gets a promotion. Tactically, we substantially reduced build, test, startup, and PR validation times. We invested really heavily in the reliability and comprehensiveness of the common staging environment. We did a ton of automation around upgrading software, around testing, around deployment, and around site speed, which is user-experience latency.

Then we also did a lot of work that was not technical at all, but was really about process. We streamlined a bunch of team processes. We streamlined code reviews. We removed and streamlined what eBay used to call partner signoffs. It used to be true that if I built a platform component or service, then whenever I wanted to upgrade that component or service, I would have to explicitly ask all of the teams that used my software, "Could you please test with my new version?" I would have to wait for all of them to say yes before I could release the next version. You can imagine that doesn't work very well if you want to move quickly. That was one of the things we tried to get rid of.
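One common way to replace manual partner signoffs like these is consumer-driven contract tests: each consuming team publishes the subset of the provider's interface it actually depends on, and the provider verifies those contracts automatically before release. The talk doesn't say this is what eBay built, so treat this as a generic sketch; all names and shapes are illustrative.

```python
# Sketch of a consumer-driven contract check. Each consumer declares
# the fields (and types) of the provider response it relies on.
consumer_contracts = {
    "checkout-team": {"item_id": int, "price_cents": int},
    "search-team": {"item_id": int, "title": str},
}

def provider_response() -> dict:
    # The provider's current (candidate) response shape.
    return {"item_id": 42, "title": "vintage camera", "price_cents": 9900}

def verify_contracts() -> list:
    """Return the list of consumers a new provider version would break."""
    response = provider_response()
    broken = []
    for consumer, fields in consumer_contracts.items():
        for field, ftype in fields.items():
            if field not in response or not isinstance(response[field], ftype):
                broken.append(consumer)
                break
    return broken

# An empty list means the provider can release without asking anyone.
print(verify_contracts())  # -> []
```

The point is the inversion: instead of the provider waiting for every consumer to say yes, consumers encode their expectations once, and the pipeline checks them on every release.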

How We Worked

How did we work? This was unfortunately new, but it was very effective: we collaborated. We had cross-functional leadership. I led from the platform and infrastructure side, and I had a wonderful partner, Mark Weinberg, from the product engineering side. We both led this program together. We had what I like to call an embedding model. I hired really senior individual contributor, architect-y, tech-lead-y people into my team. Then, in my phrase, I would send them out into the provinces to go pair with people in individual areas of eBay, learn their problems, and help them directly where they could.

Then also bring back to the central platform team, here are the problems that we’re experiencing over here in the buying area, or in the selling area, or in the payments area, and so on. Then this was new, but it shouldn’t have been. Platform and product engineering teams really work closely together. When we built something in the platform, we’d say, which teams want to be alpha testers with us? Some number of teams would raise their hand, and we’d work very closely with them on new things that we were building. Communication was super important. We did daily leadership stand-ups.

My partner in crime and I met every single day after lunch without fail. Super helpful to unblock each other and keep things moving forward. We did a weekly team-of-teams meeting where all the teams involved in the program at any given moment came together, talked about the things they were struggling with, celebrated their wins, and had a little friendly competition about teams making improvements here and there. We did weekly deep dives with individual teams, again going and asking, what can we do to make your life better? Then a monthly operating review with executive leadership.

None of these ideas is new. We used the DORA metrics, as I was telling you before, as our outcome metrics, because we wanted to drive faster deployment frequency and shorter lead time for change. As our input metrics, and this grew more after I left, we used measures of developer friction: looking at the pipeline for somebody deploying their software, noticing when things get stuck or bottlenecked, whether process-wise or technology-wise, and then talking to whoever could release those bottlenecks and make things flow better.

As part of that work, we instrumented the entire end-to-end delivery pipeline. We had timing for every step and success metrics for every step, which meant we had a global view of where people were struggling with software delivery. We could also drill into specific business units, specific teams, specific builds, and specific deployments to figure out what we could do to make things better.

Then we just iterated over and over again. This is the Deming cycle, from W. Edwards Deming back in the '50s. He's the one who worked with the Japanese on what became the Toyota Production System and all sorts of great stuff. The cycle is called PDCA, Plan-Do-Check-Act: have an idea, try it out, see if it worked. If it was good, do more of it. If it was not, do something else. Here's how this worked in practice. I would go to teams. Again, we were meeting with pretty much every team every week: "Hi, I'm your friendly neighborhood chief architect".

If I see that you're deploying once every two weeks or once every month, OK, cool. If I told you that you had to deploy your application every single day, tell me all the reasons you can't. That wasn't a challenge; it was an opportunity. They said, "OK, Randy, do you want the whole list or just the top 150? Here are all the things preventing us". They had said these things before, but the previous approach of the platform team was mostly not to think about it. Now I was like, OK, great, you just gave me my backlog. Your impediments are exactly my team's backlog. As you might hope, that fostered a lot of collaboration and partnership from those teams.

Culture and Behavior

In terms of culture and behavior, one of the things that I think was super important was that we made it fun. We made it fun to do this work. We had regular weekly progress on the metrics. Winning is fun. Making forward progress is fun. This is a lot of why we all become engineers. Then again, in that team-of-teams meeting we had every week, the teams would inspire each other to improve, because we're all on the same team here ultimately. This team over in selling improved their code review process or their rollout process. People would say, you've moved your metrics in this great way, what did you do? They'd say, here's what we did, and here's the tool if you want to use it. Or, here are the ideas we had.

Everybody really working together and trying to make the whole place better. In terms of community and sharing: it used to be true that platform teams automated things and wrote tools, and product teams consumed them. That's not terrible, but product teams are engineers too. A lot of these conversations ended with product teams automating their own things: automation around their code review processes, around performance testing (site speed, as I mentioned, and so on), automating accessibility testing, all sorts of stuff. All of these were examples my team didn't touch at all. We just inspired them by having the open conversation.

One team would decide to take the plunge, and the result was shared with everybody. We did regular team demos as part of this team of teams, again really trying to encourage teams to have fun and celebrate their wins. Then, as I mentioned, sharing tools and learnings really broadly across the organization.

As a consequence, we had a lot of partnership with our customer teams. One of the challenges eBay has had over time is a culture of fear. One of the things that was really important for my partner in crime and me was to make it psychologically safe even for teams to say that they were struggling. We needed to help people understand it's actually OK. You deploy once a month. That doesn't mean you're a terrible person; it means you deploy once a month.

Let’s figure out how it can be twice a month, three times a month, 10 times a month, that kind of thing. There’s no terrible, there’s only like where we are, and then can we get a little bit better or maybe even a lot better. Again, everybody doing it together also made that easier. Every team saw other teams having their impediments, having those impediments released, or helped, or whatever, and encouraged everybody to get better. I came from the platform and infrastructure side.

The other aspect of partnership was teams that traditionally would be seen as impediments to flow. Like: I do everything every day, except for the security team. Let me go talk to the security team and see what we can do to shift that left. I do it daily except for compliance and SOX-related stuff. Let me talk with the compliance team. I could do it except I have to run these really long-running accessibility tests. Let's figure out how we can run those accessibility tests offline, or faster, or something like that; and the same for localization. You see the general idea: state the problem, and then we'll figure out together the right place to solve it.

Then, at least for the first part of the time I was there, there was really strong executive support. The CEO was constantly highlighting this work in the company all-hands. My partner and I presented to the eBay board of directors. It was mentioned in quarterly earnings calls. Then, no pressure guys, but we kept hearing, this is the most important initiative in the company and we need to go faster.

Scaling the Initiative

Scaling the initiative. We started out with a bunch of pilot teams representing about 10% of eBay's engineers: maybe 300 engineers out of the 3,000, maybe 40 teams out of the 400. We worked with those teams over a really extended period for the first year. Then we figured out how things were going, and since we had also already removed a bunch of bottlenecks, we could start to go faster. Then we did quarterly cohorts. We would bring in another 10% or 15% of teams in Q1 of a year, another set in Q2, another set in Q3. You'd have these successive S-curves: these teams are starting and then getting more mature, then we start the next set of teams and they're getting more mature, and so on.

Then we expanded quarter over quarter. I'm happy to report that my former team is still doing that work, and now they've touched all the teams, which is fantastic. In terms of automation, that's the other aspect of scaling: one aspect is making it wider, but the other is making it easier. We did regular deployments for every application and service, even ones that weren't actively maintained, which was true of many of those 4,500. Why would you do that? Because if there is a bug, which sometimes there is, or a security vulnerability, which often there is, you need to be able to re-release it in a safe way. We automated what we call the patch pipeline, and we used it for changes that should not be behavior-changing, like security vulnerabilities and dependency upgrades. We have lots of legacy and proprietary APIs, so as we made improvements to them, we automated all of that too. There was a lot of great engineering in that work. It made it really seamless: a lot of teams never had to learn how to do this stuff.
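The routing logic behind a patch pipeline like the one described above can be stated very compactly. A hypothetical sketch; the change-kind names and pipeline names are illustrative assumptions, not eBay's actual system.

```python
# Sketch: route a change to the fully automated "patch pipeline" when it
# should not alter behavior (CVE fixes, dependency bumps), even for
# services that nobody actively maintains anymore.
PATCH_SAFE_KINDS = {"dependency-upgrade", "security-fix", "base-image-bump"}

def release_path(change_kind, actively_maintained):
    """Pick a release path for a change to some application or service."""
    if change_kind in PATCH_SAFE_KINDS:
        # No team action needed: build, canary, and roll out automatically.
        return "patch-pipeline"
    if actively_maintained:
        return "standard-pipeline"
    # Behavior-changing work on a dormant service needs a human owner.
    return "needs-owner-review"

print(release_path("security-fix", actively_maintained=False))  # -> patch-pipeline
print(release_path("feature", actively_maintained=True))        # -> standard-pipeline
```

The value is that the safe-by-construction category flows with zero human involvement, which is exactly what makes regular re-releases of 4,500 applications tractable.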

AI Across the SDLC

The other thing, which happened mostly after I left, but I'm so proud of the team that did it, is that they introduced AI across the entire software development life cycle: starting from the CI process, then expanding right toward production and left upstream into the day-to-day developer workflow. The obvious things you use AI for are code generation, test generation, test data, sure. Legacy code migrations, absolutely. PR summarization and automated code reviews, which help the human code reviewers.

Then maybe not so obvious: using AI to help manage the CI pipelines themselves. When a build fails, analyze why. Produce an RCA if it was really bad. Analyze the test failures. Predictively optimize pipeline efficiency. Then downstream, deployment monitoring and automated rollbacks when things went to the site. Then also helping out the humans: LLM-generated developer support documentation, feedback analysis. There's a wonderful talk by my former team member, Aravind Kannan, at cdCon, where he talks for 25 minutes about all this great stuff. There are probably 25 or 30 entirely independent places where they introduced AI. Really fantastic work.

Mobile Modernization

The other thing I'm really proud of is mobile modernization. eBay was actually really early to mobile apps; essentially as soon as the app stores existed, eBay had iOS and Android apps out there. As a consequence, of course: 15 years of accumulated cruft. Originally, they had to build all their own frameworks, because nobody had them. Now there's SwiftUI; now there's Jetpack Compose. So: remodularizing the architecture, with a lot of work associated with that. Then, release management.

One of the most impactful things the team did was move away from monthly releases. When I returned in 2020, eBay was releasing its iOS and Android apps monthly. Now they can do it in one day. A lot of work went into this. In fact, the most skeptical person about getting to weekly deployments was my mobile release manager. I said to him, let's see if we can get to weekly releases by the end of this year; this is 2021. Let's imagine we started January 1st doing monthly releases. By the end of the year, could we have a stretch goal of doing a weekly release? This guy told me, it can't be done. OK, let's give it a go. You're probably right. Whatever, we'll see. Then four or five months in, say it's April, we tried the first bi-weekly release. That's not so bad, actually. We did a few more bi-weekly releases.

Then we got to July, and the team was like, these bi-weekly releases are going really smoothly. There are a lot fewer changes in two weeks than in a month, so it's a lot easier to figure out what's wrong. Let's give it a go and see if we can do a weekly release. We did one seven months in, and we never stopped. We never once stopped. When I say we, I didn't do the work; that team went from "it's impossible to do that, Randy, in 12 months" to, 7 months in, telling me on their own that they could do it, and doing it. Great work. This is what he said to me when I reconnected with him recently: "Randy, you were kind enough to help me adapt and see the light through air cover and rational small tests of the process". That's how we got fast build and release for mobile apps.

The other thing that really warms my heart when I’ve reconnected with a bunch of people on my team recently, multiple people, not just one, but many people have said, “When we’re having a discussion, we ask, ‘what would Randy do?'” Makes me feel good. That’s great. Problem solved.

Why Did Velocity Not Save the Company?

We're here. eBay is still a household name, still with a flat business. Let's take a really hard look at what we were able to achieve, and maybe why we weren't able to get as much as we would have liked. Again, I'm very proud of all the technical improvements we made. We absolutely improved the lives of every single developer at the company. These were the intended goals. What did we actually get? We didn't get everything, and that's fine. In terms of software development, we did, I think, fairly achieve fast build and test iteration. We did get daily merges and deploys for many applications. That's pretty good. Not for every engineer, but at least for every application.

We got a fully automated test and deploy pipeline, I think that's fair, all the way through to the site, including canary deployments and so on. There's a lot more iteration in production with feature flags. I think we made some good strides on end-to-end monitoring. We made all these improvements, and we should feel proud of them, but why didn't we save the company? I can think of four reasons: strategy and planning, execution and delivery, technology dead-ends, and organizational culture.

Strategy and Planning

Let's start with strategy and planning. One of the challenges of being an early pioneer like eBay is what's called the innovator's dilemma, postulated by Clay Christensen: when you are an innovator in a space and are really successful, it's hard to disrupt yourself. A very small number of companies have done it. Netflix is a great example: it used to ship DVDs; now I hear they do some streaming. It is very difficult to disrupt any business model, but it's particularly difficult to disrupt a really successful one.

If you want to disrupt eBay, here’s how you do it. You don’t try to be eBay everywhere and for every category. You become the eBay of a particular category and do that. You decide, I’m going to be the electronics used store, or I’m going to be the musical instruments used store, or I’m going to be the clothing used store. I just named, without naming them, three or maybe 10 eBay competitors. That’s how you do it if you’re interested. The other thing is learned helplessness. We saw that essentially flat graph, flat in constant dollar terms of the gross merchandise volume. All of the people that have grown up within eBay over the last 15 years have had that as their experience there. That doesn’t mean they’re terrible people, but it does mean that what is adaptive in an environment like that is really being very risk averse.

If you're in this flat, unchanging situation, the correct adaptive approach is to be very risk averse. That's what happens. The other challenge is that the aversion to risk is well earned. One of the things that's really been challenging over the long term is eBay's relationship with the seller community. It seemed sometimes that every time we made a user-facing change, it was met with revolt. When I worked there the first time, from 2004 to 2011, I worked on eBay's search engine. Literally every time we made improvements to the search engine for the buyers, sellers would get mad.

As an example, we introduced spelling corrections. Let's imagine you tried to sell an iPhone, but you spelled it I-F-O-N-E instead of I-P-H-O-N-E. There was somebody whose business model was looking for misspelled iPhones, buying low and selling high. That wasn't an arbitrage model just one person had; hundreds upon hundreds of people had it. Similarly, when we made improvements to the search ranking function to show more relevant items, again, we disrupted business models, because people had learned how to find the really good stuff in there. Digging for the needle in the haystack, they would find those things, buy low, sell high.

Literally, every time we made an improvement, it was disrupting somebody's model. eBay is large enough, and enough of an economy, that every inefficiency in eBay's world is an arbitrage opportunity somebody is already exploiting. Centralized waterfall planning: this is something that's always been true. There's an annual, multi-month, company-wide planning cycle. The way work happens is this: an initiative can only happen if it's approved by the executive team, and it can only get to the executive team if it's big enough to make the list presented to them. If you have a smaller project that doesn't involve tens of teams, it can really only survive as a rider on a congressional bill: you tack yourself onto some other big project, because that's the only way it can get approved.

Execution and Delivery

Execution and delivery. I mentioned before that there's a history of really massive coordinated releases. A bunch of them are complicated, and some of them have cycle times, meaning the end-to-end time, measured in quarters of a year. More than one of them has involved 50 or more of the 400 teams. One example is eBay managed payments. People may remember that eBay and PayPal used to be one company: eBay acquired PayPal in 2002, and they went their separate ways in 2015. As part of the separation agreement, PayPal remained the payment provider for eBay for the next 5 years.

From 2015 to 2020, PayPal remained the payment provider. That made perfect sense. What that meant was there’s a deadline at 2020 where eBay is going to have to learn how to take credit cards. I don’t think the work took the whole 5 years. They started maybe the last 3 years. There were about 2,000 people that worked on that project.

If you do the math, 3 years times 2,000 people is $1.5 billion USD in personnel costs alone. Unfortunately, it ended up meaning slower payments for sellers, because with PayPal you get the money right away; going through the banking system takes a day, or maybe three, to clear. Feature factory: that's a phrase we used to use about it. What's adaptive in an essentially flat business is, if I want to be successful and I can't drive outcomes, I focus on meeting milestones, being really predictable, and doing activity. Rewards for milestones, as distinct, maybe, from customer value.

One of the things that I remember most vividly was when I joined the first time in 2004. I was sitting with Karen Casella, my manager at the time. She worked for many years at Netflix and is now retired. The VP of engineering was up. We had a big all-hands of like 1,000 people, much like this. The VP says, “We should be very proud. We delivered 5,000 train seats to the business this quarter”. You’re like, what in the world is a train seat? A train seat is two weeks of engineer work. Think of it as two engineer weeks, two person weeks. A way to restate what she said is: we did 10,000 person weeks of work. 5,000 train seats is 10,000 person weeks of work. If you do the math, that’s saying we spent $60 million, and it says nothing about the outcome. It doesn’t say we grew revenue by X, we grew profitability by Y, we improved reliability by Z. It says, we delivered this amount of effort.
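The train-seat arithmetic above can be sketched the same way. The implied rate of $6k per person-week (roughly $300k per year at about 50 working weeks) is an assumption backed out from the $60 million total, not a quoted figure:

```python
# "Train seat" arithmetic from the talk: one train seat = two engineer-weeks.
# ASSUMPTION: $6k fully loaded cost per person-week, implied by the talk's
# $60M total rather than any quoted rate.
train_seats = 5_000
weeks_per_seat = 2
cost_per_person_week = 6_000  # USD, assumed

person_weeks = train_seats * weeks_per_seat          # 10,000 person-weeks
total_cost = person_weeks * cost_per_person_week     # 60,000,000 USD

print(f"{person_weeks:,} person-weeks -> ${total_cost / 1e6:.0f}M")
```

The point of the exercise is the one the talk makes: the metric measures spend, not outcomes.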

Technology Dead-Ends

Technology dead-ends. You remember the opening slide about here are all the great things that eBay did in the beginning. They were really great. Really revolutionary stuff. I was so proud to be a part of it at the tail end of that. Also, laggards in some technologies. This happens. For a long time, eBay was generating HTML starting from XML and using XSLT to transform it. That didn’t age well. SOA with shared databases.

For a long time, SOA meant shared databases as opposed to individual databases for the services. In terms of infrastructure, eBay maintained a custom OpenStack fork for a long time, and that got way out of date with the mainline. Then they went to Kubernetes: again, a custom fork, way off the mainline. It’s been a challenge running your own data centers. Hadoop is still the data warehouse. There’s a proprietary JavaScript framework called Marko. Proprietary mobile frameworks until now. Now it’s SwiftUI and Jetpack Compose, which is great. Originally that wasn’t even wrong: in 2006, neither of those things existed. Also, eBay was late to the public cloud, by which I mean eBay does not use the public cloud. Really late to open source. Obviously, eBay is a consumer of open source.

One of the people on my team was the open-source guru or whatever, open-source manager person at eBay. It was going to be me, but then I was like, it shouldn’t be me. Somebody who worked for me was on that. I really am passionate about open source, as I hope many of you are. I wish we saw more eBay open-source projects. There’s not as much stuff that eBay is actively maintaining out in the world, versus a bunch of other companies of similar size and similar name recognition. Pretty late to microservices and isolated databases. Late to continuous delivery. That was what I was there to do. As I mentioned, late to fully automated testing, like end-to-end without human involvement. They’re still struggling to make interface contracts effective. Late to automated canary deployment. That was something that we introduced on my team. Then late to GraphQL.

Organizational Culture

I want to talk a little bit about organizational culture, because I think that is the driver of a lot of these things. People are maybe familiar with this “Accelerate” book. Who has read the Accelerate book? It’s a landmark in our industry. Nicole Forsgren, who gave the keynote, and was very modest about not talking about all the amazing things she’s done. She has given the world the way to do good software. Read this book. It’s a small book. One of the many things that it points out is how important culture is to software delivery performance, and then also to organizational and business performance. She leverages a typology of a sociologist named Ron Westrum, who talks about pathological cultures, bureaucratic cultures, and generative cultures. What happens is, if you have a generative culture, you’re going to be doing really well on the DORA metrics.

If you’re a bureaucratic culture, you’re going to do middling. If you’re a pathological culture, you struggle to be good on the metrics. This is something that really characterizes a bunch of the things that I saw at eBay, and I’d love to see changed. I love the company, I really do. Again, the research that went into the Accelerate book has proven that organizational culture predicts software delivery performance and also predicts organizational performance. If you tell me what your organization’s culture is like, I will tell you that it is way more likely to have this kind of software delivery performance or that kind. Pathological organization, there’s a big culture of fear. Again, I want this to be better, I really do. There’s a big culture of fear, so very highly political. It’s this household name with a flat business situation. Acknowledging failure is seen as rude or even threatening to people.

As a consequence, there’s a lot of empire building for executives. It’s a zero-sum scarcity mindset situation. If I were trying to maximize my success at the company, which I did not, I would try to maximize my span of control over my team’s span of control, and I would try to maximize the size of my team. eBay has an idea that they’re exceptional, and I think that’s not wrong for companies to feel that they’re unique. There’s a reason why the company exists. One of the bad outcomes of that, or at least that I experienced a couple of times, is a syndrome of not invented here. There are lots of industry standard approaches out in the world. Rather than adopting them by default and only then thinking about proprietary mechanisms, the culture is a little bit the reverse of that.

One of the contributing factors, it’s not the only one, is that people tend to stay a long time. That’s great because it’s a fun place to work, but it’s a challenge because in my team of 150, I had 5 people that had crossed their 20-year anniversary. Great people. It’s not about the people, but those people had 20 years’ experience at one place. You see what I’m saying? Top-down waterfall planning.

In many situations, the plan has been set in stone 12 months or maybe even 18 months before. Not a lot of real time autonomy to adjust to market conditions or changes in the environment. There’s a fantastic experimentation platform that is very good from a statistical perspective, but often it’s used either to confirm the ideas that they had 18 months ago about how the feature was going to do, and then in this culture of fear, used to make sure that nothing broke.

The reason why I was fired, the third rail that I touched was pointing out to another colleague of mine that it is not agile to have requirements gathering sprints, then design sprints, then development sprints, then QA sprints, then rollout sprints. That’s what’s called waterfall. In this person’s organization, there was a real culture of terror, unfortunately. We had a bunch of refugees from that team that came into mine, and they could tell some stories. Big empire building from that person.

As I mentioned, faux agile. Unfortunately, there were a bunch of quality issues several years before I had come back. They personally approved all the deployments that their team made for a year. After I was fired, this person was as well. What did I learn? I learned that — this is a reference to Silicon Valley — in order for a transformation to be successful, obviously it needs to be top-down. You need to have executive and leadership support. Obviously, it needs to be bottom-up. You need to have support from people that are actually doing the work at the leaves of the organization. Also, I think the key thing that I would do better if I went back again is middle out, by which I mean engaging peer leaders laterally in the organization and getting them excited about the work as well. Just like the internet, when you see resistance, you should route around it, as opposed to going right up against it.

Unfortunately, my personality is the second, not the first. I think also it’s important to see the whole board. It’s not just important to make technical improvements, which, again, I feel very proud about the ones that we did, and to demonstrate the results there. The really big gap, which I hope is filled sometime, is going upstream into that planning and making it less waterfall-y, less top-down, and so on. Another lesson I learned is that it’s a lot harder to change a 5,000-engineer organization than a 100-engineer organization. I work at a 100-engineer organization, which is called Thrive Market, and I’m very happily transforming that.

Questions and Answers

Participant 1: I had a question about number two, route around resistance. Will that make people feel left out of the conversation? How do you manage the FOMO, which can almost feel threatening? Because in some ways, it feels like they’re being pushed aside.

Randy Shoup: I think you’re assuming a little bit in the question, but there’s nothing wrong with that. I’ll say two things. In a culture of fear, pointing out that somebody could be better can be perceived as threatening. Does that resonate with people? People can be better. If you’re in a culture of feeling very fearful, pointing out, you’re two, whatever that is. You could be three. That can feel threatening to people.

Rather than, in my naive and straightforward way, you should really be three, and I’ll help you be three. I’m here to help you be three. Instead, maybe leave that for the moment and do something else. Is there the possibility that someone would have FOMO and feel left out? Actually, that would be a good thing. Like, you guys don’t want to be helped, that’s cool. That’s all right. I’m going to start over here. This did happen over these 5 years. It’s like teams would see, you doubled the productivity of that team over there. I want a little bit of that. Your question is exactly the way that it should work when it works well.

Participant 2: I think you’re on a really interesting point, because I find that when companies come to this bad place, the mistakes were made 5 to 10 years before. I’m interested: circa 2015, what had to change at eBay to avoid this bad culture? Have you thought through what you would do differently in your first stint?

Randy Shoup: I thought a lot about what I would do differently. To your specific point, I was there from 2004 to 2011 as an individual contributor on the search engine. Then I did other things for 9 years. Then I went back in 2020. I actually don’t know what was there in 2015. This is true at any place. I think your general point, that the seeds of today’s challenges were often sown not a year ago, but 5, 10, 15 years ago, is really right. There’s a reason why there’s a culture around companies. Google has a culture. Amazon has a culture. Netflix has a culture. These things are long-lived. What would I do differently? It’s this talk.

Participant 2: What should the company have done differently?

Randy Shoup: What should the company have done differently? Resolve these issues. As an example, instead of only doing yearly planning. It’s good to do yearly planning, because it’s good to have a plan, but then you should be able to deviate from it. All aspects of this. These are all things that I would change. I’ll just say again, there’s so many people that I really like and respect at the company, and I want it to be successful. I genuinely do. This is maybe my plea to the world. Please help them make it better. I couldn’t.

Participant 3: In my experience, culture tends to be top-down. As you have executives rolling over, sometimes it can be hard to sustain. Is it the middle-out kind of an approach to be able to build a foundation for that culture that would survive executives rolling over?

Randy Shoup: Executive change is a thing. It wasn’t a thing that affected this work, although it totally does other work. Just as a factual matter about this particular transformation, it wasn’t really about executive change. It’s actually more about executive not change. That’s open and honest. That’s what I was trying to hint at. Top-down is a thing. It’s a thing in the world. I’m a top, in my career. I have an S in my title. It can be challenging both as executives change and the new person comes in with crazy new ideas that are totally different, or alternately executives don’t change and they keep the old way of doing it. What’s the Anna Karenina principle? It’s like every unhappy family is unhappy in different ways, but every happy family is the same. It’s that. You want just enough change. You want the Goldilocks situation.
