Your Platform is Not an Island: Embracing Evolution in Your Ecosystem

News Room
Published 26 November 2025 (last updated 10:25 AM)

Transcript

Rachael Wonnacott: My name is Rachael Wonnacott. I’m the Technical Product Owner for the Kubernetes platform at Fidelity International. As associate director for container platforms, I’m engaged with all things developer experience, the application lifecycle, and also supply chain security.

Back in the day, though, I spent 10 years as a hands-on platform engineer. Although platform engineering as a term only really became popular within the last 3 to 5 years, I've been engaged with platforms all the way back since 2013. What that's given me across my career is a unique perspective on how all the different methodologies we've used, with particular reference to DevOps, have ultimately led us towards what we know as platform engineering today. What I've noticed as a theme at these conferences is that when we talk about platform engineering, we often talk about selling the developer experience. While I won't try to argue that that's not important, this particular talk argues that it's not the destination, it's a lever. In fact, what platforms should be doing is delivering business value.

Drawing on my firsthand experience of leading strategic platform initiatives, specifically within the enterprise, I want to share how platform design is only successful when you take into account three things. That’s organizational maturity, organizational structure, and your culture. This talk is going to explore how to balance the tradeoffs between technical experience, developer productivity, and measurable business impact, recognizing that it’s rare that one size will fit all.

A quick little bit of context: Fidelity International is an international asset manager. We were founded in 1969. We operate across 27 different countries. We look after over £950 billion of other people's money for both individuals and institutions. In today's talk, I want to let you know why developer experience matters but actually isn't everything; how to frame platform investments in terms of value to business stakeholders; how to design for integration with the broader software delivery lifecycle; and the hidden risks of treating your platform like a product in isolation. In this talk, I'm going to walk you through how to build a platform for your specific organization.

Your Organization Is an Ecosystem

I’m going to give away the conclusion right at the start of the talk, and that is, your platform is not an island, but your organization is an ecosystem. For those of you who studied biology, you may well be familiar with the following definition of an ecosystem, and that is a group of biological organisms interacting within their physical environment. In the organization, that can be analogous to your people, your various teams, and your organizational structure within which they fit. For me, I quite like this theme of biology, as organizations tend to grow organically, and over time, your team interactions will be subject to evolution.

The question then becomes, do they survive? In a startup, some of you might work in startups, it’s possible that you’re actually just one team, where people are wearing many hats. As your company evolves, you may not have any clear-cut team boundaries, if indeed you even get to have more than one team. In a larger startup or a scale-up, you might start to see teams naturally emerge, but you’ll probably find that these tend towards technologies. I’m not saying that’s a bad thing, but if you don’t have intentional focus on your organizational design, these will end up as silos later down the line. Larger organizations, by sheer virtue of their size, have more possible combinations of teams and structures, and so it’s going to be much harder to retrofit for intentional organizational design later.

The challenges facing smaller organizations versus larger organizations are really quite different, and the very requirement for a platform is typically indicative of you having multiple teams, so you probably don’t really need a platform in a startup, particularly if you’ve got one 10-star full-stack developer wearing all of those hats. This talk focuses on enterprises. I’d like to be really clear about that. In simplest terms, when I say enterprise, all I really mean is a large organization with many teams, but also probably many years of tenure. What happens with many years of tenure is you have a long historical list of previous ways of working and practices, and so your ecosystem is no longer just your people and your organizational structure, but it’s your processes and your maturity.

Organizations that predate cloud quite often have structures that are heavily influenced by their on-premises infrastructure, and that makes sense. We were founded in 1969, so at that time, most, if not all, of our infrastructure was managed in-house and required a highly dedicated, skilled team to do so. At that time, you could have been a specialist network engineer, but as we've moved to cloud deployments, it's no longer enough to simply know network engineering, although it remains an essential skill. A combination of the capabilities of the technology, the skill sets of the people, and the accepted working practices of the time delineated a much more rigid relationship between the application teams and the infrastructure teams. As many of you may remember, probably not fondly, application teams would write the code, package it up as a little present, throw it over the wall, and hope for the best.

Legacy Structures in a Cloud World

In the world of cloud, many of these originally clear-cut boundaries between technologies, or perhaps just layers of the stack, have been brought together and are now blurred. In the previous world, like I've said, you could have one singular focus skill, perhaps network engineering. I typically mention that because my starting life as an engineer was in networks, whereas in the world of cloud, we need to have multiple skills. It also means that even if pockets of the organization have moved towards DevOps, or better yet, towards the product model, there's probably still a legacy organizational structure that's emulating your original technology. These two things are now incongruent, and ultimately, that creates multiple paths for friction.

In my opinion, it's also probably fair to say that those older organizations or enterprises will need to pass through the hybrid model in order to reach the pearly gates of cloud native, if indeed that even is still the goal. We often see these silos as specific to technologies, but due to Conway's law, this trickles down into influencing the design of software and applications. On-premises dependencies for your app will increase the number of interfaces and contribute to what we lovingly call application sprawl, and overly distributed architectures. The more teams that you have, the more people that you're probably going to need to speak to, and unfortunately, that means an increased number of working practices, and it's probably going to be far harder to reach any kind of consensus. If you work in a large organization, I'm sure that will resonate with you.

Why exactly am I telling you about the origins of DevOps when you probably know it already, or better yet, lived through it? Actually, as the grandfather of DevOps himself, Patrick Debois, said, “We’re not actually DevOps separately. We’ve got 15 teams, they’re all running their own stuff, and we really need to start collaborating. All we’ve done is shift the complexity of frustration to that number of teams”. This is perhaps the reality of DevOps at scale, hence seeing it in the enterprise.

Even in the perfect world where everyone moved away from silos, we’ve still introduced complexity and created frustration via other mechanisms. This is how I was feeling on the way here. It’s still how I’m feeling now telling you that DevOps isn’t working. When my boss Dean sees this video, it’s going to be him, because I’m going to quote him publicly for the second time. “It’s a maturity thing too, people now realize not everyone is going to be as hot at everything I was promised originally with DevOps. Or it’s just very expensive to do, as organizations explode with complexity with every team doing it their way”.

The Reality Within the Enterprise

What’s the reality in the enterprise? It’s messy. This isn’t even the full list. I could probably do a presentation just on this topic alone. Most are operating in that hybrid model that I’ve mentioned, part cloud native, part on-prem, all tangled up. That means legacy technology, legacy processes, and legacy team structures, and that’s all very much still in play. They might tell you differently in presentations, myself included, but there’s definitely some of that going on behind the scenes. Cloud applications depending on on-premises services, that’s not very cloud native; that’s technical debt in disguise. And the result: overly distributed systems, fragmented ownership, and friction at every interface. DevOps teams will often clash with the more traditional setup, not out of bad intent, but because they’re simply speaking different languages. When every team does DevOps their own way, you’ve not just got duplication; worse, you have divergence. Multiple ways to do the same thing, some overly complex, very few efficient. It’s not just frustrating, it’s expensive.

Why then do we build platforms? I think it’s quite simple. We’re trying to reduce friction and amplify flow. We want to ship faster, reduce duplication, and cut through that complexity. We want our engineers to be solving business problems, not burning time on boilerplate or waiting on other teams. I’m sure they feel the same way too. We want to do it consistently, so every team isn’t reinventing that same wheel time and time again. Platforms can help us to scale best practices, lower costs, and create a smoother, more predictable delivery pipeline. It’s fair to say that DevOps works exceptionally well for an individual team, especially when you’re able to manage all of your dependencies yourself and not talk to anyone else.

The first team that I worked in was just like that. It went super quick, but not super far. At some point, you will need to talk to another team, and that means you need to go out into your ecosystem. Eventually, your organization probably wants to address the duplication, the complexity, and the cost challenges that I’ve already articulated. In much the same way, you need to consider where your platform sits and how it will operate within your enterprise. That is to say, your platform is not an island.

Container Platform Case Study

I’ve set the scene for platform engineering and a little bit about where developer experience sits within that. Now I’d like to give you a container platform case study from all of the learnings over the last 2 years at Fidelity International with the team that I look after. There was a time before public cloud. If you remember the original definition of cloud infrastructure as being simply somebody else’s computer, then Fidelity was pretty cutting edge. We were doing this back in 2012, 2013. We were hosting a private cloud on physical tin, literally in the basement, much like The IT Crowd, hence the picture, using the open-source technology Cloud Foundry. A little fun fact as well: when I moved from physics to software engineering, my aggrieved oppressor started calling me Jen, which if you know the show is not a compliment. Platform engineering has become extremely popular, but as I’ve already said, it’s not a brand-new concept.

PaaS, or platform as a service, was popularized by the company Pivotal with the fantastic work that they did with Pivotal Cloud Foundry. When we started our foray into cloud, that is actually the product that we were using. We started out with the commercial offering, and then we moved on to eventually migrating to a fully open-source platform, still on physical tin. This platform did successfully reduce cognitive load, and was pretty popular with our developers. We wanted it all. The promised cost savings of public cloud, that reduced infrastructure responsibility that you get with consuming an IaaS. We really wanted every single team to be doing the DevOps model and operating as product teams. It wasn’t too much to ask. How did we try and achieve this? We did this by automating the provisioning of cloud accounts in AWS that were essentially mini data centers for each team, where each team is an application workload team. We implemented a shared responsibility model, and we took that concept from AWS themselves.

That relationship defined the interactions between the workload team and the platform team, whereby the platform team was to design, deploy, and support all of your core infrastructure services. Things like your routing, your DNS, we had internet egress via proxy, connectivity back to on-premises, all of that lovely boring networking stuff. The workload team were responsible for any additional resources that they deployed, their application code and their configuration, their data, but most crucially, the uptime and the performance of their applications. This actually worked really well for some teams. It’s still working really well. Again, only really for some teams. I mentioned earlier that large scale organizations will likely have a cross section of maturity, and Fidelity is certainly no different. What we found is that it really isn’t one size fits all.
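The shared responsibility split described above can be sketched as a simple lookup. This is purely illustrative; the concern names are examples drawn from the talk, not Fidelity's actual service catalogue.

```python
# Illustrative sketch of the shared responsibility model described above.
# Concern names are examples from the talk, not an actual catalogue.

RESPONSIBILITIES = {
    "platform_team": [
        "routing", "dns", "internet egress proxy",
        "on-premises connectivity", "core infrastructure support",
    ],
    "workload_team": [
        "additional resources", "application code", "configuration",
        "data", "application uptime", "application performance",
    ],
}

def owner_of(concern: str) -> str:
    """Return which team owns a given concern under the shared model."""
    for team, concerns in RESPONSIBILITIES.items():
        if concern in concerns:
            return team
    raise KeyError(f"unowned concern: {concern}")
```

Making the boundary this explicit is the point of the model: when an incident happens, there is no debate about whose pager rings.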

This is directly from an architecture review board slide that I presented when we were pitching the requirement for a platform, and that’s why it’s in the Fidelity cover. This quote on screen is from the head of a business technology team, who has quite a lot of sway within the organization. What he came to me and said was, “I would love to focus on core business delivery and not worry too much about the underlying infrastructure. Not because I don’t want to, but because I would rather like the experts to manage it and I manage my business applications”.

I was a little bit more junior at this point and very purist in my approach, and so, I thought he was talking rubbish. I was like, we should just continue. We’ll evangelize. We’ll let them do DevOps. As we started to do the research and collect some data within our organization, we found that our application teams were only spending 10% to 20% of their time writing code. The rest of that time was running infrastructure, and arguably running infrastructure badly. I’m like, ok, maybe there is a reason for us to do something here.

That is actually me in 2018 at Cloud Foundry Summit, Basel. Why exactly am I showing you a screenshot of a bright-eyed younger version of me? While it feels slightly egregious to be quoting myself, what I want to show is that we’ve come full circle. While we were looking to move away from our on-premises infrastructure, there were still many elements of that PaaS platform that our developers really liked. Given that my customer base for my new cloud-based platform were those developers, I needed to understand what they were interested in. There are limitations to Cloud Foundry. I could spend a whole talk just about that migration from Cloud Foundry to Kubernetes.

The main one that we suffered from was that it was quite opinionated and it wasn’t great for data-intensive applications. Fortunately, during the 12 years that we were running this Cloud Foundry platform, there was another technology gaining popularity within the CNCF community. I am the technical product owner for the Kubernetes platform. Of course, that technology is Kubernetes. To conclude, we want the benefits of public cloud, but we also want the benefits of a managed platform. What I was saying here back in 2018 is that we really only want the developer to think about the applications and the data, and that aligns with what that head of the business technology was saying. We want to abstract all of those layers of the stack that you see below.

New Cloud Operating Model Analogy

We presented this new cloud operating model. Again, this slide comes directly from an architecture review board. I just want to illustrate the conversation that we will have with the Kubernetes hotel versus the public cloud house. What we’re recognizing here is that there’s a fork in the road, and developers should have a choice. Do they want to go to the Kubernetes hotel or do they want to go to the house? The reason that we’re using this analogy is that it’s much like going to a hotel in the real world. The service is given to you.

You don’t get to choose the color of your bed sheets or necessarily what’s on the menu in the restaurant, but you get delivered a really high-quality service and you enjoy the experience. Versus when you buy a house. Hopefully some of you own houses. You’re proud to own houses and you’ve enjoyed decorating them. Maybe you have purple walls. If you were born in the ’80s, perhaps you have fur on your sofa. I’m not here to judge. If there’s a storm and your roof comes off, it’s absolutely your responsibility to fix that. No one’s going to come and do it for you. We were saying, do you want pretty much everything done for you from an infrastructure perspective, which ultimately means less creativity and dictated services, but no overhead? Or, do you want the full creativity and autonomy of public cloud, but you would need to accept the responsibility for assuring those workloads and your backend infrastructure?

As I reflect on 18 months in this role, I’ve started thinking in a way that might be argued to be more philosophical than perhaps it is technical. What I’ve come to realize as a product owner is that your customer will never be truly happy. When I’ve presented at conferences, many people have asked me afterwards, did we release our MVP too soon? My answer is no. Defining the minimum part of minimum viable product is always going to be the hardest part of your journey. My learning from this is that more important than defining it is articulating it and making sure that those expectations are clear between you and your customer base.

The more features that you try to predict ahead of time, the more you risk building something that your customers actually don’t want. The more minimal your MVP, the more likely your customers will see it as a motel, not a hotel. That’s the situation I was in. The feedback you get from dissatisfied early adopters is ultimately what drives the most business success. Or more specifically, how to build the best developer experience. If you want to be a product owner, prepare to be disliked. I’m not making any friends. The lesson learned? Be really, so very clear about which features are included in your MVP. While you might sell the vision of the hotel or whatever else to your C-suite and your senior leadership, be really, so very clear that version one may be quite far from it.

Let’s look back at our platform journey through the lens of the hype cycle. Hopefully you recognize this curve from the Gartner Hype Cycle, but my boss Dean and I have coined this the Hotel Hype Cycle. We managed to do the full curve in the space of 12 months, which was a bit of a roller coaster and also quite stressful. I’ll talk through it now. First, the trigger. That was moving from Cloud Foundry to Kubernetes. Everyone was excited. New skills, greenfield engineering, and forecasts of lower costs through standardization. Then came inflated expectations. This will solve all of our app problems, yet no one wants to talk about the people challenges, but that’s a whole talk in itself. Unlike Cloud Foundry, Kubernetes is not a PaaS out of the box. CF was very opinionated and tightly scoped, which actually made life quite a lot easier for both engineers and operators. When we launched our MVP, reality hit, and we hit the trough of disillusionment.

Developers still needed infrastructure knowledge, when we’d somewhat sold the vision that they wouldn’t need any; they would need at least a baseline understanding of Kubernetes. Integration with other legacy services across the organization, because they weren’t designed by us and didn’t always have APIs, was a little bit clunky. As a platform team, we realized you can’t offer a true PaaS without introducing complexity and blurring those responsibility lines. We also had to shift how we worked. Since we had been a purist DevOps team that rarely interacted with our customers, we struggled a little bit with the complaints that we were receiving during the trough of disillusionment.

As I’ve clicked on screen, the features are actually what drove that enlightenment. Now that we’ve been fortunate enough to deliver on some of these features, we are seeing an uptick in that experience from our developers. That turning point is listen to your customers. Don’t expect to make any friends. Take it on the chin and get engineering. Today, I’d like to think we’re well into the enlightenment phase as we’re building towards our thinnest viable platform. I suspect our developers may never be completely happy.

Interestingly enough, this reminded me of the very first definition, I don’t think there was an earlier one than this, from Evan Bottcher back in 2018 when he wrote the blog, What I Talk About When I Talk About Platforms. He introduces a platform as a digital platform with a foundation of self-service APIs, tools, services, knowledge and support, which are arranged as a compelling internal product. It’s not on screen, but he also goes on to say, autonomous delivery teams make use of the platform to deliver product features at higher pace with reduced coordination. The emphasis on reduced is important. I think what quite often gets forgotten is that the platform is also knowledge and support, which speaks to a challenge I’ve mentioned my team had previously, as they were quite hands-off with customers. Something that we probably could have articulated better to our customers was that teams operate with reduced coordination, not no coordination.

The Application Lifecycle

Let’s look at this in a little bit more technical detail. Kubernetes is hosting containers. That’s the little bit of the application lifecycle that I focused on. I recognize it’s not the full software development lifecycle. Why was our MVP so disappointing to our users? While we’d committed to hosting Kubernetes in public cloud, the vast majority of my customers were in fact from that on-premises Cloud Foundry platform. The expectations were largely dictated by Cloud Foundry. Cloud Foundry actually did quite a lot. It handled the build image, scan and store image, deploy image, and run image stages. What does Kubernetes offer? Suddenly not looking so great. Particularly when senior leadership assumed that the container orchestration of Cloud Foundry and the container orchestration of Kubernetes were like for like, I was actually offering quite a disappointing service to my developers. It’s no surprise that the MVP didn’t meet their expectations, even though I felt that I’d covered all of the core engineering requirements.

If we consider the application lifecycle to be commit, test, build, store, deploy, and run, and if you think about my role as product owner, what I really have to care about is running the image. I just look after Kubernetes. I don’t look after anything else. That is treating my platform as an island. I’ve said at the start of the talk, that’s not what you should do. To bring this back to the title of my presentation, I needed to start talking to all of the other teams that were involved in the earlier parts of this application lifecycle to figure out how we could do this together. This is quite logical from an adoption perspective, as it doesn’t really matter how beautifully my platform scales if no one ever onboards to the platform. We really need to look left of the application lifecycle.

Firstly, what is the ecosystem within which we’re building? Due to Fidelity’s existing organizational structure, many of the tools responsible for the stages to the left of run image were actually owned and operated by different teams. Each of these tools had very different levels of self-service and service management. Given that our operating model dictates that people want a hotel-like experience from Kubernetes, it’s not really that surprising that they then want a hotel-like experience with everything to the left. We hadn’t yet achieved that. It wasn’t necessarily within my domain to influence. I needed to go and make some friends in the business. As I’ve already said, I’ve not really been making any friends. It became very clear that, due to the variety of different working methodologies, the existing processes for engagement, and the varying levels of self-service for tooling, the developer experience left something to be desired.

Ultimately, developers want a smooth and consistent experience across the application lifecycle, a golden path. I like this definition from Kasper: any procedure in the software development lifecycle that a user can follow with minimal, not zero, cognitive load, and that drives standardization. To do this, we need to design along the entire lifecycle. Think about the hops that I’ve already shown you on screen. Link each of those stages together. To do this, you can take a dynamic configuration generation approach, or even use straightforward GitOps.

At Fidelity, I’m actually really fortunate that we’d somewhat paved the way with our cloud automation team, with those accounts and the mini data centers that I mentioned. We’d already well socialized the concept of using pull requests as an entry point to the platform. This approach also aids us in a regulatory environment, because we can enforce a point of review. Cloud security, if they want, can do that approval without my team needing to be a middle person. This approach also helps us deliver what’s going to end up looking like quite a distributed system at the back end, but that should always be transparent to our users.

The developer entry point is simplified using declarative config. Developers specify a desired state and let the configuration management tool, or in this case, our platform orchestrator, determine how to deploy it. One config connected to multiple possible pipelines. We do this with FIFO queues. The deployment mechanism should always be, as I said, transparent to the customer. They shouldn’t care which pipeline is deploying which service.
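As a rough illustration of the orchestration pattern just described, here is a minimal sketch in Python: one declarative spec comes in through a single entry point, and the orchestrator fans the desired state out to per-service FIFO queues. The spec keys, pipeline names, and `submit` function are all invented for illustration; this is not Fidelity's actual orchestrator.

```python
from collections import deque

# Hypothetical sketch: a platform orchestrator takes one declarative,
# desired-state spec and dispatches work items onto per-service FIFO
# queues. Developers only see the single entry point, never which
# pipeline serves which service.

PIPELINES = {"namespace": deque(), "database": deque(), "storage": deque()}

def submit(spec: dict) -> list[str]:
    """Translate a desired-state spec into queued pipeline work items."""
    dispatched = []
    if "namespace" in spec:
        PIPELINES["namespace"].append(("create", spec["namespace"]))
        dispatched.append("namespace")
    if "database" in spec:
        PIPELINES["database"].append(("provision", spec["database"]))
        dispatched.append("database")
    if "storage_gb" in spec:
        PIPELINES["storage"].append(("allocate", spec["storage_gb"]))
        dispatched.append("storage")
    return dispatched
```

The design choice worth noting is that the caller gets back only *what* was dispatched, not *how*: the queues stay an internal detail, which is exactly the transparency property described above.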

It’s more than just the lifecycle, it’s also about features. Any great platform isn’t really about the technology, it starts with user needs. Each organization is likely quite unique. With any good product, your features should be based on actual demand, but also actual need. Remember that your developers don’t always know what they want or necessarily what they need. Here’s the thing. The platform teams don’t actually need to build everything from scratch. Specialized services, internal or external, can handle the deep technology.

If you’re familiar with Team Topologies, those might end up being called your complicated subsystem teams. The platform’s real job is integration. I think this diagram from the CNCF, I believe it was the cloud native platforms working group that put it together, shows that it’s that bridge between your capability and service providers, and your product and application teams. In that role, platforms do more than connect, they amplify. They embed best practices, drive consistency, and ensure that things like security, performance, and cost don’t get bolted on, they’re actually built into the platform.

How do we bring together all of these distributed services? As I’ve said, there can be a variety of services on the platform that are all in fact delivered by different teams. These are all presented to the customer by that same declarative config, or your single entry point. This allows us to standardize the way that teams interact with all of their services. Unlike that earlier application lifecycle where they had to talk to multiple teams, they now only have to talk to one interface. We’ve done this using GitOps, so all namespace changes are driven by that customer pull request. All of our platform component changes are driven by pulling in commits directly from the CNCF. We have done our continuous integration in a pure, continuous fashion whereby everything comes in immediately.

Then there’s a talk I gave at KubeCon about how this is starting to cause us some problems, so check that out if you’re interested. This is actually really nice in the enterprise, because you can enforce a point of review. This is only made possible by an abstract description of your workload. To give you just one example, perhaps you have microservice of name X, dependency on database type Postgres, please not Oracle, with persistent storage of quota 100 gig.
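That abstract workload description from the example above can be made concrete as a small typed spec with validation at the point of review. This is a hedged sketch: the field names, the `WorkloadSpec` class, and the allowed-database policy are invented for illustration, not the real platform schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the abstract workload description in the
# example above: a microservice named X, a Postgres dependency, and
# 100 GiB of persistent storage. Field names are illustrative.

ALLOWED_DATABASES = {"postgres"}  # "please not Oracle"

@dataclass
class WorkloadSpec:
    name: str
    database: str
    storage_gb: int

    def validate(self) -> None:
        """Policy checks a reviewer (or automation) enforces at the PR gate."""
        if self.database not in ALLOWED_DATABASES:
            raise ValueError(f"unsupported database: {self.database}")
        if self.storage_gb <= 0:
            raise ValueError("storage quota must be positive")

spec = WorkloadSpec(name="x", database="postgres", storage_gb=100)
spec.validate()  # an Oracle spec would be rejected here instead
```

Because the spec is declarative, this kind of policy check can run automatically on the pull request, which is what makes the enforced point of review cheap in a regulated environment.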

How to Apply This to Your Enterprise

I’m talking about Fidelity, but how might you apply this to your enterprise? On the screen, you can see that I’ve labeled multiple teams. They deliberately don’t have labels of which team they are because it’s somewhat superfluous to my argument, and each of your organizations will look slightly different. These teams could be security, it could be tools, it could be an IaaS provider, or whoever else you have within your organization. Map out all of the hops that your developers need to make as part of your application lifecycle, and then map the different teams who own the services and technologies. If possible, reduce the number of hops and handovers between teams. This shows our intermediary state, which is where I was about nine months ago when I presented this same slide at PlatformCon. We now have three teams rather than four, so we’ve slightly reduced the number of people.

Crucially, there’s only one handoff between each of those teams. You need to understand the different interaction expectations. These teams that are involved might be expecting a different way of working. This can be both the process for contacting a human, but it can also be how technology itself interfaces with a system. You will need to bring the people together. Please don’t underestimate this. I definitely did. It’s going to take strong leadership. It’s going to take patience. It’s going to take empathy, and a willingness to build a lifecycle that is most suitable for your application developers, not for your desires to play with technology, to build your CV, or for your interests.
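The mapping exercise above — assign each lifecycle stage an owning team, then count the handovers a change passes through — can be sketched in a few lines. The stage and team names below are illustrative, not the actual teams from the slide.

```python
# Sketch of the mapping exercise: a handoff occurs wherever two
# consecutive lifecycle stages are owned by different teams.
# Stage ordering: commit, test, build, store, deploy, run.
# Team names are illustrative.

def count_handoffs(stage_owners: list[str]) -> int:
    """Count transitions between differently-owned consecutive stages."""
    return sum(1 for a, b in zip(stage_owners, stage_owners[1:]) if a != b)

before = ["app", "tools", "tools", "security", "platform", "platform"]
after  = ["app", "tools", "tools", "platform", "platform", "platform"]

print(count_handoffs(before))  # four owning teams across the lifecycle
print(count_handoffs(after))   # fewer teams means fewer handoffs
```

Even this toy version makes the argument visible: reducing the number of owning teams from four to three removes a handover, and every removed handover is one fewer interface where working practices can diverge.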

If you’re in an enterprise where you have that cross-section of maturity, you might be building something that you don’t believe is totally technically elegant, but perhaps it is driving the most business value. To do this more successfully, can you sell the developer experience, which is what we’re all here to talk about? See where it’s possible to standardize the human interactions. I do still believe that the biggest challenge will often be standardizing your interfaces between systems, which when you’re using third parties may not always be possible. Instead, coming back to my previous point, consider if you can make it that single entry point to at least abstract some of that confusion away from your developers. As a reminder, consider where you can implement declarative config, dynamic configuration management, or perhaps even GitOps.

What Helped Support the Outcome?

This is a picture from a blog by Martin Fowler, and we all know that when something’s made it to his blog, we need to take notes. Team Topologies is a fantastic book that I’ve been talking about since 2019. It’s really nice to be able to show you that we’ve made real organizational change. This QR code will take you to a presentation that I gave at Fast Flow Conf in 2023 that was entitled, Reorg from the Ground Up: Scaling Team Topologies for the Enterprise. That talk was quite aspirational. It was a bit in theory. I was taking a bit of a career risk to try and persuade senior leadership to do the reorg with no real hopes of it actually landing.

Today at QCon, I can tell you that we have gone through the reorg, and we now have something that looks a little bit more like this. I run a platform team. I had to start thinking about how I could make this application lifecycle that you’ve seen relevant to my platform team. The biggest blocker that I was finding was that each of these steps was owned and managed by a different team that had a different way of working, a different way of thinking, and different expectations. Instead, we formed a platform group. Now each of the leads for these stages all work together, and we see each other as a team. We’re not fighting with each other. We’re actually incentivized to come to a common solution. It means that we’re providing a better experience as a platform group, not as a platform team. I’ve popped this on the screen, and for the first time, I’ve named those teams. It’s a little bit of a shout-out to those back at Fidelity.

Our platform group is now called Developer Platform Engineering. You might have spotted that in my working title as Associate Director. We have tools, security, AWS, Azure, portal, which is Backstage, and K8s all living together as one horizontal team. That means that my leads and I are incentivized to work with one another to provide something that fits into Team Topologies.

We’ve gone one step further. We recognized that as the platform team, we were infrastructure specialists. We weren’t application specialists. When developers came to us to ask for advice, quite often the answer was, I’m going to get back to you on that one, and some quick Googling on the side. We wanted an enablement function with people who better understood application-side specifics and could elevate our organization. We’re calling that the enablement group. What does that look like in practice? We have platform advocates. We have very highly skilled early adopters. We have communities of practice. What I will say about why this has been really successful for us is that these people have long histories of experience as application developers. They know what the pain points are. They understand the problems. They’ve lived and breathed that, either in a successful DevOps team or an organization elsewhere. Please don’t have an enablement group that’s a bolt-on or a backseat driver. It will not go down well. That could be a talk in itself.

What’s Next?

What’s next? I don’t think that we’ve conquered Team Topologies. In my view, it doesn’t really matter how you relabel your teams if you’re not living and breathing the interaction modes. Think less about the names and more about the ways of working. For it to be successful across the organization, we really need other areas of the organization to truly restructure. That means culture and ways of working.

If you’ve ever been in charge of organizational change, you know that this won’t happen overnight. We need other groups that sit with us to also look towards being platform groups. We’re currently working with our data services function, who now sit under our same leadership in engineering. We’re in that forming, norming part of the cultural shift with what was previously a separate department. They’ve never worked in this way. We need to be patient and empathetic as they get on board. We also operate an enterprise technology services department, which unfortunately is a little bit separate from the business, although I’ve promised myself I will stop calling them that. Until they understand the language that we’re using, we’re not all on the same page.

What Makes a Platform Truly Valuable?

What makes a platform truly valuable? It’s actually only in that cross-section of the developer experience, your business outcomes, and your organizational fit. What do I mean when I say business outcomes? Things like a faster time to market, a lower marginal cost of delivering new features, reduced coordination overheads and increased team autonomy, and an improved risk posture with fewer blockers to delivery.

Finally, probably the one the CEO cares most about is an ability to out-innovate or out-execute your competitors, because ultimately, we’re a business and we want to survive. What do we mean when we talk about developer experience? Things like popular modern tooling, minimal to no paperwork, and again, reduced coordination overheads and increased team autonomy. Security and compliance handled for you by the platform. Observability out of the box, and features ready at the time of onboarding. This list could be much longer, but the reason I’ve called these out is that these are the ones our own developers were calling out for the most. That one at the bottom, features ready at the time of onboarding, is where that expectation gap lay in terms of the MVP.

Engineering Maturity – The Elephant in the Room

If it’s that easy, why am I giving you a talk about it? I think we need to address one small elephant in the room: engineering maturity. Because if you’ve spent time in both engineering and leadership circles, you will know that there is a tension here. Developers thrive on autonomy, the joy of building. Engineering isn’t just a job, it’s an identity, and one that I held for 10 years. We want tools that empower us, not processes that slow us down. The business doesn’t just value freedom, it values alignment. Predictability, compliance, and cost control simply aren’t nice-to-haves, they’re non-negotiables when you operate in a regulated environment, particularly when you operate at scale. In between these truths lies your humble platform team, trying to reconcile freedom with consistency, craft with compliance, and autonomy with accountability.

This is really hard. It’s exactly why developer experience can’t be the end goal, because optimizing purely for DevEx without considering the system around it will lead you to a local maximum. You’ll make the developers happy right up until the point where things no longer scale, stay secure, or support the business. A platform actually isn’t just a product, it’s a system inside a larger system. To succeed, it has to respect both sides of this tension. This is exactly why platform engineering can’t just be about developer experience, it has to be about balancing experience with outcomes. Designing not just for what is good, but for what actually drives business value for your ecosystem.

There’s one more small, awkward thing I’d like to mention. There is a distribution of engineering talent. What you see on screen is taken from 5-Minute Insights by PwC. Many HR models will assume a normal distribution of talent. This is what PwC calls the forced distribution. It’s based on the work of Gauss. It suggests that most employees fall in the middle, with fewer high and low performers. You’ll often see this bell curve applied in performance reviews, regardless of sample size, but I’m not going to take on that conversation right now. While this might be valid statistically, if a little blunt, in practice it only compares people within your organization. It doesn’t give you any indication of where you sit in the market. What’s the market reality versus your ideal distribution?

If we were to plot all engineers globally, of which there are a lot, you do have that sample size. We would see a bell curve. Where that bell curve sits is shaped by culture, pay, and hiring practices. Great engineers tend to concentrate where the conditions are best, much like animals in an ecosystem. Think competitive salaries, strong technology cultures, or bleeding edge problems. Think big tech or high growth startups. Enterprises, especially those with lower pay or legacy tech, may see their average shift left. This isn’t a moral judgment, it’s a structural reality.

While the global distribution might be normal, your internal one might not be. That matters when you’re building platforms for your developers. I just wanted to call out that the reason the green line here is flatter is partly to indicate that in a startup you likely have fewer people, so your variety of performance is probably smaller. That’s down to your hiring practices. This brings us to the real point. Maturity varies, and in large orgs it varies a lot. If your platform is built as a product, your users are those internal developers with a wide range of skills and expertise. You might be selling your platform as best practice as a service, but are your users ready for that? The danger is that you end up designing for your lowest common denominator, and if you’re not careful, that’s who your platform will optimize for.
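The contrast between the broad global distribution and a narrower internal one can be sketched numerically. The numbers below are made up purely for illustration: a hypothetical "market" with a wide spread of ability, versus a small, selectively hired startup with a higher mean and a tighter spread.

```python
import random
import statistics

# Illustrative only: hypothetical talent scores on an arbitrary 0-100 scale.
random.seed(42)

# A large "global market" of engineers: wide bell curve.
global_market = [random.gauss(mu=50, sigma=15) for _ in range(10_000)]

# A small startup that hires selectively: higher mean, narrower spread,
# and far fewer people, so the curve looks flatter when plotted.
startup_team = [random.gauss(mu=65, sigma=5) for _ in range(30)]

print(f"market:  mean={statistics.mean(global_market):.1f} "
      f"sd={statistics.stdev(global_market):.1f}")
print(f"startup: mean={statistics.mean(startup_team):.1f} "
      f"sd={statistics.stdev(startup_team):.1f}")
```

The takeaway is the same as the slide’s: your internal sample tells you about variance inside your walls, not about where your organization sits on the global curve.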

Key Takeaways

Platform engineering is more than building golden paths or internal products. It’s actually about shaping the environment where engineering happens. Abstracting complexity, not to make things elegant, but to make things possible at scale. This usually requires far more focus on culture and ways of working than it does on technology. When we design platforms in isolation, or as an island, disconnected from delivery pipelines, team structures, or business objectives, we create beautiful tools that nobody uses, or worse, tools that slow us down. When we treat our platform as part of an ecosystem, as a contract with our organization, and as a force multiplier for business goals, that’s when it becomes a strategic advantage. No, your platform is not an island, it’s a bridge, connecting craft to compliance, delivery to direction, autonomy to alignment. It’s only when we embrace the full context that we turn our platforms into engines of real, sustainable value.

Questions and Answers

Participant 1: You’re talking about a platform engineering team, how many people should a platform team have? The second question is, you can’t expect them all to be unicorns, knowing everything. What subject matter experts do you think should be in such a team?

Rachael Wonnacott: We actually started with four engineers, but the reason this was possible is that two of them were 10-star. Absolutely fantastic. You’d put them in a room for a day and they’d deliver what another team could deliver in a month. If you’re going to go really small and you want to go really fast, consider investing in some people that have prior experience. We at Fidelity hadn’t done Kubernetes previously, so we couldn’t have done that on our own. We made that investment to have two 10-star engineers come in and share some of that knowledge.

The two other engineers in that team were mid-levels with high potential. People that I saw were really enthusiastic, wanted to learn, and wanted to get on board. It’s great to elevate their personal careers, but it also means that when we eventually rotate out the contractors, because they’re expensive, we’ve got two people that have that knowledge. Over the course of the two years, we’ve expanded to a team of 15. That is an absolute hard limit for me, and that includes both myself and my BA. We have 13 engineers. You may be familiar with the work of Dunbar and Dunbar’s number, which is 150; he also talks about smaller pockets of trust.

If you have more than 15 people, you don’t know each other well enough personally to have that trust. If I get called out at 2 a.m., it’s really helpful if I trust that you’re going to tell me that you’ve made a mistake. We also operate internationally. We have developers and engineers in India, China, and the UK. I have about five engineers in each region. If I start to introduce any more engineers into that pool, we’re actually creating three mini teams, not one follow-the-sun model. Humans, we identify with people who look like us, who sound like us, who have the same values as us. A lot of what we’ve done with the team for productivity has actually been about encouraging that trust building. Flying people over to meet one another, game days where we solve problems, and trying to build some kind of traditional two-pizza team.

In terms of the maturity level, we have gone for a senior approach to start with, like I mentioned with the contractors. Now that we’re running towards that TVP plateau, I really want to bring in some junior members of staff. I started my career as a junior developer after leaving physics, surrounded by some of the best. Through extreme programming principles like pairing, I got up to speed really quickly. When I’m hiring for those junior developers, I really don’t care what their background is. They don’t need to have studied computer science, but I want them to be really enthusiastic about learning, because the best way to get started is to get hands-on.

Participant 2: You made a comment about if you’re not careful, then the platform might end up being optimized for the lower-performing developers. Why did you make that comment?

Rachael Wonnacott: I can give you a case study. As part of our migration, we have a hard cutoff of December, otherwise we’re going to incur a very large hardware cost. We need to have all of our developers off Cloud Foundry by Christmas. Unfortunately, this means there isn’t as much time as some applications would need to re-architect. To move them from on-prem to the cloud, we’re going to have to make some compromises, or you might call that a lift and shift. To do this, we will have to build some features that are not best practice to enable them to migrate. While I’d like them to be temporary solutions, what we’ve done there is we’ve built a platform for the lowest common denominator.

 
