Transcript
Vanderbijl: Before we dive into our journey from on-prem into the cloud, I want to first tell you two short stories. The first is about Sisyphus. Sisyphus was a Greek mythological character who was forced by Zeus to roll a boulder up a hill for eternity. Every morning, he’d wake up and he’d have to push that rock up the hill. He’d get as far as he could during the day, he’d go to bed, he’d wake up in the morning, and that rock would be at the bottom of the hill again. As a software developer, I actually feel a lot like Sisyphus sometimes. Every Monday, we wake up, we go to work, there’s our Jira board with all of our tasks for our sprint. We’ve got all the stuff to do. We work through them during the week, during the course of the sprint. At the end of the sprint, we’re getting excited because we’re going to release that. It’s going to go to our customers. Brilliant. We’re done. Our customers love it.
Then I wake up on a Monday morning and I’ve got another massive stack of tasks to do. Unlike Sisyphus, I don’t see that as being an exercise in futility. I find that to be really exhilarating, because it means I never have to worry about being finished. As a software engineer, software is never done. You always have the opportunity to continue. You always have the opportunity to enhance what you’ve done, and to make it better, and to change over time.
The next little story I’d like to just mention quickly is that of Amazon. In 1994, Amazon started, and it was Earth’s biggest bookstore, but they couldn’t make any money for years. After a while, they decided they’d try selling other things. If we’re an e-tailer and we can sell books, maybe we can sell other things. They started selling other stuff, but they still couldn’t make any money, until some bright spark realized that they had a whole bunch of extra server capacity that they weren’t using, and that they could sell that. AWS was born.
AWS went on to revolutionize our industry and now accounts for 74% of Amazon’s operating profit. Only 26% of their operating profit still comes from their traditional business. What I find interesting about this is that it tells us that what we know to be true today is not necessarily what is going to be true tomorrow. As software engineers, we should take advantage of the fact that things always change, and we should expect that things always change. We can use that change and that evolution of software to our advantage.
What Is MJog?
Let’s talk a little bit about what MJog is. I work for our parent company, Livi. Livi is a healthcare services provider. Livi provides online GP appointments. We have GPs and we provide software to allow GPs to have appointments with patients. We do that with the NHS and privately as well. A couple of years ago, we bought out MJog. MJog is a patient relationship management system. It allows GPs to be able to communicate with patients. We can do everything from batch messaging to questionnaires, so things like, do you have asthma? Do you smoke? How many times a day do you smoke?
All the way through to appointment reminders: “You’re due to see your GP tomorrow. Don’t forget to come in”. One of the important aspects of our application is the fact that we can integrate with EMRs. EMRs are electronic medical record systems. EMRs are where all of your patient data is stored. In the UK, there are basically two primary EMRs, but outside of that, there are loads more. Our application integrates with two of them.
Crossing the Chasm
We’re going to talk about how we migrated our legacy application that sat on-prem in GP surgeries into the cloud. We’re going to talk about five aspects of that migration. The first is, we’re going to talk about how we decided to do the migration in the first place. We’ll talk a little bit about how we designed our architecture. We already had an on-premise architecture, but how did we modify and change that when we moved into the cloud? The third thing we’re going to look at is, having designed our architecture, what choices and decisions did we have to make to then implement that architecture cloud native, or cloud native-ish.
We’ll also talk a little bit about observability and how we applied that to all of the code that we wrote, particularly in the cloud, and the advantages that that gave us. The last thing we’ll do is we’ll dive a little bit into how we defined requirements when we already had an application that did things. We had to match the functionality of the existing application in the cloud. We found that working out the requirements for that and defining that for our engineers wasn’t as straightforward as we’d hoped it would be. We’ll talk a little bit about that as well.
Choosing Cloud Migration
When I came on the project a couple of years ago, when I started working with MJog, this is what it looked like. We had essentially a two-layer application. In the cloud, we had a bunch of services that ran, and these were primarily services around sending messages and some billing services as well. Ninety percent, 95% of our logic really was all in this application suite that was installed on-prem. As I said before, this wasn’t installed on our premises. This was in every GP practice. We had thousands of installs across the country. The application had grown over 15, 20 years, and was a real mishmash of different technologies, different libraries, different frameworks, and all sorts. We had Delphi services. We had Java services. We had PHP 5. This was only a couple of years ago, and we still had PHP 5. We also supported multiple SQL databases, primarily SQLite and MS SQL.
Another aspect was that it only ever ran on Windows. It was all Windows based. From that on-prem application suite, we were also able to talk to the EMRs that we supported, EMIS and SystmOne. Roughly speaking, this is what the services looked like. When I say services, really, these were Windows services. They’re not services in the domain-driven sense of the word. As you can see, we had a real spaghetti mess of stuff here. We had services that basically could talk to anything they wanted. We had a database in the middle that wasn’t wrapped with an API, so if any of our services wanted to talk to it, they’d just open a straight SQL connection and talk to it with raw SQL. To make things way more fun, our developer had also decided that all of our SQL statements would just be raw strings. We not only had raw SQL strings for one SQL dialect, we had raw SQL strings for two.
Then, on top of that, we’ve got a real mix of business logic all over the place. The Delphi services dealt with pushing messages to our messaging server, but instead of also pulling down the replies in that same sync, someone decided it would be much more fun to stick that in a standalone PHP service. We’d push messages with Delphi. We’d pull down replies with PHP. It was a real mess, and it was a real mix of everything, really. We were basically drowning under a mountain of tech debt, not to mention the fact that we were faced with a massive change in what our customers wanted. Before COVID, in healthcare particularly, everything had to be on-prem, because we don’t trust anybody. If it’s on-prem, then we can trust it, which is a brilliant premise.
Suddenly, after COVID, they didn’t want anything on-prem anymore. The GPs would work from home, and they wanted to be able to connect to their system, or our system, without having to get onto a VPN. They also wanted to be able to connect with a Mac or whatever system they wanted. They didn’t necessarily want to be stuck with Windows. That was a big problem. We also had the fact that our primary developer, the person who’d written 95% of that client code and, more importantly, the only person in our company who knew Delphi, was going to leave us in six to nine months.
What was on fire? You can’t live like this. We were getting it in the ear about everything. We were being told that our UX was old, that we had to update our libraries, and so on. When you’ve got so much noise, it’s often hard to see the forest for the trees. We sat down and we said, we’ve got all this stuff, but what are really the biggest problems that we face? What are the things that are really going to be a problem tomorrow? Because that’s the stuff we need to work on. When we did that, we realized that really there were two things that stood out amongst everything else. The first was our customers. We had a bunch of customers whose renewals were up within the next two or three months. They were basically saying, if you don’t have a cloud version, we’re not going to renew. That’s a pretty major problem for a business that relies on customers. The other problem we had was our developer leaving.
If we don’t have anybody that can maintain and manage our systems, particularly our Delphi, because he was the only one that knew Delphi, that was going to be a real problem for us very quickly. With those two things in mind, we realized it was an obvious choice. We had to move into the cloud, and we had to replace the Delphi. We figured that since we were going to move into the cloud anyway, it would be the perfect time to replace the Delphi as well. That was just the first of a billion other questions that we had to answer. We knew we wanted to go into the cloud, but we had two options. We could do a lift and shift, or we could do a cloud rewrite. A lift and shift is pretty straightforward. Everybody knows what that is. It’s easy. We’re in AWS. Other EC2s are available. You get an EC2, you stick your application on it, and off you go.
Our support team would love it, because they would know how it worked. It would be fairly straightforward. I could deliver that in a month, maybe two. Maybe a bit of automation. We’re done. The problem is, it’s going to cost us a fortune. We would have to have fairly beefy EC2s for each of our customers, which meant that we would basically wipe out any profit we were going to make. That wasn’t a great option. It also didn’t really solve the fact that our Delphi developer was leaving us.
Again, it solved one of our problems, but not all of them. The cloud rewrite was the other option. As a developer, I love doing a cloud rewrite. I get to start again. I get to write the shiny new things. That’s brilliant, and that’s fun. It’s certainly going to solve our cost problem. Instead of having an EC2 for every customer, I’m going to make a small handful of services, and that will serve all of our customers. It’s going to be way cheaper. The only problem is that our customers will be long gone by the time we get done. Six to nine months was a rough estimate. I think it took us 10.
Here was another solution that was going to be cheaper but wasn’t going to solve our customer problem. Neither of these solutions was going to work on its own. What did we do? We did what every normal person would do. We did them both. It is fine to do both. Sometimes you don’t have an option. You have to be pragmatic. You have to be realistic about what your options are. In fact, doing both solved both of the problems. We did our lift and shift. That was quick. Then we moved straight on to our cloud rewrite. It took a little bit longer overall, maybe, but we were able to solve both of our problems. Importantly, we kept our customers.
The other side of this was that it is really important to take the time. What we learned from this is, you’ve got to take the time to understand what your problems are, especially when everything’s on fire. We’ve all been there. Everybody is yelling at you. You’ve got to just take a pause and work it out, because it’s way better to do the thing that’s important than to do 100 things that don’t matter.
Designing the Architecture
I’m not going to talk about the lift and shift. We lifted and we shifted. Boring. It’s done. Let’s talk about the cloud rewrite. That’s the fun stuff. Remember, this is what we had, a spaghetti mess. When you’re trying to migrate from one system to another, you’ve got to first get a real handle on what it is you’re actually trying to migrate. Like I said before, we had all these things that look like services, but weren’t services. We didn’t really understand what the underlying functionality of our system was. We were all fairly new to the application. While we saw these services, we didn’t really understand the functionality. Nobody had ever looked at trying to separate functionality by domain or any of that. We had to do that.
Then, once we had that, we figured we could do an implementation from that. The first thing we did was work out what functionality we had. We worked it out by the language it was written in. In the Delphi code, we had a bunch of logic around connecting to EMRs, so reading and writing from external medical record systems. We also had a bunch of logic around synchronizing data between external systems and our database. There was also a really big bit of Delphi logic around managing the on-premise software, so starting services, stopping services, managing updates, and things like that. We had a task scheduling system in Java. I don’t know why they built it in Java. Why not? In PHP, we also had a bunch of sync services. Our UX was all written in PHP. Remember, this was PHP 5 at the time.
The UX in particular really tightly coupled both business logic and UI concerns. It was actually quite a bit of a mess. We had a whole bunch of UI stuff. We had things like being able to create messages and create campaigns in the PHP as well. Once we did this and we thought about it for a bit, it actually became quite simple to say, we could take this and we could turn that into something that resembles domains in some way, shape, or form. We did. We realized that, maybe it’s not a domain per se, but it’s at least a set of templates. We realized that our sync services, all the syncs that we were doing, were all roughly the same, and they would really fit nicely together, and we could develop that as a suite of repeatable services. We realized that the software manager wasn’t going to be necessary. We could delete that. We’re in the cloud. We’ve got GitHub Actions for that crazy.
We also realized that we had a really nice, totally separated domain around connecting to EMRs. What was important about this was that we also realized that our parent company, whose primary purpose was running GP appointments online, would really love the ability to read and write from local EMRs. We thought, that’s going to be a great standalone service. Task scheduling is a bunch of cron jobs. That can stand on its own. That left us with this little thing in the middle that really didn’t fit anywhere. Reminder generation was basically: when we received an appointment for a patient at a practice, we would then generate a message for that patient and send it to them, for example, the day before. It wasn’t really a sync in the way that the other services were a sync, but it wasn’t really business logic or UX logic, like creating campaigns or creating messages. We left it a little bit on its own. Remember that, because we’ll come back to it.
We took all that, we distilled it down, and we went, this is simple. We don’t need all that crazy complexity. We can actually make some really clear patterns here. We can really take the domains that we identified and make them make some actual sense. We did that. We had data services: basically, we wrapped our database in an API, and that could just sit there and handle any of the requests to access data or to write data.
The application services, the PHP, really could just sit on their own. They didn’t need to change much. We just put them there. They could talk to the database as needed, and that would be fine. What we could do was take all the complexity of those sync services and put it into a little suite of small services. They could be our interface between our data and any external system we needed. That was something that we could template and, essentially, copy and paste from service to service, or sync to sync, as needed. Then we could just create a little standalone scheduling service that would manage all of our syncs for us.
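To make that concrete, here is a minimal sketch in Java of what such a repeatable sync template might look like. This is purely illustrative, not our actual code, and the names (SyncSource, SyncTarget, SyncJob) are hypothetical: the point is that every sync is just “read changes from one side, write them to the other”, so each new sync is a different pairing behind the same skeleton.

```java
import java.util.List;

// Minimal sketch of the repeatable sync template; all names are illustrative.
interface SyncSource<T> {
    List<T> fetchChanges(String accountId); // pull new or changed records for one account
}

interface SyncTarget<T> {
    void apply(String accountId, List<T> records); // write them to the other side
}

class SyncJob<T> {
    private final SyncSource<T> source;
    private final SyncTarget<T> target;

    SyncJob(SyncSource<T> source, SyncTarget<T> target) {
        this.source = source;
        this.target = target;
    }

    // Each concrete sync (push messages, pull replies, pull appointments, ...)
    // is just a different source/target pairing behind the same skeleton.
    void run(String accountId) {
        target.apply(accountId, source.fetchChanges(accountId));
    }
}
```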
As I said before, we’d create an EMR service as one of the external endpoints that those syncs would connect to. The messaging service and the billing service already existed; they were already things that were in the cloud. We weren’t going to touch them because we didn’t want to take on any more than we absolutely had to. We’re not going to talk about the messaging service and the billing service. As I said, they already existed. We didn’t do anything there. I’m not going to focus much on the scheduling service or the data service or the application services, not because they’re not interesting. The data service was a database API. Not really that interesting. The scheduling service was a bunch of cron jobs that we put into EventBridge, again, not very interesting. The application services we barely touched. The interesting stuff was the EMRs and the sync services. That’s what we’re going to focus on.
Implement Cloud Native Services
Let’s start off, and let’s talk about the EMRs. The EMRs aren’t like any normal external system. This is healthcare, and it is behind the times. We had the two EMRs that we had to connect to. One of them allowed us to connect over HTTP. We could connect over the internet, which was great. Although I don’t know why I have to say that that’s great. But we couldn’t just connect to it like a normal person. We had to wrap a Windows DLL to be able to make those connections. When we did that, we would then get back a bunch of complicated XML. That was a bit of a pain in the butt. It also meant that to do anything with that, we had to be running on Windows, which is expensive. SystmOne, on the other hand, didn’t force us to run on Windows. The only problem with SystmOne is that you have to be on-premise to be able to connect to it. Yes, this is 2022.
Also, with SystmOne, you couldn’t connect over HTTP. You had to literally open a TCP port and make direct TCP calls to it. Then wait for it to respond when it decided it was ready to. Then you’d get some XML back. We had no control over these systems, and we had no way of trying to make them better or to argue that we want something that’s at least 2015, maybe. We just had to accept what it was. We also had no real way of being able to monitor these systems. Even though they both serve basically the same thing, same types of content, they both have contacts, they both have appointments, the XML was wildly different. The models they used were wildly different. We had to work out, how were we going to manage this in the cloud?
We knew that we had to do this. My vote was to say, to heck with it, in slightly stronger language, and just not do it. That wasn’t an option. This is a really critical part of our application. We also knew that we had to expect that those systems were going to change at a rate that we had no control over, and, in fact, often without us doing anything. We also wanted to make sure that our calling code wouldn’t have to worry about the complexities of whether we were calling something on-prem, or calling something in the cloud, or using carrier pigeon. This is what we built. It’s not that exciting. It’s not that revolutionary.
It’s a proxy API with a couple of adapters below it. What it let us do was hide all that crazy from our calling code. It meant that our parent company, should they want to, or, for that matter, any of you, if you want to and want to pay me enough, could connect to patient data in a consistent and standardized manner. In the EMR API, that proxy layer, we expose standardized domain models. We standardized the query language so that we could just make one client that can make calls, and it wouldn’t matter what underlying EMR you were connecting to. All the crazy could be hidden in the connectors. All of the transformation logic would be done there. This was great, because it scales both vertically and horizontally.
If we want to add more EMR connectors, no problem. Just build a connector. In fact, we built one of these connectors in about a week. That would be easy. If we wanted to scale horizontally, if the throughput was increasing, we could easily do that as well, because we could just double up the connectors, or triple or quadruple. It scaled really nicely. Finally, it was totally standalone. We took something that was heavily dependent on MJog, and we totally separated it so that anybody could use it. That was what we did with the EMRs.
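As a rough illustration of that proxy-plus-connectors shape, here is a hedged Java sketch. The Appointment model, the EmrConnector interface, and the EmrProxy class are hypothetical names for the idea, not the real EMR API: the proxy exposes one standardized model and routes each call to whichever connector hides that particular EMR’s crazy.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.util.List;
import java.util.Map;

// Hypothetical standardized domain model exposed by the proxy layer.
record Appointment(String patientId, String practiceId, Instant start) {}

// One standardized interface, whatever the underlying EMR actually speaks
// (HTTP wrapped around a Windows DLL, raw TCP on-premise, or anything added later).
interface EmrConnector {
    List<Appointment> appointmentsFor(String practiceId, LocalDate day);
    void writeToPatientRecord(String patientId, String note);
}

// The proxy API routes a standardized request to whichever connector serves that
// practice; all the XML parsing and transport quirks live inside the connectors.
class EmrProxy {
    private final Map<String, EmrConnector> connectorByPractice;

    EmrProxy(Map<String, EmrConnector> connectorByPractice) {
        this.connectorByPractice = connectorByPractice;
    }

    List<Appointment> appointmentsFor(String practiceId, LocalDate day) {
        return connectorByPractice.get(practiceId).appointmentsFor(practiceId, day);
    }

    void writeToPatientRecord(String practiceId, String patientId, String note) {
        connectorByPractice.get(practiceId).writeToPatientRecord(patientId, note);
    }
}
```

In this shape, the calling code only ever sees the proxy and the standardized models, which is what makes scaling vertically (add another connector) and horizontally (run more instances of a connector) straightforward.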
Let’s move on to synchronization. We had a whole bunch of syncs that went on. We probably had a dozen or so. I’m not going to focus on all of them. We’re just going to look at the two that were really interesting. We had messaging services, and that was a service that we controlled. We both sent messages to that service, and we’d pull down things like replies and delivery receipts. On the EMR side of things, again, we were doing the same thing. We’d pull appointments and patients from the EMRs, and then we’d write back to the patient records. If we had sent a message to a patient, we would write that back to their record, and say, we told them to come get a COVID jab, but they believe it’s full of 5G and they don’t want one. What was nice, we realized, was that all of the syncs were well separated.
They all had very distinct and separated data. We were either pushing messages, or we were pulling down replies, or we were getting appointments. Everything was very nicely segregated. Our messaging service, we built that, and by and large, that thing runs fast. We control it. We know how it operates. It goes. We didn’t have to worry about it a lot. Contrast that to the EMRs, and the total opposite is true. The EMRs are fragile. Don’t forget, one of the EMRs sits on-premise, and on Saturdays and Sundays, they turn it off. We never knew if the thing was going to work, and we never knew when it would work. We had a real mix of issues that we had to deal with. We had those requirements, and then we tried to work out, what are we going to do? Are we going to build a monolith? Are we going to build microservices? Again, really, they all do the same thing. They take a bit of data here, they do something with it, and they stick it over there.
All the data is really nicely separated into different domains. The domains are already there for each sync, and they’re very nicely separate. We had a massive disparity in timings, not only between the messaging services and the EMR syncs, but also if we had to sync a lot of messages or a lot of patients, we could have a sync that would run, in some cases, for hours. Then we would also have other syncs, with a lot less data, that would complete in minutes, or even seconds. Tied to that, but slightly different, is the fact that we also had different load requirements. Sometimes we’d have to push thousands of messages through. Other times we wouldn’t have to put any messages through.
That requirement would change depending on the account, the customer. It would change depending on the time of day. We had no control over that. One account might push 1000 messages at 9:00, the next account might do it at 10:00. You never knew, and you had no idea what was going to happen.
Primarily because of the spiky profiles, in both time and load, we decided that the best thing to do here was to go with microservices, because that allowed us to separate, scale, and manage all of the different syncs independently. We could have put them all in a monolith, and it probably would have worked fine, but then you’re sharing all the resources and all of that. We just thought, this is going to be best suited to microservices. That’s what we did. That wasn’t the end of the discussion.
The next question was, do we want to go serverless, or do we want to go containerized? This is basically what our services look like. We started off with a trigger of some description, and we’d query a bunch of data. We’d then do something with that data, and we’d save it somewhere, a DB, an external system, whatever it was. What was important to understand is that we’re not lucky enough to be Netflix or Uber, or maybe unlucky. We operate very much at a human scale, thousands of requests. We don’t have to deal with millions. Our sync processes can take anywhere from seconds up to, in the worst case, an hour and a half. That was already going to be well beyond the 15 minutes allotted for a lambda.
The other aspect, and this was actually probably one of the more crucial aspects of our decision, was that we didn’t have anybody on the team who had experience of running event-driven microservices. Just as an example, this is what we’ve got. Logically, this is what happens. If we wanted to make this fit in a lambda, taking into account the time constraint, we’d have to add queues. We probably wouldn’t just have to add two queues. We’d probably have to add one or more in the business logic as well, and maybe another one in between the trigger and the query, depending on how long those queries took. You can suddenly see that this goes from being easy to being a little bit more complicated.
For that reason and our lack of expertise in the area, we decided that, for now, the best option was to go with containers. That doesn’t preclude us from further down the road making the switch to lambdas if we then need to. This was by far the best option here, because it’s going to be the easiest to implement. Our teams understand it. It doesn’t stop us from evolving as systems change on a per sync basis in the future.
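To show the shape we landed on, here is a hedged sketch of one containerized sync, with purely hypothetical method names. The whole trigger, query, business logic, save pipeline lives in one long-running process, so a run can take seconds or an hour and a half without hitting a Lambda-style 15-minute cap, and without introducing queues between the steps.

```java
import java.util.List;

// Illustrative container-based sync: the whole pipeline runs in one process.
class MessagePushSync {

    // Called by the scheduler once per account (the trigger).
    void run(String accountId) {
        List<String> pending = queryPendingMessages(accountId); // query a bunch of data
        List<String> prepared = applyBusinessRules(pending);    // do something with it
        saveToMessagingService(accountId, prepared);            // save it somewhere
    }

    private List<String> queryPendingMessages(String accountId) {
        return List.of(); // placeholder: read from the data service API
    }

    private List<String> applyBusinessRules(List<String> pending) {
        return pending;   // placeholder: templating, opt-outs, batching, and so on
    }

    private void saveToMessagingService(String accountId, List<String> prepared) {
        // placeholder: push the prepared messages to the messaging service
    }
}
```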
What discussion of microservices would be complete without an argument about orchestration and choreography? In most ways, actually, it was a fairly straightforward decision. We needed to run on a schedule. We needed to run and trigger each of our customers independently. We can probably argue about this, but really it was an orchestration. We ran with an orchestration. Remember we talked about that little red box in the middle? That was reminder generation. Reminders weren’t triggered on a schedule. Reminders weren’t created one account at a time.
They happened whenever an appointment was received. It was an event-driven choreography. We had orchestrations for all of our services, and we had one choreography. The two most important things that we worked out during this were: only decompose to the level that you have to decompose to, and keep it at a level your teams can manage. You can have the best microservice architecture in the world, but if it’s too complex for your teams to be able to manage, there’s no point to it.
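Purely as an illustration of the difference, and reusing the hypothetical MessagePushSync from the earlier sketch, the two styles might look something like this: the orchestrator is told by the schedule to run every account’s sync, while reminder generation simply reacts to each appointment event as it arrives.

```java
import java.time.Instant;
import java.util.List;

// Orchestration: a scheduled trigger walks through the accounts and tells each sync to run.
class SyncOrchestrator {
    private final List<String> accountIds;
    private final MessagePushSync sync; // hypothetical sync from the earlier sketch

    SyncOrchestrator(List<String> accountIds, MessagePushSync sync) {
        this.accountIds = accountIds;
        this.sync = sync;
    }

    // Invoked by the scheduling service, for example an EventBridge cron rule.
    void onSchedule() {
        accountIds.forEach(sync::run);
    }
}

// Choreography: reminder generation just reacts whenever an appointment event arrives,
// not on a schedule and not one account at a time.
class ReminderGenerator {
    void onAppointmentReceived(String patientId, Instant appointmentTime) {
        // placeholder: build and queue the reminder message for the day before
    }
}
```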
Observability
Let’s move on. Let’s talk about observability. On-prem, we didn’t have observability at all. We had audit logs, we had application logs, but they were on-prem. We couldn’t get to them. We could get to them if we called up the receptionist and asked her if we could get onto a remote desktop. You can imagine how fun that was. Also, let’s say we got onto the remote desktop, whatever’s gone wrong, how do we make that happen again? How do we track or log what’s gone wrong? This was a really interesting one. The other problem we had was that we only had an understanding of what happened for one account at a time. We were moving all of our accounts into the cloud, so we had no real idea at all about how it was going to react once we got into the cloud.
Suddenly, we went from having all these systems and all these services running on separate machines to having them all running against one set of machines. How was that going to work? We had no idea what was going to happen, really. What we decided from the very beginning was that we would make observability a first-class citizen. Everything that we did was all around making sure we could tell what was going on with our services. We did the simple things. I can’t stress this enough: we put a correlation ID on every API call. It allowed us to track what was happening in our applications across all the services. It was a godsend. It was amazing. We also used structured logging so that we could add a lot of our own custom metadata to all of our logs and generate metrics from them. It meant that we could start segregating our logs, not only by correlation ID, but also by account or by anything else.
We also realized that, because we had sync services, each sync was a really nice, self-contained process that generated its own metrics. At the end of every sync, we started generating summary metrics to say, for example, how many patients were processed in that sync, and how many messages were sent during that sync. We started developing a massively rich set of data that we could look at to understand what was actually going on in our systems, what was happening, and how they were performing.
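As a minimal sketch of those simple things, assuming an SLF4J-style logging setup (our actual field names and tooling will differ), the pattern is just: put a correlation ID and the account into the logging context at the start of a run, and emit one structured summary line at the end.

```java
import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

class ObservableSyncRunner {
    private static final Logger log = LoggerFactory.getLogger(ObservableSyncRunner.class);

    void run(String accountId) {
        // Reuse a correlation ID if one arrived on the API call; otherwise mint one.
        MDC.put("correlationId", UUID.randomUUID().toString());
        MDC.put("accountId", accountId);
        try {
            int patientsProcessed = doSync(accountId);
            // End-of-sync summary: structured fields we can filter on and turn into metrics.
            log.info("sync complete, patientsProcessed={}", patientsProcessed);
        } finally {
            MDC.clear(); // don't leak this run's IDs into the next run on the same thread
        }
    }

    private int doSync(String accountId) {
        return 0; // placeholder for the real sync work
    }
}
```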
Yes, correlation IDs. The simple things often give us the best rewards. That’s definitely true of correlation IDs. Because we didn’t know how things were going to react in the cloud, the other thing we realized was that what we thought we wanted to observe changed over time, changed as we started using the system. We now regularly go back, on a weekly or biweekly basis, to our dashboards, and rewrite them or change them or modify them, because we’re constantly learning about how our systems are operating. We’re constantly changing what we want to find out about those systems. We’re also still learning. We still have the age-old problem of, when is something in error, and when is it not? We also have the joyous problem of our EMR being turned off on Saturdays and Sundays, and at 6:00 in the evening.
During the day, not connecting to that EMR is a problem. At night, it isn’t. How do you manage that pain in the backend? We’re also still trying to work out how to manage our structured data. How do we want to filter it? How do we want to maintain it? We’ve looked at key-value pairs. We’ve looked at namespacing that metadata. We’re still trying to work out what the best way to manage all of that is.
Defining Requirements
Let’s talk about how we defined our requirements. Remember, we had a really complicated set of code that we were trying to copy from on-prem into the cloud. Most of it, especially the stuff that was written in Delphi, was written procedurally, and that didn’t transliterate well into object-oriented Java. We had a ton of code that was copied and pasted. Sometimes that code would be copied and pasted and be the same. Sometimes it’d be copied and pasted, and one variable would be changed, or one bit of logic would be changed. We had no idea whether it was used. We had no idea why it changed from time to time. It was really difficult to understand what was happening.
Being the cavalier cowboy that I am, the first thing we decided to do, because we’ve got existing code, so it’s going to be easy, was just to write stories that said, here’s what the existing code does, do it in Java. That was not great. We ended up copying errors and bugs from one to the other. We also worked out that just because you know what the code is doing doesn’t mean you understand why it’s doing it. We were trying to transliterate into Java, but we just ended up making loads of subtle but very hard to understand mistakes. When we couldn’t get the new code to match the legacy code, it was really hard to understand why that was happening, because we couldn’t reason about what the code was supposed to do. It was just totally impossible to understand. That was really not a good solution. I think we stuck at it for about two sprints, and then we realized it just wasn’t working.
The other option, which is what we decided to do, was to spend the time upfront to understand what the requirements of the code we were trying to copy actually were. Once we did that, we could write normal, straightforward user stories that the engineers could just implement in Java, and they would understand why they were doing it, and they could make it actually work in Java. That was a massive change. The throughput just went through the roof. That was the way to do it. If you’re ever in that situation, definitely spend the time upfront to understand what your code is doing before trying to just copy it.
Recap
This is what we had, a spaghetti mess where everything was on fire, everything was broken, everybody was shouting at us. This is what we built: streamlined, separated into sensible, clear domains. It’s always hard when you’ve got competing interests, when you’ve got different people asking you for different things, telling you that this is the most important thing, or that’s the most important thing, or this is going to break, or whatever. You have to take the time to understand what really is going to be a problem, because people like to shout, even if it’s not the most important thing.
Also, remember, you’re never done. Software is continually changing. We can use that change and that knowledge that it’s going to change to our advantage. We can use the evolution of software to our advantage. We’ve got Charles Darwin here, “It’s not the most intellectual or the strongest of the species that survives, but the one that’s best able to adapt and adjust to their changing environment”.
Key Takeaway
I’ll leave you with, I think, what I’ve learned from this. We always talk about tech debt as being something that is a necessary evil. We’ll do it that way. We will have some tech debt, but that’s ok, because we’ll get to it. Never. Tech debt is what you get when you take shortcuts. Instead, try not to create tech debt. Instead, try to make sure that you’re setting up your software to evolve. Make sure that you’re able to change as quickly and as often as necessary as your environment changes. Because, remember, what you know to be true today isn’t what is going to be true tomorrow.