Key Takeaways
- Transitioning from legacy on-premises systems to cloud-native services requires carefully balancing immediate needs and long-term goals like scalability and sustainability.
- A hybrid approach, such as combining lift-and-shift strategies with cloud-native rewrites, can help businesses meet pressing customer needs while building a future-proof platform.
- EMR integration is a complex task, especially in healthcare, where outdated systems often force costly and awkward workarounds, highlighting the need for creative cloud-native integration strategies.
- Using microservices and modular architecture enhances scalability and flexibility, particularly when handling varied workloads like EMR synchronization, but observability must be prioritized to monitor and improve system performance.
- Investing time in clearly defining requirements and avoiding direct replication of legacy code can prevent errors and streamline cloud migration, leading to more efficient development and better system adaptability.
The main objective of this article is to explore the challenges and strategies involved in transitioning legacy healthcare systems to cloud-native architectures. Our company, Livi, is a digital healthcare service. I will walk through the challenges we faced when modernizing our MJog patient messaging product, including how we integrated with legacy Electronic Medical Records (EMRs), managed data synchronization, and decided between serverless and containerized solutions.
By highlighting the lessons we learned along the way, I hope to offer insights that can help others navigate similar cloud migrations while ensuring scalability and adaptability in an ever-changing tech environment. This article is a summary of my presentation at QCon London 2024.
Livi is at the forefront of revolutionizing healthcare delivery. As a healthcare services provider, we facilitate online GP appointments, connecting patients with General Practitioners (GPs) through our innovative platform. This blend of technology and healthcare enables us to serve National Health Service patients and those seeking private consultations.
A pivotal development in our journey was the acquisition of MJog, a patient relationship management system designed to enhance communication between GPs and their patients. MJog enables healthcare providers to send everything from batch messages and questionnaires to crucial appointment reminders.
An integral feature of our application is its ability to integrate seamlessly with Electronic Medical Records (EMRs), which are essential for storing patient data. In the UK, the landscape of EMRs is dominated by two primary systems, but numerous others exist globally. Our application is designed to connect with these systems, allowing for efficient data exchange and enhancing the overall patient experience.
Transitioning from on-premises systems to cloud-based architectures poses numerous challenges, especially in industries as complex as healthcare. Our journey was no exception. As we navigated the intricacies of this transformation, several vital lessons emerged that are valuable for any team facing similar hurdles.
The migration of our legacy MJog application from on-premises to the cloud was a comprehensive, five-step process:
1. Assess the necessity of migration, identifying the strategic need to modernize our infrastructure for scalability and improved performance.
2. Redesign our architecture to move from the on-premises model to a cloud-optimized framework.
3. Make critical decisions around implementing cloud-native features while balancing legacy system requirements.
4. Embed robust observability, with monitoring tools that ensure visibility and reliability across the cloud environment.
5. Define clear requirements by mapping the functionality of the legacy system to its cloud counterpart, a process more complex than anticipated but essential for ensuring consistency and functionality in the new environment.
MJog Architecture
When I joined the MJog project, the system was a complex two-layer architecture with most of its logic housed on-premises at GP surgeries across the UK. The on-premises system had grown over 15-20 years into a tangled mix of outdated technologies, including Delphi, Java, PHP 5, and both SQLite and MS SQL databases. It was entirely Windows-based, with services directly accessing databases through raw SQL connections. Business logic was scattered across services, creating inefficiencies. We faced enormous technical debt.
Following COVID, GPs wanted remote access to our applications without VPNs, as well as compatibility with non-Windows devices. Compounding this, our key Delphi developer was leaving the company, forcing us to urgently address the system’s modernization.
The Necessity of Change
Early on, we recognized that evolving our application was not just an option; it was a necessity. The technology stack was outdated, and we were getting requests from all sides: our user experience was aging, our libraries needed updating, and customers were demanding cloud solutions. On top of that, we were facing a crucial deadline: some of our biggest customers had contracts up for renewal and were clear they would not stay with us unless we offered a cloud version. To complicate things further, our only Delphi developer, who knew the ins and outs of our legacy code, was preparing to leave. These two issues meant we had no choice but to move forward and modernize.
We faced two primary options: a quick lift-and-shift to the cloud or a complete cloud-native rewrite. The lift-and-shift would have been fast, keeping us operational, but it was expensive and didn’t solve the long-term issues, particularly the reliance on Delphi. The cloud rewrite, on the other hand, would reduce costs and future-proof our platform but would take too long, risking customer churn. Realizing that neither approach alone would work, we did both. We swiftly moved to the cloud to keep customers happy, then focused on the rewrite to ensure scalability and sustainability. This hybrid approach allowed us to meet immediate demands while laying the foundation for future growth.
Reimagining the Architecture for the Cloud
After completing the lift-and-shift migration to the cloud, we focused on the more complicated task: the cloud rewrite. Our legacy system was a mess of various technologies, including Delphi, Java, and PHP, which lacked a clear separation of responsibilities. Before we could build a more maintainable, cloud-friendly architecture, we had to understand each component’s underlying functionality.
To start, we categorized the existing services by language and purpose. The Delphi code contained critical logic for connecting to Electronic Medical Records and synchronizing data between external systems and our database. Java was used for task scheduling, while PHP managed our user interface, which was tightly coupled with business logic. This examination revealed natural groupings within our system, allowing us to think more clearly about how to redesign it for the cloud.
We recognized that our sync services could be templated into reusable components, streamlining future development. Task scheduling was simplified to a straightforward set of cron jobs managed by AWS EventBridge, which made it easier to orchestrate processes without the complexity of the previous implementation. Additionally, we decided that the EMR connections deserved a standalone service, as this functionality would be valuable for our application and could be used independently by our parent company, which focuses on online GP appointments.
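To make the scheduling side concrete, the snippet below is a minimal sketch of how a nightly sync could be registered as an EventBridge cron rule using the AWS SDK for Java v2; the rule name, schedule, and target ARNs are illustrative placeholders rather than our actual configuration.

```java
import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutRuleRequest;
import software.amazon.awssdk.services.eventbridge.model.PutTargetsRequest;
import software.amazon.awssdk.services.eventbridge.model.RuleState;
import software.amazon.awssdk.services.eventbridge.model.Target;

public class SyncScheduler {

    public static void main(String[] args) {
        try (EventBridgeClient eventBridge = EventBridgeClient.create()) {
            // Create (or update) a scheduled rule that fires every night at 02:00 UTC.
            eventBridge.putRule(PutRuleRequest.builder()
                    .name("emr-sync-customer-123")          // hypothetical rule name
                    .scheduleExpression("cron(0 2 * * ? *)")
                    .state(RuleState.ENABLED)
                    .build());

            // Point the rule at whatever runs the sync (a container task in our case).
            eventBridge.putTargets(PutTargetsRequest.builder()
                    .rule("emr-sync-customer-123")
                    .targets(Target.builder()
                            .id("emr-sync-target")
                            .arn("arn:aws:ecs:eu-west-1:123456789012:cluster/sync")        // placeholder ARN
                            .roleArn("arn:aws:iam::123456789012:role/eventbridge-invoke")  // placeholder role
                            .build())
                    .build());
        }
    }
}
```

Keeping one rule per customer sync leaves each schedule independently adjustable, which fits the per-customer orchestration described later in this article.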
By modularizing the architecture, we significantly reduced the complexity of our system. We wrapped data services in APIs, allowing for easier access to data and promoting a clear separation of concerns. The application services, primarily written in PHP, were left largely untouched as they required minimal modification. Our primary focus was on the sync and EMR services, which became the backbone of our new cloud architecture.
This reimagined system addressed the technical debt accumulated over the years and provided a scalable and maintainable framework for future growth. We created a solid foundation that would allow us to adapt to the evolving needs of our customers while ensuring that our application remained relevant in a fast-changing technological landscape.
Building Cloud-Native Services
In our journey to build cloud-native services, we had to overcome several challenges, particularly when dealing with outdated legacy systems in the healthcare sector. Transitioning to the cloud required us to rethink how we integrated with external services, managed data synchronization, and handled varying workloads, all while ensuring scalability and preparing for future growth.
Connecting to External EMRs
Integrating with Electronic Medical Record systems (EMRs) poses unique challenges, particularly within the healthcare sector, which can be slow to adapt to technological change. For our migration, we needed to connect two distinct EMRs, each presenting its own set of complexities and limitations that complicated our transition to cloud-native services.
The first EMR allowed us to connect via HTTP over the internet, which initially seemed promising. However, the integration process required us to wrap our requests in a 32-bit Windows DLL to establish a connection. This added an unnecessary layer of complexity and meant we were still tied to an expensive Windows infrastructure. Furthermore, the responses came back as complicated XML structures requiring additional processing and handling, which could slow down our application’s performance and responsiveness.
On the other hand, the second EMR offered a different set of challenges. Unlike the first, it required an on-premises connection and demanded direct TCP calls, which complicated the integration by requiring a customer-specific VPN. Additionally, while both EMRs managed similar types of content, such as contacts and appointments, they utilized different XML formats and data models, complicating our ability to create a cohesive integration strategy.
As we transitioned to a cloud-native architecture, we faced the significant challenge of managing these disparate systems effectively. We lacked control over the development and monitoring capabilities of the EMRs, which further complicated our ability to ensure seamless operation within our new cloud infrastructure. Ultimately, we had to devise a strategy that allowed us to integrate these two EMRs and ensure that our cloud-native services could handle the complexities and variations inherent in their data models. This required us to think creatively about normalizing the data and streamlining the integration process, setting the stage for a robust and flexible cloud architecture.
To achieve this, we developed a proxy API with adapters that hid the complexities from our calling code. This approach allowed us to standardize data models and query languages, enabling consistent connections to different EMRs. The system is scalable and independent, making it flexible enough to support additional connectors as needed. Ultimately, this solution decoupled the EMR logic from our core system, allowing for easier integration and growth.
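As a rough illustration of that shape, here is a minimal Java sketch of the adapter idea; the interface, class names, and the common Appointment model are invented for the example and are not our production code.

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;

// Common data model the rest of the system works with, regardless of EMR.
record Appointment(String patientId, String clinicianId, LocalDate date) {}

// Each EMR gets an adapter that speaks its own protocol and XML format,
// but exposes the same normalized operations to callers.
interface EmrConnector {
    List<Appointment> fetchAppointments(String practiceId, LocalDate from, LocalDate to);
}

class HttpEmrConnector implements EmrConnector {
    @Override
    public List<Appointment> fetchAppointments(String practiceId, LocalDate from, LocalDate to) {
        // Call the internet-facing EMR over HTTP, parse its XML, map to Appointment.
        throw new UnsupportedOperationException("sketch only");
    }
}

class OnPremEmrConnector implements EmrConnector {
    @Override
    public List<Appointment> fetchAppointments(String practiceId, LocalDate from, LocalDate to) {
        // Reach the on-premises EMR over the customer VPN, speak its TCP protocol,
        // and map its data model to the same Appointment record.
        throw new UnsupportedOperationException("sketch only");
    }
}

// The proxy API picks the right adapter per practice, so calling code never
// needs to know which EMR a surgery uses.
class EmrProxy {
    private final Map<String, EmrConnector> connectorsByPractice;

    EmrProxy(Map<String, EmrConnector> connectorsByPractice) {
        this.connectorsByPractice = connectorsByPractice;
    }

    List<Appointment> appointmentsFor(String practiceId, LocalDate from, LocalDate to) {
        return connectorsByPractice.get(practiceId).fetchAppointments(practiceId, from, to);
    }
}
```

The important design choice is that only the adapters know about the individual EMRs; everything above the proxy works against the normalized model, so adding a third connector later does not touch the core system.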
Synchronization Challenges
Synchronization of data was another critical aspect of our architecture. We managed multiple sync processes, including messaging services and EMR data synchronization. The messaging services, which synchronized data between our databases and our SMS and email providers, were largely under our control and ran efficiently and predictably. In contrast, the EMR integrations presented challenges due to their fragile nature, often depending on on-premises systems that could be unexpectedly turned off.
We faced a significant disparity in performance and load requirements across these syncs. Some would complete in seconds, while others took hours. Given the unpredictable nature of these demands, we opted for a microservices architecture. This choice allowed us to manage and scale each sync process independently, ensuring that varying loads could be handled efficiently without impacting other services.
Choosing Between Serverless and Containerized Solutions
A crucial decision in our journey was whether to adopt a serverless architecture or utilize containers. Our team’s lack of experience with event-driven microservices led us to favor containers for now. Since our processes could take longer than the 15-minute limit imposed by serverless solutions like AWS Lambda, containers provided a familiar, manageable approach that our teams would be comfortable supporting. However, we ensured this choice did not close the door to future serverless options. As our expertise grows and demands evolve, we can transition to serverless architectures when appropriate.
Orchestration vs. Choreography
The discussion around microservices often includes a debate between orchestration and choreography. For our architecture, we primarily implemented orchestration to manage scheduled tasks and trigger processes independently for each customer. In contrast, specific functions like reminder generation operated on an event-driven choreography model, reacting to incoming appointments rather than being triggered on a fixed schedule.
This blend allowed us to leverage the strengths of both methodologies while maintaining simplicity. It also gave us the foundations of a flexible “plug-in” architecture, which would allow us to easily add or remove functionality as requirements changed.
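To illustrate the choreography side, the fragment below sketches a reminder generator that reacts to appointment events instead of running on a schedule; the event shape and handler wiring are assumptions made for the example, not our actual message contract.

```java
import java.time.Instant;
import java.util.function.Consumer;

// Hypothetical event published whenever an appointment is synced from an EMR.
record AppointmentCreated(String patientId, String practiceId, Instant appointmentTime) {}

// Choreography: the reminder service subscribes to events and reacts,
// rather than being invoked by a central scheduler.
class ReminderGenerator implements Consumer<AppointmentCreated> {
    @Override
    public void accept(AppointmentCreated event) {
        // Decide whether and when a reminder should be sent for this appointment,
        // then hand it to the messaging sync for delivery.
        System.out.printf("Scheduling reminder for patient %s ahead of %s%n",
                event.patientId(), event.appointmentTime());
    }
}
```

The scheduled, orchestrated work (the per-customer syncs) and the event-driven work (reminders) can then coexist, each using the coordination style that suits it.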
Prioritizing Observability
Establishing a robust observability framework became a priority in transitioning to the cloud. Initially, our on-premises systems lacked sufficient observability. We relied on audit logs that were challenging to access and analyze.
To address this, we implemented correlation IDs for every API call, enabling us to track requests across services and understand system behavior comprehensively. Additionally, we adopted structured logging, allowing us to include custom metadata and generate meaningful metrics.
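As a minimal sketch of what this can look like in Java, assuming SLF4J with MDC for the correlation ID and a logging backend configured for structured (e.g. JSON) output; the field names here are illustrative rather than our exact schema:

```java
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

class SyncRunner {
    private static final Logger log = LoggerFactory.getLogger(SyncRunner.class);

    void runSync(String practiceId) {
        // Attach a correlation ID to every log line emitted during this sync,
        // so a single run can be traced across services and log streams.
        MDC.put("correlationId", UUID.randomUUID().toString());
        MDC.put("practiceId", practiceId);
        try {
            log.info("sync started");
            int synced = 0; // ... perform the sync, counting records ...
            // Structured summary emitted at the end of the sync, which the
            // monitoring stack can turn into metrics.
            log.info("sync finished, records={}", synced);
        } finally {
            MDC.clear();
        }
    }
}
```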
These practices led to the creation of summary metrics at the end of each sync, offering insights into system performance. We recognized that our understanding of observability would evolve as we interacted with the cloud environment, necessitating regular updates to our monitoring tools and dashboards.
Defining Requirements Effectively
One of the most challenging aspects of our transition was defining our requirements accurately. Initially, we approached the task by attempting to replicate legacy code written in Delphi. This approach, however, led to a copy-paste fiasco where subtle errors proliferated across the new codebase.
Recognizing this issue, we shifted our strategy to focus on understanding the underlying requirements of the existing code before attempting to implement them in Java. This initial investment in requirement analysis paid off significantly, allowing us to create more precise user stories that our engineering team could follow. The performance improvement was significant and highlighted how important it is to thoroughly analyze and understand requirements before starting the implementation.
Conclusion
Our experience illustrates that transitioning from legacy systems to cloud-based microservices is not a one-time project but an ongoing journey. We learned that, in the face of competing priorities and pressures, it is crucial to discern the real challenges and opportunities for improvement. Software is a continually evolving entity, and embracing this change can be an advantage. Adaptability, rather than rigidity, will ultimately determine a system’s longevity and effectiveness.
In retrospect, the lessons learned from our transition are invaluable for any organization looking to modernize its technology stack. As we navigate this landscape, I encourage teams to prioritize adaptability, invest time in understanding requirements, and recognize the importance of observability in system design. Instead of accepting the notion that technical debt is an unavoidable reality, strive to create inherently adaptable systems. Embrace the challenges, learn continuously, and remember that what is true today may not hold true tomorrow. This philosophy will prepare your organization for whatever changes lie ahead.