[Video Podcast] Building Resilient Event-Driven Microservices in Financial Systems with Muzeeb Mohammad

News Room | Published 16 February 2026 | Last updated 16 February 2026, 6:07 AM

Watch the video:

Transcript

Thomas Betts: Hello, and welcome to the InfoQ Podcast. I’m Thomas Betts, and today I’m speaking with Muzeeb Mohammad. Muzeeb is a senior manager of software engineering at JP Morgan Chase and a senior member of IEEE. He specializes in designing secure, resilient, and high-performance distributed microservices for large-scale financial platforms, and that’s what we’re going to be talking about today. So Muzeeb, welcome to the InfoQ Podcast.

Muzeeb Mohammad: Thank you so much, Thomas. It’s a real pleasure to be part of this podcast. I’m looking forward to sharing some of my industry experiences across different organizations.

Foundations of event-driven systems [01:10]

Thomas Betts: Well, great to have you here. InfoQ has covered event-driven systems before, but in your view, what makes a good event-driven system? How do they differ from traditional, say, request and response or monolithic architectures?

Muzeeb Mohammad: So we actually do have a lot of applications built on a monolithic technology stack, like RESTful APIs and SOAP-based web services. But the challenge we were facing was time to market and scaling up those solutions; I think we were having some of the common challenges the industry experiences. So we tried some of the event-driven approaches. Very specifically, we started with Kafka as one of the architecture patterns, and we started seeing very good improvement. I will explain one of the examples that we successfully implemented at SEI, which was one of my previous employers. When a customer tries to open a checking account, it goes through various systems behind the scenes, and we were not meeting the SLA in time. So what we did was introduce a Kafka event-driven system in the backend processing. For example, a customer goes to sei.com and tries to open a checking account.

So what happens is, behind the scenes, we decoupled the whole backend processing. The customer checking account goes through a credit check, which looks at various factors, like whether the customer is eligible to open the account or not. Once those systems are decoupled, the customer account information gets published to a couple of topics, and then there are a couple of consumers. Fraud check is one of the consumers, and account opening is one of the dependent systems, which takes the customer information and opens up a brand new account. We have seen this particular implementation help navigate the customer journey very efficiently, and the various dependent systems process the end-to-end flow asynchronously. So it is a completely decoupled solution we implemented, and ultimately we were able to meet the SLA within the agreed timeline.
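
As a rough sketch of the publish side of the pattern Muzeeb describes here (not SEI's actual code; the topic name, event shape, and class names are invented for illustration, and serializer configuration is omitted), a Spring Kafka producer for the account-application event might look like this:

    // Hypothetical sketch: publish an account-application event that downstream
    // consumers (fraud check, credit scoring, account build) process asynchronously.
    // Assumes a KafkaTemplate configured with a JSON value serializer.
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class AccountApplicationPublisher {

        private static final String TOPIC = "customer.account.applications"; // assumed name

        private final KafkaTemplate<String, AccountApplicationEvent> kafkaTemplate;

        public AccountApplicationPublisher(KafkaTemplate<String, AccountApplicationEvent> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        // Called when the customer submits the application form; publishing is
        // fire-and-forget from the web layer's point of view.
        public void publish(AccountApplicationEvent event) {
            // Key by customer id so events for one customer stay in order on a partition.
            kafkaTemplate.send(TOPIC, event.customerId(), event);
        }
    }

    // Minimal event payload for the sketch.
    record AccountApplicationEvent(String customerId, String productType, long submittedAtEpochMillis) {}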

Trade-offs and benefits of event-driven financial systems [03:40]

Thomas Betts: Yes. I think you talked about the customer journey, and I think that’s what works with event-driven systems. You’re talking about the events. If the business process says, well, the customer requests a new checking account, then that’s the first event that happens. And then other things trigger off of that. “Oh, we need to do a fraud check. We need to do this”. And eventually there’ll be an event that says the new checking account was created, or maybe the checking account was declined.

But there are steps along the way, and each of those events happens. And so you would have different message streams that process those, and you design the system to be asynchronous. I think it’s sometimes counterintuitive that we’re making all of these things asynchronous and out of band, but that can actually speed up the process. You mentioned meeting your SLA demands. How does that trade-off happen? How do you do things separately and it comes out faster?

Muzeeb Mohammad: What we did was look at the architecture solutioning across this whole customer journey. We introduced multiple Kafka topics, one for each of the downstream applications. For example, fraud check is one of the downstream applications that processes independently, and the new account is built once the fraud check is successful after the customer applies.

The next process is to invoke the new account build. So what we did is we had two different Kafka topics, and each of those topics is consumed by multiple consumers. The fraud check process is one of them, the credit scoring check is another, and then there is the actual new account build, which is another process. All these different downstream systems were subscribed to one of the Kafka topics, and each of them processes things independently. At the end, we let the customer know that their account was successfully created, and a notification goes back to the customer.

I think the main success criterion we have seen is that when we introduced asynchronous processing and let the downstream applications process independently, we also gained the freedom to deploy changes more frequently. Because of this asynchronous nature and decoupling, each team can deploy their application solutioning independently without impacting each other.
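
To make the fan-out concrete: each downstream system would typically run as its own Kafka consumer group, so every group receives its own copy of each application event and processes it independently. Below is a minimal sketch of one such consumer, reusing the hypothetical event type from the earlier example; the group and topic names are again assumptions, not the actual configuration.

    // Hypothetical fraud-check consumer. Credit scoring and account build would
    // be similar classes with their own groupId, so each group receives every event.
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class FraudCheckConsumer {

        @KafkaListener(topics = "customer.account.applications", groupId = "fraud-check")
        public void onApplication(AccountApplicationEvent event) {
            if (looksFraudulent(event)) {
                // In the flow described above, the result would be published to a
                // follow-up topic rather than failing the customer journey inline.
                return;
            }
            // Otherwise publish a "fraud check passed" event for the account-build step.
        }

        private boolean looksFraudulent(AccountApplicationEvent event) {
            return false; // placeholder for the real rules or model
        }
    }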

Thomas Betts: Yes. So there’s that benefit for the full developer experience, the full CI/CD and software delivery lifecycle: not only can the software get better, but where we find a bottleneck, we can now target just that, as opposed to working on the whole monolith. So there’s a bit of the good reasons to do microservices: to break it up, to have individual teams able to work on individual processes. And there’s also the fact that you had to redesign it to be asynchronous, I’m assuming. So it used to be: start the process on the customer’s request, and then you’re waiting for that one thread to finish, versus now it can fan out and do parallel work. Is that one of the benefits you’re seeing, that all these things can go independently side by side as opposed to serially?

Muzeeb Mohammad: Exactly, Thomas. That’s what we used to have. We used to have one big, giant monolithic application that handled all of these processes sequentially, synchronously. There is a UI application that was tightly coupled with this monolithic web application, and it needed to process everything step by step. If one of the steps failed, we could not let the other systems process. So with this asynchronous decoupling and microservice architecture, we actually broke this monolithic application into multiple microservices, introduced event streaming via Kafka, and then we saw very good performance.

As I said, the engineering teams work independently on each part of these backend processes, and we are able to deploy more often. Earlier it was maybe every quarter that we used to do a deployment; now it is on-demand deployment. Whenever a team is ready with new capabilities, we do a deployment via the CI/CD pipeline.

Reliability in practice [08:10]

Thomas Betts: And you mentioned SLAs at the beginning. Did you see other reliability and performance benefits from splitting this up, and how was that measured? How did you detect it, and what were the metrics you used to decide that you are now a healthier system and it was worth the re-architecture?

Muzeeb Mohammad: So we have the observability pattern implemented across the different services, starting from the UI screen all the way down to the individual microservices. What we use is a concept called a trace ID, a common attribute that is shared across the stack. If you want to see what happened with one particular transaction, you go to one of the observability tools like Splunk, look at the trace ID, and it gives you the whole history from beginning to end. Splunk is one of our main observability tools, and common industry tools such as Splunk, Dynatrace, or AppDynamics can be used for this purpose. When an issue happens, those tools give you a more granular view of where exactly the problem occurred. Sometimes we go to Dynatrace to drill down and find out where exactly the exception or failure is. So we rely on Splunk, Dynatrace, and AppDynamics.
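
One common way to implement the trace ID he describes (a generic sketch, not the firm's implementation) is to keep the ID in the logging MDC and copy it onto outgoing Kafka record headers, so every service logs the same identifier that Splunk or Dynatrace can then search on:

    // Generic trace-id propagation sketch: one id ties the UI request, every
    // microservice hop, and every Kafka event together in the logs.
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.slf4j.MDC;

    public final class TraceIds {

        public static final String HEADER = "trace-id";

        private TraceIds() {}

        // Reuse the caller's trace id if one arrived with the request, otherwise
        // start a new one, and expose it to every log line on this thread via MDC.
        public static String ensure(String incomingTraceId) {
            String traceId = (incomingTraceId != null) ? incomingTraceId : UUID.randomUUID().toString();
            MDC.put("traceId", traceId);
            return traceId;
        }

        // Copy the current trace id onto an outgoing Kafka record so the next
        // consumer can continue the same trace.
        public static void attach(ProducerRecord<String, ?> record, String traceId) {
            record.headers().add(HEADER, traceId.getBytes(StandardCharsets.UTF_8));
        }
    }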

Financial industry context [09:31]

Thomas Betts: Yes. And that’s all built using OpenTelemetry, I presume, and standards that the industry has come up with: here’s how you do all this stuff with modern cloud-native solutions. Now, we were talking about financial services, and especially in the US, financial services have this old, stodgy mindset of "we use technology and it works". I worked at a bank not that long ago and we still had mainframes, and I think there are a lot of banks that still have mainframes. Those are very much not modern technology. And you’re talking about Kafka and event streams and stuff that’s really taken off within the last couple of decades, versus integrating with stuff that’s several decades old. How do you make those systems work together, and how does the business adapt to transitioning from something so legacy and stable and reliable to this newer technology and newer architectures?

Muzeeb Mohammad: Right. Yes, I think that’s a very interesting aspect, and every organization has been going through this shift. At SEI, we used to have the mainframes for the core processing; the whole checking and credit card platform was built on mainframes. What we used to do is look at the system end to end and see if there is any particular domain we can shift and build outside the mainframe. Very specifically, on the credit card side we took one piece of functionality called new account build. The system still goes through the mainframe, but once the mainframe builds the account, we introduced an MQ layer. The mainframe’s COBOL program publishes that account to an MQ queue, and then on the distributed technology side we developed an MQ listener that listens to those MQ messages, translates them into Kafka events, and publishes to one of the topics.

And we have downstream systems which consume that. There are four to five consumers that continuously look at this newly created account, and we built some use cases on top of that. For example, whenever a customer opens a new credit card, there is a concept called Priority Pass. Instead of relying on the customer manually opting in to Priority Pass, because of this upfront event streaming solutioning we create the Priority Pass automatically. So it’s basically hybrid solutioning, all the way from the UI screen to the mainframes to the outside solution built in a public cloud environment, where this information becomes available to the outside world, and then we create some of these business use cases out of it.
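
The bridge he describes is essentially a small adapter: a listener that receives the message the COBOL program puts on MQ and republishes it as a Kafka event. Here is a hedged sketch of that shape; the queue name, topic name, and the fixed-width payload handling are all assumptions for illustration.

    // Hypothetical MQ-to-Kafka bridge for the new-account-build flow.
    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class NewAccountMqBridge {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public NewAccountMqBridge(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        // The mainframe COBOL program writes the newly built account to this queue
        // (name assumed); we translate it into a Kafka event for downstream consumers.
        @JmsListener(destination = "MAINFRAME.NEW.ACCOUNT")
        public void onMainframeMessage(String copybookPayload) {
            // A real bridge would parse the fixed-width copybook layout into a
            // structured event; this sketch just extracts an id and forwards the payload.
            String accountId = copybookPayload.substring(0, Math.min(10, copybookPayload.length())).trim();
            kafkaTemplate.send("creditcard.account.created", accountId, copybookPayload);
        }
    }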

Event streaming as a legacy migration pattern [12:42]

Thomas Betts: Well, I like that encapsulation of the mainframe: the mainframe effectively emits events. Here’s the data, but an event occurred. So it seems like event streaming and event sourcing are a natural fit for how to use those legacy systems, rather than having something tightly coupled down at the COBOL layer. You look at where the business boundary is, where the business event occurred, and that’s the thing we build our systems around and respond to. Did that mindset help with design and development and requirements, to say, “Hey, this is how we build the systems”?

Muzeeb Mohammad: Yes, we started with that hybrid solutioning in mind, and slowly we are moving towards event sourcing and an event-based mechanism. The idea is that once we have the data available outside the mainframe and once we create the SOR, the system of record, we can gradually modernize the legacy systems over time, as many financial institutions aim to do. That was our main thought behind it. We are moving some of the functionalities outside the mainframe, and once some of the critical functionalities become available using distributed technologies like microservices, event streaming, and observability patterns, then over time the organization can continue modernizing the legacy systems as the distributed architectures mature.

Thomas Betts: And do you see event streaming as a way to help move away from that, so you can get to “this event occurred” and it doesn’t rely on the technology implementation? If the checking account was created, you shouldn’t care that “checking account created” means this record in the mainframe exists; it’s just that the checking account was created. So is that going to help with the future migration plans?

Muzeeb Mohammad: Exactly. Exactly, Thomas. One of the benefits we are seeing with this event streaming is that we have the capability to look back at the whole history of events. For example, a customer opened a checking account and then started depositing money. For each of the actions the customer performs, we are relating those actions to events, and then there is the capability to look back at what exactly happened for that particular customer’s interactions with the system. So event sourcing and event streaming are really helping us, not only in creating the SOR, the system of record, outside the mainframe, but also from the availability and replay standpoint. All these capabilities are really helping. In particular, whatever history of events happened, we are able to replay those events.
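
The replay capability he mentions falls naturally out of Kafka's retention model: a consumer can seek back to the beginning of a topic (or to a timestamp) and re-read the retained history of events. A bare-bones sketch, with the broker address and topic name assumed:

    // Bare-bones replay sketch: re-read every retained event on a topic from offset 0.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EventReplay {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "replay-" + System.currentTimeMillis()); // fresh group: no committed offsets
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("customer.account.applications"));
                consumer.poll(Duration.ofSeconds(1));              // join the group, get partition assignments
                consumer.seekToBeginning(consumer.assignment());   // rewind to the earliest retained offset
                ConsumerRecords<String, String> history = consumer.poll(Duration.ofSeconds(5));
                history.forEach(r -> System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value()));
            }
        }
    }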

Thomas Betts: And so you mentioned a system of record. Do you also have a system of reference? I know one model is: we’re doing this gradual transition, so what’s in the mainframe is replicated into a system of reference, but it’s not the source of truth. We still treat the mainframe as the system of record, but you can then start gradually transitioning using the strangler fig pattern: we use the system of reference to query stuff, but now for our updates, instead of writing back to the mainframe, we start writing into the new system of record. Is that on your roadmap for how to make that transition happen?

Muzeeb Mohammad: Exactly. That is our complete roadmap. In our area we have implemented maybe five critical business processes using this event sourcing and event streaming, and as I said, the data is getting translated into the system of reference very nicely end to end. We are aiming for the distributed-side solutioning to become the system of record; that is our end goal. There are also a few patterns we implemented apart from streaming the data outside the mainframe. There are tools for CDC, change data capture. Whenever a change happens on the mainframe, for example in IMS, which is one of the mainframe databases, this tool streams the data, and we have reconciliation patterns. What we do is take the CDC-streamed data and the system of reference data and compare them; we have some of that solutioning implemented using Java and Spring.

Throughout the day, we run a reconciliation process. We compare how the data looks on the distributed cloud environment side with what’s on the mainframe side, and we generate a report out of it. Essentially, we are looking at the quality of the data. So far, we have seen very positive results; the data is reflected pretty much accurately in both places. Once we get more confidence, we can gradually transition additional processes to the distributed architecture.
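
As described, the reconciliation job boils down to comparing the CDC-streamed mainframe records against the distributed-side records and reporting any drift. A simplified Java sketch of that comparison follows; the record shape and the way the two data sets are loaded are assumptions, not the actual implementation.

    // Simplified reconciliation sketch: flag accounts whose mainframe-sourced
    // snapshot (via CDC) does not match the distributed system of reference.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ReconciliationJob {

        record AccountSnapshot(String accountId, long balanceCents, String status) {}

        // In the setup described, one list would come from the CDC stream of IMS
        // changes and the other from the distributed-side database.
        public List<String> reconcile(List<AccountSnapshot> mainframeSide, List<AccountSnapshot> distributedSide) {
            Map<String, AccountSnapshot> distributedById = new HashMap<>();
            distributedSide.forEach(s -> distributedById.put(s.accountId(), s));

            return mainframeSide.stream()
                    .filter(m -> !m.equals(distributedById.get(m.accountId())))
                    .map(m -> "Mismatch or missing record for account " + m.accountId())
                    .toList(); // the real job would feed these lines into the daily report
        }
    }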

Stakeholder buy-in for a new architecture [17:59]

Thomas Betts: Yes. So it’s definitely a gradual transition. How do you get all of your stakeholders to buy in to what is not a quick win? It’s not “we have to do this and it’ll be done on Tuesday”; this is going to take a while. And how do you build the confidence that we’re measuring these two systems getting more in sync, and now we’re at a point where we can trust it? How do you get the product owners and the business sponsors and other people who aren’t just on the technical side to all agree on the process and how we do it?

Muzeeb Mohammad: Right. For some of our stakeholders, like the product owners, tech managers, and executive directors, we have shown them some of the successful implementations. For example, the new checking account we built using this event streaming solution outside the mainframe has become very successful. And we’ve also built confidence by running through this reconciliation process. So there are two to three critical solutions implemented outside the mainframe. One of the big advantages, and one thing our leadership team really likes, is that with this new solution we are building outside the mainframe, we keep adding additional capabilities and the time to market has improved a lot. Whereas on the mainframe system, if you want to incorporate any changes, there is a very delayed, lengthy process involved. So our leadership team and our stakeholders really love to see new capability implemented in that faster way.

With the solutions we have implemented so far, they really loved it, and they want to move forward with some of the other critical processes as well.

Motivation for architecture modernization [19:45]

Thomas Betts: Well, that leads into my next question, which is: what were the big motivations for this? I think people hear mainframe and think, “Well, that’s old technology. We want to get off of that”, but that’s somewhat the engineer’s perspective of “can’t we just use the new shiny thing?” But there’s also the business aspect. There aren’t a whole lot of people who still know how to program COBOL, and there are just good business reasons to start moving away from that. You touched a little bit on wanting to add new features and functionality so that it doesn’t look like we’re still working in the 1980s, and you can’t do that quickly. What were the other factors? You mentioned reliability and SLAs not being met. What were the other factors that said, “Hey, this is when it’s time to actually go through this process and do this major migration”?

Muzeeb Mohammad: Yes, I think one of the major challenges we were seeing is continuous integration and continuous deployment. That is one of the major things lacking on the mainframe side. On the distributed side there are a lot of tools, and continuous improvement is happening. When a developer writes code, there is thorough end-to-end solutioning already in place: from the moment the developer commits the code, to executing the automated functional tests, to running code scans, to deploying into some of the cloud regions in a blue-green way. After the code gets deployed to one of the regions, we automatically execute the functional tests, ensuring the new code meets the expected behavior, and then turn on the traffic to the newly deployed code. But this pattern, this solutioning, is missing on the mainframe side, so we are struggling. Even to deploy, there is a lot of downtime.

There is a concept called packaging, and even the CI/CD part is quite difficult on the mainframe side. So the main advantage we are seeing relates to our main goal, which is time to market. We want to continuously add additional capabilities and speed things up, whereas with the mainframe we have some challenges, and that is the main motivation to move towards the distributed side.

Thomas Betts: Was security also a factor? I know mainframes have a different security posture. They’re safe in some ways because they’re not the same stuff that’s hackable. It’s not, “Oh, here’s the new Linux and everyone knows how to get into it”, or whatever NPM package. So they’re sort of shielded in a way because they’re legacy, but they’re also notoriously difficult to upgrade. And even if we’re not talking mainframes, if we’re talking about other systems that are just 10, 20 years old, those things need patching. Is there some benefit to doing modern microservices, and how do you make sure that you’re not introducing new security issues because, while you’re not on the bleeding edge, you’re closer to modern technology? How do all those factors come into play with building these new systems and keeping everything secure?

Muzeeb Mohammad: Yes, definitely. I think the mainframe has been amazing from the security standpoint; historically it was proven. I think we never… Continuously, the mainframe has done a fantastic job. One of the things we are doing on the distributed side is introducing environment as code. It’s one of our new offerings. What we do is write code to spin up a new environment, and behind the scenes we are using an open source tool called Terraform.

What happens is we keep adding security policies on top of this environment code. From the developer’s standpoint, the developer just focuses on implementing the business code, but there are a couple of departments dedicated to creating these security policies on top of the platform; we call it environment as code. So the security processes are continuously evolving, and we implement them at the platform level. On the engineering side, the engineers are also responsible for exposing some of the capabilities using, say, an OAuth-based mechanism. For each API that we expose, there is a very thorough process involved: each API needs to go through an API governance process, and as part of that we ensure we have this OAuth-based mechanism.

And we look at each use case independently and uniquely. There are various patterns we follow, not just OAuth. There are some other patterns; sometimes, depending on the business use case, we go with a service-based mechanism as well. So there is X.509 certificate-based, OAuth-based. We are continuously evolving on the security front. One part is what we implement at the platform, environment-as-code level, and the other is looking at each aspect and asking what best practices we can implement.
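
For the OAuth-based mechanism on exposed APIs, a typical shape is a resource server that validates a bearer token on every request. Here is a generic Spring Security sketch of that idea; the path, scope, and the issuer configured in application properties are placeholders, not the firm's actual governance setup.

    // Generic OAuth2 resource-server sketch: every exposed API requires a valid JWT.
    // Assumes spring.security.oauth2.resourceserver.jwt.issuer-uri is configured.
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.Customizer;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.web.SecurityFilterChain;

    @Configuration
    public class ApiSecurityConfig {

        @Bean
        SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
            http
                .authorizeHttpRequests(auth -> auth
                    .requestMatchers("/accounts/**").hasAuthority("SCOPE_accounts.read") // placeholder scope
                    .anyRequest().authenticated())
                .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
            return http.build();
        }
    }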

The paradigm shift from building standalone solutions to having a platform [25:16]

Thomas Betts: Yes. And I think that’s a good case study that these aren’t bleeding-edge new ideas. These are established patterns you can go out and find: here’s how we build a good platform, here are the benefits of doing that. The platform now handles widespread security updates and everyone gets them, as opposed to each team having to individually go and patch things. But I think that’s also a lesson for other companies that are still at the “well, we have legacy systems that are working, how do we make the transition?” stage. Was setting up all of those platform teams and creating build servers and CI/CD and infrastructure as code a huge lift within the financial services industry, or is that something that everyone’s just doing now?

Muzeeb Mohammad: It’s a huge shift, Thomas. I absolutely agree with you. Earlier, software engineers used to focus just on the software requirements, the business requirements, and implementing those solutions. Now the software engineer is responsible for looking after the security aspect as well as the infrastructure provisioning. At SEI, the engineer is responsible for the end-to-end solutioning, starting from platform creation through to the security standpoint, and we spend a very good amount of time on security. We look at the solutioning end to end, not just one particular small component, checking whether the security practices are implemented thoroughly or not. For example, whenever we see a business use case implemented end to end, we check whether any PII data is being used. If there is PII data, we enforce PCI compliance processes. In regulated financial environments, PCI compliance processes ensure that sensitive data is properly encrypted and properly shared across the end-to-end solution, not just one system.

Each of the systems needs to be PCI certified. That is one way of doing it. The other one is that we mask the PII data across the systems. Whenever information is shared, data in transit or data at rest, we ensure the data is masked from the logging and observability standpoint. So my ultimate point is that the engineer looks at every piece of solutioning very carefully, keeping security at the forefront, always checking whether the business use case involves any PII data, ensuring the regulatory processes are thoroughly followed, and we do that consistently across the different systems.
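
The masking of sensitive data before it reaches logging and observability tooling can be as simple as a redaction utility applied to every message on its way out. A minimal sketch follows, with the pattern for card- or account-like numbers chosen purely for illustration:

    // Minimal masking sketch: redact long digit runs (card/account-like numbers)
    // before a value is logged or forwarded to observability tooling.
    import java.util.regex.Pattern;

    public final class PiiMasker {

        // 13-19 consecutive digits, roughly what a card or account number looks like.
        private static final Pattern CARD_LIKE = Pattern.compile("\\b\\d{13,19}\\b");

        private PiiMasker() {}

        public static String mask(String message) {
            // Keep the last four digits so records can still be correlated by support staff.
            return CARD_LIKE.matcher(message)
                    .replaceAll(m -> "****" + m.group().substring(m.group().length() - 4));
        }
    }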

AI trends in financial services [28:01]

Thomas Betts: So just like security and all this new platform stuff is coming into financial services, we have to ask about AI, on every episode of the podcast, I think. Where does AI fit into financial services? Is it something that everyone’s pushing for, a “we need to show that we have AI”, or is there a trend to stay away from it and let it mature a little until it’s a bit safer? And how do you add AI into the product, or into how you’re building the products, and how do you do that safely and securely and ensure you’re achieving whatever goals you’re hoping for by incorporating AI?

Muzeeb Mohammad: In one organization I worked with, we explored some proofs of concept. One of the ideas we started exploring is anomaly detection. Looking across the various metrics in tools like Splunk and Dynatrace, and on the UI side, takes a lot of engineering effort. So there is a proof of concept we are trying to come up with: we are building an AI model that continuously gets the various observability metrics. Logs are one of them, traces are another, and we feed this information to the AI model. We also spent a good amount of time fine-tuning and selecting a very specific model that really does a good job in terms of site reliability engineering. So we have chosen one particular model, and we are continuously feeding the different observability metrics to it.

The model is doing a lot of analysis and giving different views, like where the problem is. It gives us very clear information about where the problem actually occurred, because if you look at our end-to-end solutioning, there are close to 50-plus microservices involved, and an engineer would need to spend a very good amount of time looking at each system’s logs to find out where exactly the problem lies. But using this AI anomaly detection model, as per the initial results, it is looking very promising. The engineer straight away knows where exactly the problem is based on the model’s analysis output. In exploratory pilots I’ve been involved with, early results have been promising. Likewise, we are also looking at other areas as well. One more proof of concept we are doing is more about introducing dynamic security policies.

Some of these policies currently live at the microservice code level, so we want to see if we can move the security policies up to a common platform level. An AI model continuously looks at the various incoming payloads and, based on that, dynamically applies some of the security policies at the platform level. That is another proof of concept we are doing, but so far the early exploratory results have been encouraging.
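
At its core, the anomaly-detection proof of concept he outlines is a model watching a stream of observability metrics and flagging values that deviate from the learned baseline. The toy sketch below uses a rolling z-score as a stand-in for whatever model the team actually chose; the window size, threshold, and metric are invented for illustration.

    // Toy anomaly detector over a single metric stream: flags samples more than
    // three standard deviations from the rolling mean. A stand-in for a real model.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MetricAnomalyDetector {

        private final Deque<Double> window = new ArrayDeque<>();
        private final int windowSize;

        public MetricAnomalyDetector(int windowSize) {
            this.windowSize = windowSize;
        }

        // Returns true when the latest sample looks anomalous relative to recent history.
        public boolean isAnomalous(double latencyMillis) {
            boolean anomalous = false;
            if (window.size() == windowSize) {
                double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
                double variance = window.stream()
                        .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
                double stdDev = Math.sqrt(variance);
                anomalous = stdDev > 0 && Math.abs(latencyMillis - mean) > 3 * stdDev;
                window.removeFirst(); // slide the window forward
            }
            window.addLast(latencyMillis);
            return anomalous;
        }
    }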

Anomaly detection can be for engineering as well as finance [31:21]

Thomas Betts: Yes. So you’re talking about anomaly detection for anomalous behavior of how the system is performing to say, here’s a bug. When somebody says anomaly detection with financial systems, I’m thinking fraud or there was an unusual card transaction, but you’re not looking at that specifically. You’re looking at this from the engineering side, like how to make sure the system is behaving and something went weird and a human might not notice it, but the model you’re creating is able to detect, this looks strange.

Muzeeb Mohammad: Exactly. Exactly, Thomas. Yes. Internally at SEI, we brainstormed some of the use cases, including the one you highlighted, a fraud detection system. Yes, we have given that some thought. It’s not only fraud detection; there are some other use cases we are exploring. We are at the exploring stage, but the kinds of results these two POCs gave us provide very good confidence. Eventually we will be implementing the business-driven use cases as well.

Looking ahead for financial systems [32:23]

Thomas Betts: So I think that leads us to: let’s just look ahead five, 10 years. Since you’re in financial services, where do you see software changing? Do you see us moving more into these event-driven services, and that’s catching up with the industry? And will AI be more involved? What are your predictions?

Muzeeb Mohammad: If we talk about the next decade or so, I think one of the major shifts happening is on the software engineering side. Earlier we used to spend a good amount of time to develop, build, and deploy one particular business-driven solution. But now speed to market has drastically improved. We have some tools internally, and engineers can start using some of these AI tools to speed up everything from application development all the way to deployment. So that is one aspect that’s continuously evolving with this new AI revolution. Time to market is one aspect.

Over the next decade, you’ll see a significant change. We have also started looking at the advantages of AI capabilities, and very soon we will be implementing some of the solutioning for business-driven use cases. So far we have implemented it at a low-risk level, but in the next decade or so, you will see a lot of business-driven use cases implemented in the financial sector, starting from checking accounts all the way to, there is a concept called taxonomy.

When the customer applies for a credit card account or a checking account, they get to know where exactly their request is. Right now, the whole thing happens in the backend within the bank, but the customer will get more visibility into where exactly the request went and where exactly it is stopping. You will get the whole insight. So my ultimate point is that maybe in the next couple of decades, you’ll see a lot of business-driven use cases implemented using AI.

Thomas Betts: Well, I like how that circles back to our opening discussion about how having event systems makes it more visible to see what has happened. Surfacing those events to the customer can also be useful: “Here’s a better customer experience, here’s a better product experience”, and it also helps design and architect the systems. So I think that’s a great place to wrap it up. Muzeeb, thanks again for joining me today on the podcast.

Muzeeb Mohammad: Thank you so much, Thomas. It’s a pleasure.

Thomas Betts: And listeners, we hope you’ll join us again soon for another episode of the InfoQ podcast.

