Test Smarter, Not Harder: Achieving Confidence in Complex Distributed Systems

News Room
Published 1 September 2025

Transcript

Nogueira: My name is Elias. I’m a principal engineer, a Java Champion, and an Oracle ACE for Java. I work on Java Magazine, Netherlands edition. A fun thing about it is that we send out the printed magazine there. We do have the digital format, but we send everyone the real magazine, so you can smell it, feel it, put it aside, and read it on your Kindle later. It’s a nice feeling.

Nowadays everything is digital, so it’s a nice feeling to receive the magazine itself. I’m Elias Nogueira everywhere: Twitter, LinkedIn, Bluesky. The talk will focus on testing, and there is an organization to it. First, I will talk to you about the problem, then how to solve this problem. I will show you code examples, and then I will show the takeaways.

Architecture

Just to give you context, this is exactly what I’m working on right now. We do have digital platforms in the company I work for. We do banking. We have onboarding, lending, business, assist, and engage. I’m working in the middle of that, digital banking. Basically, the full platform is APIs. We do have common APIs, enterprise integrations, a lot of different stuff. Through the APIs, of course, we also provide the frontends, web or mobile. Basically, what we do, in general: we are a banking system where, if you want to open a new bank, you can buy all the services from us, except the core banking system, which is normally developed in-house or bought from someone else.

The rest of the full platform we can deliver out of the box, and in weeks you can have a bank operating normally. This is basically what we do. We do, of course, have these connections below with the core banking system, fintech partners, and open banking, which we don’t control; it’s just open for integrations. That’s basically the business view. Zooming in on digital banking, we have a lot of different APIs. Everything you can think about in banking, we have as APIs for retail customer banking, like normal banks or retail banks. You will see, for example, approvals.

That is not a common thing when we are in a bank and think, ok, I want to make a transaction, pay my credit card, something like this. Approvals is more business, so it’s a different flow, but it’s still banking. Those are only a few of the APIs that we have. Of those, I will be zooming into payments. This is not the entire payments part that we have; this is a super-thin view of it. I want to explain this process to you, more or less, because it will be the basis for the technical code examples in the talk.

Payments is actually not only one service, we know that. Not only the three services in blue, we have way more, but this is just for explanation. We have what we internally call a payment order, which is basically the orchestration of everything when we make a payment. Then we have some options. We can configure, for example, whether the payment is ACH, or SEPA, or other kinds of payments. We have a batch process as well for payments. Every time I make a payment, the payment system emits an event for a different purpose. Here I’m explaining that I will submit an event for an event listener.

The audit service will consume that, and we collect all the metadata that is necessary to keep track of all the steps and everything we are doing. When I’m doing a payment, if it’s retail banking, I have an approval process, and from the approval process, I have limits. There are some limits set, for example, for batch payments. I cannot approve a batch payment where the batch total is more than a million. There are a lot of business rules for that. For batches, for example, there is a business calendar. I cannot process the batch and make the batch payment on a public holiday, for example. We have those restrictions. For this full platform, we have specific integrations with different core banking systems, which means more layers of testing.

Testing Challenges on The Architecture

Zooming in on payments only, which is just one group of microservices we have, we have a lot of things to test, and it’s complex. We could spend the entire talk on how to approach testing here. What we are going to look at are the challenges. The most challenging ones are, one, testing with real dependencies. Why is that? If I’m focused only on payments, everything that is not in blue is a hard dependency for me. I need to figure out a way to test that. Two, supporting multiple databases. As we ship software to customers, customers can have different requirements for a database, different databases. It’s a challenge as well: how can I run the tests against all the possible databases that I support? Then, mocking dependencies.

Of course, I need to test with the real dependencies, but at lower levels, I should run through mocks, just to speed up my process; at some point, I will run through real dependencies, but it’s also a challenge. Testing asynchronous requests: every time I send an event to a topic, I need to test that — if it’s there, if it was consumed, if the proper service consumed it. API governance, because one team, for example payments, owns at least the six different services in this list; there are integrations, there are different APIs, and you should have governance over that. We could talk about more challenges, but I will focus technically on only three: supporting multiple databases, mocking dependencies, and testing asynchronous events.

Coding Example

To start, the coding example. I will have a payment system, a super simple one. What I’m going to do with this payment system is that I have a payment request. Say you get a beer from here, for example, and then you need to pay for that beer. The payment request is: ok, you have a POS to pay, and you get a card, and you pay. I am simulating a transaction for that. When I do this transaction, I will check an antifraud system, a third-party one. Then, I will also emit some topics and events for this audit service, just to record all the things that I’m doing: if I have a fraud, if I’m just making the payment, if I have an error during the transaction, whatever. The code example is super simple and silly. The main goal is not creating the best system ever, but just giving you the idea of how we can do these things. Don’t judge me about my code, it’s super simple. Everything will be mostly focused on those three challenges.

One Application, Multiple Databases

One of the problems we have is that we need to support multiple databases, and that becomes complex in different ways. We know that modern systems must support different databases. Nowadays, I’m basically not seeing any company that says, I only support this one. In terms of running infrastructure, yes, but when we have some redundancy, like a plan B, we can route to another database, even from a different provider. When you need to test that, of course, your microservice will have a database. For integration tests, normally, when we develop, we run in-memory, because it’s faster.

Then you don’t get a fraction of possible errors that you can have in a real database. It’s a problem. It’s a real problem running through an embedded database. Even though I’m creating a strategy to run it through multiple databases, if I think about running it sequentially in a CI, or even in my local machine, whatever, I’m wasting too much time, because it will be exponentially more time when it needs to cover more databases. There is this real common problem of, in-memory database doesn’t show any real problem with MySQL versus Oracle, when you have different types. When we need to implement that, a lot of people try to do it manually, and it’s a totally bad thing. Nowadays, we have a lot of different tools to help us out, but still, I see some people, ok, I need to support multiple databases. They do a lot of manual steps to support that, even during the testing process. It’s really bad.

How can we solve it? By creating a parallel strategy to run these databases, and supporting multiple databases in the application. What are we going to do? For example, my microservice now needs to support at least three databases: MySQL, MSSQL, and Oracle. I will show how to do that. I will exclude Oracle, because if I run Oracle here, not in a cloud instance, it will be a little delayed; it will take some time. I want to show first the application running through the in-memory database, then how we can switch to a normal database, and then how we can parallelize that. This is my IDE; this will be basically Java. I have a Spring Boot application.

As everyone does, I have an in-memory database with H2. It’s the default, and it works really nicely. Of course, I have the driver, specifically an H2 driver. My dialect for Hibernate is also H2, because there are some specifics to the different database implementations, of course, different vendors. I have my integration test; I will just comment this out. I have a super-simple integration test here, where I’m creating a payment, saving it, and making a request for it. Did this request succeed? As soon as I save it in the database and make a request, can I see the same data there? A super-normal test to do, an easy one. I also have different tests for things like save all, find all, and failure cases. If I run all those tests, they run ok, and fast, because it’s H2, an in-memory one, so around 300 milliseconds. Everything here will be H2. What’s the first step? Switch to a real database.

My advice, of course, is to switch to the one that is the main one in your context: if it’s MySQL, it’s MySQL, if it’s Oracle, it’s Oracle, whatever. How can we do that in a simple way? Can you see this blue icon? Testcontainers. Testcontainers is a library that provides containers for different applications, and they can actually mimic the real dependency. What I will do here, instead of installing MySQL on my machine and running it there, or creating an instance in the cloud and connecting to the cloud, is use it through a container. With Testcontainers, through Java code, or any programming language that they support, I can just write a few lines, saying: I want a container for MySQL. Then, start it.

You don’t need to say that, but as soon as you have this instance, it will start up the container, initialize it, and you have the database. You can make the connection directly to the database, and then you have a database. Every time you run the application, Docker will start that instance. You have it ready. You run your test. As soon as it ends, it kills that container. It does this over and again. It removes the real dependency you need to have for, not real dependency, but hard dependencies about databases.

How can I do it here? Super simple. In Java terms, if you’re using Maven: the Testcontainers dependency, plus the specific Testcontainers module for that database — there is a specific one every time — and, of course, the connector to make sure my application can connect to that database. Super simple. I think most of you have done this. I create a profile here called test. I’m saying my JDBC URL is jdbc:tc — Testcontainers — so I’m leaving all the responsibility to Testcontainers: it’s MySQL, on this version, and my database is called payments. Then I have my driver and my dialect set to MySQL. Simple as that. This is a test profile, so when I come back to my test and run it, Spring will do some magic. It will look at that application test file and load the database: it’s Testcontainers, MySQL.
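A sketch of what that test profile could look like, assuming a file such as src/test/resources/application-test.yaml; the MySQL version and database name follow the narration, but the exact values are illustrative:

```yaml
spring:
  datasource:
    # The "tc" prefix delegates the database lifecycle to Testcontainers
    url: jdbc:tc:mysql:8.0:///payments
    driver-class-name: org.testcontainers.jdbc.ContainerDatabaseDriver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQLDialect
```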

Testcontainers will initialize that database. Then, my integration test will save and do all its work on MySQL, not on H2. “Elias, I don’t trust you.” I will run the test again. If you watch, it takes a little more time, and says: Testcontainers, MySQL, creating container, starting the container, doing a lot of stuff, and then, test executed. Remember the H2 one? It was almost 300 milliseconds for the tests. Of course, there is this startup time for the container. Once the container has started, the total execution time doesn’t change, because it’s the same thing. It doesn’t change because of the database, but now I’m using a real database behind the scenes. Again, MySQL. I can see that MySQL is here and being initialized, then it’s running the tests. Now I need to support multiple databases. I have one right now for testing. How do I support multiple ones?

The first thing we need to do, in Spring at least, is have different profiles for different databases. I have MySQL, MSSQL, and I still have H2. The only difference with MSSQL, from the Testcontainers perspective in my application configuration, is: ok, Testcontainers, SQL Server, this version, this database, and the dialect for Hibernate. Now I have all the supported databases, but I need a strategy to run them, and to run them differently. One of the possible things we can do, in terms of Java and Maven, is create different profiles. What I have here are profiles for MySQL and MSSQL, saying: Spring profile, activate mysql or mssql. Just for understanding, this will activate the file application-<name>, and then Spring will load all the configurations from that application YAML.

Then I will start up my database as MySQL or MSSQL. For the test itself, now that I have the profile, there is one change in my payments integration test, which is this: instead of hard-coding the profile, I’m saying, I want you to take whatever profile is active. The default one is H2, just to speed up the process. If I run the command passing the profile, -P mssql, and focus on that integration test, you will see at some point the container trying to connect to SQL Server. Then it’s starting my database on SQL Server, and now it’s running my tests. Three tests successfully run. Super easy, actually.
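Those Maven profiles could be sketched in pom.xml like this; the profile ids and the property used to select the Spring profile are assumptions based on the description:

```xml
<profiles>
  <profile>
    <id>mysql</id>
    <properties>
      <!-- selects application-mysql.yaml at test time -->
      <spring.profiles.active>mysql</spring.profiles.active>
    </properties>
  </profile>
  <profile>
    <id>mssql</id>
    <properties>
      <!-- selects application-mssql.yaml at test time -->
      <spring.profiles.active>mssql</spring.profiles.active>
    </properties>
  </profile>
</profiles>
```

With that in place, something like mvn verify -P mssql would run the integration tests against SQL Server.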

Now I have another problem: I ran one database, but I need to run multiple databases. You won’t run this on your local machine; you will run it in a CI/CD process in some tool. One thing we should avoid is running sequentially: if the integration tests take, say, 3 minutes per run, running sequentially across everything can easily add up to 90 minutes before CI tells you everything is fine. And if MySQL, for some reason, is not available, and the pipeline is not smart enough, Oracle won’t run, because the process failed in the middle. It’s not a good approach. A good approach, of course, is to run in parallel, where I start all the databases at the same time. It’s not exactly the same time, because it’s parallel, so you have some delay.

Then I can have roughly the same total time regardless of the number of databases. Different CI/CD tools have different ways to do that. What I will show you here is my GitHub workflow; I’m running it on GitHub as well. In GitHub Actions, I can have a job like integration test that depends on my build process first. Of course, first I build the project. Then I run my integration tests. They have a super-nice feature called matrix: I can run things in parallel based on the data I have in a matrix. In my matrix, I have MySQL and MSSQL. Then, when I run the Maven commands, I’m saying: profile, get it from the matrix data. I have my GitHub Actions here, with my build and test. I run the workflow manually, just to show you that it works. It’s building. It will take 10 seconds. Then, the integration test job will run with two databases.

More than 10 seconds, in Java. It takes some time. Now it’s running my integration tests on two databases in parallel. You can see that there’s basically no code at all: only a few things in my pom.xml creating profiles, duplicating my application profile in Spring to make sure I have the different databases, and setting what I need to focus on. For parallel execution in CI, GitHub Actions has a feature for that, and I’m just using it. Of course, other programming languages and other setups might have the same thing: you need multiple databases, a way to start those databases, and the CI/CD will be basically the same, with maybe just different commands. For example, MySQL: it’s running on MySQL. If I search the logs for MSSQL, I can see that the container is actually using that. The same for MySQL.
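The matrix setup described above could look roughly like this in a GitHub Actions workflow; the job names and versions are illustrative:

```yaml
integration-test:
  needs: build          # build the project first, as in the demo
  runs-on: ubuntu-latest
  strategy:
    matrix:
      database: [mysql, mssql]   # one parallel job per database
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-java@v4
      with:
        distribution: temurin
        java-version: '21'
    - name: Run integration tests
      run: mvn verify -P ${{ matrix.database }}
```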

Takeaways for this. You must run integration tests against multiple databases if you need to cover multiple databases. Or, if your integration tests cover other dependencies that you’re consuming — if you support different queues or anything — you can take the same approach: instead of the database, just change the focus to what you need. Parallel execution is a must nowadays, because we want to know whether our tests are failing and whether our application is reliable enough for the next step: deploying to production, or just waiting for a click to deploy to production. Using containers, specifically with Testcontainers — you can use Podman, whatever — brings you closer to production with that database, or those dependencies.

Third-Party Systems – Fragile Dependencies in Tests

The second thing is about third-party systems. These really create a fragile dependency between your test and what it needs to test, because we don’t actually own them. Maybe it’s online, maybe it’s offline, maybe the data has changed. You don’t have any control, and you have to manage that. For sure, you are facing or have faced this problem before. Remember that every time I do something in payments, I send a message to a Kafka topic with some metadata, and someone will consume it. What I will try to mimic is the consumption, which I will show later. The problem with this: normally, we mock internally in integration tests.

In Java and Spring terms, with MockMvc, Mockito, whatever, we mock that dependency. Sometimes we also use a sandbox/QA environment for this third party. Both, from my experience, are not that good. The mocks that we use in the teams are nice, of course, because they run, but imagine that antifraud system. It’s not only payments, one of my teams, using it; the entire company is using the same service. Then the developers must implement basically the same mock in the code in 23 different teams. If there’s one change, 23 teams need to change everything.

If I have another change, 23 teams need to do it over and over again. It’s a workable approach, of course, but when this third-party system is used widely in your company, we need another way. We cannot rely on third-party systems; we need to create something better. The tests that rely on them will be flaky, slow, or both. Local mocks make it consistent, but only for one single team. It’s limited to that team, while other teams need to use the service as well.

How can we fix it? We have something really cool in the market called service virtualization. We do this with WireMock, and also with Docker. What is service virtualization? It is basically simulating the behavior of external systems, instead of relying on real services. Teams can use lightweight stand-ins that mimic the real system’s behavior. This is super beautiful; ChatGPT actually generated this image for me. Imagine that this service virtualization stands in for your third-party system: it will reply to you as the third-party system would.

Then you can have all the teams connect to that, with one source of truth for the mocks. Since these are mocks, and nowadays we deploy everything on Kubernetes, or at least use Docker, you can easily create a Docker container with WireMock or any other provider. You can run it locally in your Docker environment, or you can deploy it on a Kubernetes infrastructure, make it available for everyone, and everyone connects there. Easy. Or you just deploy a Docker image somewhere, teams pull the image, start it locally, and run all their tests against that mock. It’s different from the coded mock, because you’re centralizing the mock. If you need a fix that will impact everyone, you do it once there, and then everyone has the fix.

Service virtualization will replace real calls. When you use WireMock, it’s super simple to do that. We can share the mocks across the teams, and it enables reliable, repeatable integration tests, without using the third party’s sandbox. What I will do with this antifraud check system is create a container around it, and I should know the OpenAPI specs for the API. Then, based on my test conditions, I can build what I need to execute.

How can we do that? Here I have a normal test, the kind I said I don’t recommend. I’m using Mockito here. In the setup, I create a payment. Since it’s a third-party system, there is configuration to hit the normal endpoint: when I need the URL, use some endpoint, some API key. Then, I’m saying I want a valid payment with no fraud detected: when the exchange receives any string, and the request is a GET with anything inside, it should return that non-fraudulent response.

Then return OK. The response for that has isFraudulent false, and the message returned from that service is “transaction approved after successful fraud check”, so I can make that payment. I have exactly the opposite, with “transaction flagged as suspicious” and isFraudulent true. I’m just mimicking behavior; it’s not real. It’s important to have these as tests, because they make it safer when I make any kind of change in my controller, my service, whatever. If you run this, it will run without any problems. Now imagine that I do want to connect to a vendor. I configure my sandbox-qa-vendor.com, this specific port, and this specific API path, with X thing.

If I try to run this, it will fail because I don’t have a connection to the sandbox. This is a common issue, because you can create a test like this — it’s nice, it’s fine, you should have a test like this — but then the third party is down and it cannot connect. I get an exception here that something weird is happening, like connection refused, unknown host. This happens a lot.

How can we fix this? I can use WireMock in a container. First, before the container: in WireMock, we can define stubs in Java code or in JSON files. I really recommend JSON files. You have the mappings, which describe the request that should come in, and then another directory, called files, which holds the responses — what it should reply with. When I want to check success first, what I’m mapping here is: when there is a GET request for this fraud-check API, with query parameters, any amount, any transaction ID, and this API key, then reply with 200 and take the content from this file, the successful one, which returns fraudulent false, transaction approved.
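A sketch of such a success mapping as a WireMock JSON stub; the path, parameter names, and header are assumptions reconstructed from the narration:

```json
{
  "request": {
    "method": "GET",
    "urlPathPattern": "/api/fraud-check.*",
    "queryParameters": {
      "amount": { "matches": ".*" },
      "transactionId": { "matches": ".*" }
    },
    "headers": {
      "X-API-Key": { "equalTo": "test-api-key" }
    }
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": {
      "fraudulent": false,
      "message": "Transaction approved after successful fraud check"
    }
  }
}
```

The fraud case would be a second mapping with an exact `equalTo` match on the amount, replying with `fraudulent: true`.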

Then I have the other one. When I want to simulate a fraud — when the amount is this one and only this one — reply with fraudulent true, suspicious. You control the behavior with parameters: when the transaction ID is X, when there is a combination of this transaction ID and this amount; you can do a lot of things, but you need to define those conditions in terms of your tests. It should be connected to your tests. Now, what I have here is just a Docker image: I’m using WireMock, I have a name for the container, I’m exposing port 8087, and I’m doing the internal mappings for WireMock. I could run it standalone, but I’m saying: take my files and map them into the container’s files directory, and my mappings into its mappings directory, with the normal entry point. How does it work? I run this container from here. It’s running; it just started. Now it’s on localhost.
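That container setup could be expressed in docker-compose form like this; the image tag and host paths are illustrative:

```yaml
# Centralized WireMock stand-in for the antifraud API
services:
  antifraud-mock:
    image: wiremock/wiremock:3.9.1
    container_name: antifraud-mock
    ports:
      - "8087:8080"   # expose on host port 8087, as in the demo
    volumes:
      - ./wiremock/mappings:/home/wiremock/mappings   # request stubs
      - ./wiremock/__files:/home/wiremock/__files     # response bodies
```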

My integration test that ran against the sandbox is the fraud check integration test. I will change this to localhost. The API key is still the same. Now the magic happens: the tests are running, because WireMock is replying instead of the third-party dependency. If the service is not available, you route to WireMock; if the service is available, you just consume the service. You can remove or fix the dependency. The only thing, which is not really a drawback, is that you need to create those mappings based on all your test conditions, because you know how you need to test: for a condition like “I need to see a fraud”, you need to define the condition, like this specific amount.

In my tests, of course, it’s not real, but there you have it. What we actually do in the company is have a mock server, which is a super-huge container. It’s basically mapping files: we have different folders for different third-party dependencies, with all the mappings. We deploy this on our infrastructure; we have it offline as well. All the teams can use it now without writing their own mock code, like Mockito or MockMvc; they can actually hit an endpoint. We can even simulate latency and more when making requests. It’s really helpful, because when we need to change something in a third-party dependency, we change the mocks and deploy, and everyone has the latest version. They can use it happily.

Takeaways. Now we have predictable and reliable mocks. With this Docker approach, this containerization, we can distribute them across the teams. The release process, when it runs in CI/CD, is super stable, because you’re removing the instability of depending on someone’s third-party sandbox or QA environment.

Testing Asynchronous Events – Timing is Everything

Lastly, testing asynchronous events. This is super easy, actually. It seems really difficult, but it’s not, at least in Java; I don’t know about other programming languages. Timing is everything. Remember that every time something happens in payments, I send an event to a topic, and I use Kafka here. This topic is there to be consumed by the audit service, which just collects information. If I run a test naively, there is a high chance, basically 100%, that the test will fail, because there is a delay in the topic. I’ve sent something to a topic and I try to consume it in audit; it’s not there yet, because the message sits in the Kafka topic and then the application consumes it. It’s a real problem. How can we fix it? Normally, we sometimes fix it the wrong way.

Developers are super smart, and sometimes we do sleeps or polling hacks for that. If we do a sleep, maybe the message will be there in 500 milliseconds, but I’m waiting 2 seconds; I’m waiting way more time than I should. All these systems based on event queues introduce uncertainty, because I need to somehow control it. Integration tests will fail because of this delay.

How can we make it good, fast, and reliable? Synchronize the test. There’s a library, a super-easy and nice one, called Awaitility, in Java. We can create dynamic conditions in a smart way. I can say to Awaitility: wait at most 5 seconds, but if it returns in 300 milliseconds, don’t wait for the rest, just proceed. Super smart. We can avoid fixed timeouts and make the tests more resilient, with more realistic behavior in this situation. I will first show you how it works. I have Kafka here, and I can show that I have a lot of messages that this audit service consumes. I have one here: a payment that has a timestamp, is created, and is pending. In another one, there is a fraud.
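The idea — wait at most some timeout, but return as soon as the condition holds — can be sketched in plain Java. The helper below is a made-up illustration of what a library like Awaitility does for you; with the real library you would write something along the lines of Awaitility.await().atMost(Duration.ofSeconds(5)).untilAsserted(...).

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

// Minimal sketch of the polling idea behind Awaitility: keep checking a
// condition and return as soon as it holds, instead of sleeping for a
// fixed worst-case time.
public class AwaitSketch {

    public static boolean awaitUntil(Duration atMost, Duration pollInterval,
                                     BooleanSupplier condition) {
        Instant deadline = Instant.now().plus(atMost);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true; // condition met early: stop waiting immediately
            }
            try {
                Thread.sleep(pollInterval.toMillis());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) {
        // Simulate an event that only becomes visible ~300 ms after sending.
        long visibleAt = System.currentTimeMillis() + 300;
        boolean consumed = awaitUntil(Duration.ofSeconds(5), Duration.ofMillis(100),
                () -> System.currentTimeMillis() >= visibleAt);
        System.out.println(consumed ? "event consumed" : "timed out");
    }
}
```

Running the main method prints "event consumed" after roughly 300-400 milliseconds, rather than after a fixed 5-second sleep.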

That other service will consume those topics and use them in a super-nice way. In terms of tests for events, what I have here I know should fail, because the event will take too long. Actually, I added a hack to increase the time a little: it takes 2 seconds for the message to be ready to be consumed, just to simulate the failure. Basically, the producer sends events and the consumer, my audit service, gets them. I should see that the event is created, that I’m consuming the event, that the payment ID is the same as the one created, and that the timestamp is the same. If I run this, it fails right away. You can see that we’re expecting something, but I get an empty array, because the message isn’t in the topic yet; it takes a little bit of time. How can I fix it? Really simple. I talked about Awaitility. There are a lot of ways to use it: Awaitility.await, atMost a duration of 5 seconds, untilAsserted, because this is the test.

Until I can assert this — but you can use a lot of different conditions, and even create a custom condition, a callable, that it should wait for: for example, a database, something more specific in a service, whatever; you can create a custom approach. I say untilAsserted, and then it’s a function, and you basically put this part inside. I will remove this. I send the event; it will take some time. Then, when I’m consuming it, I’m saying: Awaitility, go there and wait at most 5 seconds. It will be polling. The default configuration polls every half second, but you can increase or decrease it. As soon as I have the data, then I assert. When I run this one, you see that the test halts for a while; only then does it pass. It’s not instant. The test is there, running, waiting, and then it runs successfully. How do you know that it’s passing for the right reason? A normal thing to do while you’re testing: don’t trust all the information 100%. Change some data just to make the test fail, to confirm it fails for the right reason.

In terms of the timestamp, what I would do here: I'm getting it from the event, and then I say I want Instant.now(). This will fail because they're different; the timestamp will be different exactly because I'm waiting a little bit, and the event goes to the queue at least 2 seconds later. When I run this, the test waits, because we wait for up to 5 seconds. It keeps waiting, and waiting, and waiting, and then it fails, because the value I have is not the one expected. Here you can see that it tries for 5 seconds, polling, but the data never matches. In a super-simple way, with basically one line of code from this library, I can make an asynchronous system work perfectly in my integration tests. The system has this delay to consume the event, but the library keeps polling until the event is there, and then I can continue executing the test.

The takeaways for this one. Asynchronous flows require resilient assertions. Remember, I changed the code a little bit just to show that it can fail, depending on the data, of course. Awaitility still helps you wait, but of course not that long; 5 seconds is a lot, it's just for illustration. We can actually create realistic tests for asynchronous flows. Most teams or companies are skipping these kinds of tests because they haven't figured out a good way to write them. Now, at least in Java, we can.

Lessons Learned

To wrap it up, what have I learned? I showed you five different challenges we can have in that architecture, and we learned how to solve three of them: how to support multiple databases, how to mock the dependencies in a clean way, and how to test asynchronous events.

Questions and Answers

Participant 1: In the example that you showed about having a centralized mock server for a third party, in reality, that third party can still change their interface. How would you prevent that? Have you looked into Pact testing or contract testing as well?

Nogueira: Contract testing, mostly, yes. That's what we do as well: make sure we know when the interface changes and breaks the contract. We must have contract tests checking: is everything still fine with that third-party dependency, or even an internal dependency? If so, I know everything is fine. If it breaks the contract, my mocks should change, and I must change them as soon as possible. You can have different approaches; I don't know if the third party will send you a message saying, next month we'll do a breaking change, and then you change. Contract testing is one of the best ways to handle that.

Participant 2: Many times, the third-party services that we interact with are in fact internal services in our own company, just managed by other teams. When you create an internal API, do you also provide mocks or stubs so other teams can use them?

Nogueira: Yes. For example, in my application nowadays, these yellow ones are mocks, because I don't control them. Payment integration, for example, I control; I control everything there. An integration I don't control becomes a mock. Even for my own teams: I have payment integration, and I should provide it as a mock in this mock server so they can consume it as well. Of course, this payment integration has an OpenAPI spec for its connection, and we build all these mocks based on the OpenAPI specs.

 
