How to Use Apache Spark to Craft a Multi-Year Data Regression Testing and Simulations Framework

News Room | Published 26 November 2025

Transcript

Olimpiu Pop: Hello everyone, I’m Olimpiu Pop, an InfoQ editor, and I have in front of me Vivek, and he has a fascinating scenario from Stripe. Vivek, could you please provide a brief introduction?

Vivek Yadav: Yes, hello everyone. This is Vivek Yadav. I am an engineering manager at Stripe. I have been working at Stripe for the past eight years, playing various roles as an engineer and an engineering manager. For most of my time, I have focused on one area of Stripe, specifically how Stripe bills our users.

Olimpiu Pop: Thank you. Stripe is an awe-inspiring company that holds a significant share of the global payment system, so I’m very curious to see what you’ve actually put together. But before that, can you give us a short introduction on how you got into software engineering, software, or computer science, because each of us has an interesting story, so what’s yours?

Vivek Yadav: Absolutely. Yes, my story is a bit unusual: I didn’t touch a computer until I enrolled in a computer science course at university. I grew up in a village, and I didn’t have access to any computers or similar technology. I did well in school, and when it was time to go to university, I just asked around, “Hey, what should I get into?” And thankfully, people suggested computer science, and I have been enjoying this field ever since.

Planetary-scale multi-year data regression testing [01:45]

Olimpiu Pop: Okay. The reason we’re discussing this now is that you’ve used Apache Spark in an unconventional way, not the way people typically use it. What was the problem you were trying to solve?

Vivek Yadav: Yes, absolutely. This is a common scenario that every large-scale system encounters repeatedly, namely, migrations. We write a system, and after a few years the system no longer scales. The inputs and outputs are generally the same, but the internals need to be rewritten because the business logic keeps getting more complicated, and then we have to redo the abstractions, right? And when doing so, we want to ensure that the previous inputs and outputs are essentially unaffected. Whatever our system was able to handle before, it should continue to handle in the same way, without any impact on the users, except that it can now accommodate more.

In our use case, we were conducting a migration and needed to ensure it was safe and secure. To verify this, we tested our new code using multiple years of past data. Again, we are dealing with money, so we must be cautious and ensure that everything is correct. Now, let’s say we have a production-scale system, and imagine that you’re providing a real-time service designed to handle a specific type of load. If you want to run the past three or five years’ worth of data through that service, it will take a very long time to test it.

But we came up with an interesting strategy, which was essentially, “Hey, could we leverage Apache Spark for this testing?” And how do you even do that, right? Now, if you look at every service in a simplified format, you’ll see that every service is really doing a bit of I/O and a bit of business logic. You receive a request, apply some business logic, and send a response back. You may be performing some database writes in between, and sometimes you have a chain of these events as well, which involves the input, business processing, and output. The output is then taken by another step in the business processing, and so on.

You can squint at your services in a way that lets you see them as a series of business logic operations with some inputs and outputs. In a very simplified form, this is very similar to Spark. In a service, your input/output happens when a request is received, whether it is a gRPC request or an API request, and a response is sent out. In the case of Spark, your input and output happen on top of a Hive table, HDFS files, or S3, allowing you to read in bulk. Rather than getting one request at a time, you read in bulk, you write in bulk, and you do the business logic in between.

Olimpiu Pop: Let me stop you here to make sure that I understand correctly. Oversimplifying, what you actually did was regression testing, where the transactions, likely the billions of transactions that Stripe processes each year, are used as data to verify that everything is functioning correctly. And then you have a framework that allows you to test anything where you have input and output, essentially to ensure that the processing is consistent.

Vivek Yadav: Yes, absolutely. Doing this requires discipline in terms of how you write your service. Effectively, think of it as the core logic being a library. That library takes in some configuration at startup time and then serves requests. It exposes some methods, allowing you to create an API where you call process and receive a response. Now, the same library can also be wrapped in a Spark wrapper. If we have a service, we can deploy it in multiple environments. We have a production environment, where we can modify the input and output. We have a QA environment, where we change the input/output, the database, everything, right? You also have local test environments, where you override some configs and modify the input and output.

Similarly, Spark is just one more environment where the same code can execute, except that your input and output happen on top of Hive tables or S3 files, and your configuration is again just set at startup time. The key point is to organise your code as a library and then add the layer of I/O around it. One type of I/O makes it a service; a different kind of I/O makes it a Spark job. Once you have that setup ready, you can test years’ worth of data in a matter of hours, because you can run this massively in parallel with Spark. However, if you are dealing with a database or something similar, then it’s just too slow, and you cannot scale it that quickly.
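To make the pattern concrete, here is a minimal Scala sketch of the idea described here: the business logic lives in a library with a pure process function, and the same library can be wrapped once as a request/response service and once as a Spark job that reads and writes in bulk. All names, types, fee rates, and S3 paths below are hypothetical illustrations, not Stripe's actual code.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical request/response types and business-logic library.
case class Request(id: String, amountCents: Long, network: String)
case class Response(id: String, feeCents: Long)

object BillingLogic {
  // Configuration is loaded once at startup; here it is just a fee table (illustrative).
  type Config = Map[String, Double] // network -> fee rate

  // Pure function: no I/O, so it can run behind an RPC handler or inside Spark.
  def process(req: Request, config: Config): Response = {
    val rate = config.getOrElse(req.network, 0.0)
    Response(req.id, math.round(req.amountCents * rate))
  }
}

// Wrapper 1 (sketch): a service would call BillingLogic.process once per incoming request.
// def handle(req: Request): Response = BillingLogic.process(req, serviceConfig)

// Wrapper 2: the same logic wrapped as a Spark job that reads and writes in bulk.
object BillingSparkJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("billing-backtest").getOrCreate()
    import spark.implicits._

    val config: BillingLogic.Config = Map("visa" -> 0.015, "mastercard" -> 0.016)

    // Read years of archived requests in bulk instead of one RPC at a time.
    val requests: Dataset[Request] =
      spark.read.parquet("s3://example-bucket/archived-requests/").as[Request]

    val responses: Dataset[Response] =
      requests.map(req => BillingLogic.process(req, config))

    responses.write.mode("overwrite")
      .parquet("s3://example-bucket/backtest-responses/")
  }
}
```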

Olimpiu Pop: Okay, so more or less, what I’m hearing you say is that you have data split across, let’s say, buckets, as different containers of different types of data, and then, using the abilities that Spark has, you can parallelise everything and do it in a decent amount of time. So rather than something that would span a much longer period, you’re doing it in half a day or a day.

Apache Spark works in bulk, rather than one operation at a time [06:52]

Vivek Yadav: Yes, exactly. The key thing Spark has is that, rather than reading and then writing one request at a time, it essentially reads from S3 in bulk and writes back to S3 in bulk. And so, the other critical component here is that this testing scenario does not fit every application; however, it is particularly well-suited if your data is already stored in S3. In many production scenarios, a real-time service will be ongoing, but you will also maintain a copy of your data in storage for various analytical purposes.

In the case of Stripe, not everything is available in S3; however, in many scenarios, we need to perform additional analysis, and therefore, a significant amount of data is stored in S3. In our case, we already had a copy of our requests and responses in S3, so having Spark on top of that was a very natural choice. We already had other Spark jobs doing work on top of that data for different purposes.

Olimpiu Pop: Okay, so that was my next question, but you already answered it. Is this, oversimplified, an assert statement? You have the request and the response, so what you’re doing is pushing the same request through again and then comparing the results. And what would be the output of this system? Because years’ worth of data probably means on the order of billions of transactions. What’s the output? Usually, when you’re running a set of tests, you’ll get, okay, these kinds of scenarios are not the ones that you expect. But if you’re running on billions, you’ll have a very long scroll just to check everything, so what’s the output?

Vivek Yadav: Absolutely. There are many use cases here, but our first use case was simply a migration and a backtest of that migration. In the case of backtesting, the expectation is that we already had the previous production requests and responses, and we now replay the past requests against the new code, obtaining new responses. Then we just need to compare the new response data set with the old response data set and make sure that they’re the same. On top of that, you can write a custom diff job, or a generic one, which compares the two and then tells you, “Hey, where are the differences, which rows are differing?” And then you can analyse those rows, fix your code, rinse and repeat.
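A diff job of the kind described can be as simple as joining the old and new response datasets on a request identifier and keeping the rows that disagree. A minimal sketch, with hypothetical column names and paths:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object BacktestDiffJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("backtest-diff").getOrCreate()

    // Old production responses and responses produced by the new code (paths illustrative).
    val oldResponses = spark.read.parquet("s3://example-bucket/old-responses/")
      .select(col("id"), col("feeCents").as("oldFee"))
    val newResponses = spark.read.parquet("s3://example-bucket/backtest-responses/")
      .select(col("id"), col("feeCents").as("newFee"))

    // Rows where the migrated code disagrees with the previous production output,
    // including rows present on only one side of the join.
    val diffs = oldResponses.join(newResponses, Seq("id"), "full_outer")
      .where(col("oldFee") =!= col("newFee") || col("oldFee").isNull || col("newFee").isNull)

    println(s"Differing rows: ${diffs.count()}")
    diffs.write.mode("overwrite").parquet("s3://example-bucket/backtest-diffs/")
  }
}
```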

Olimpiu Pop: Looking at the lifecycle, what is the classical deployment lifecycle, and where does this occur? Is it, as you mentioned earlier, in a staging environment, or something on top of that? Because you are in a very regulated space, I would expect that you cannot just go and do continuous deployment and releases and so on, so I’d expect this to be a very sandboxed space.

Vivek Yadav: Absolutely, yes. At Stripe, or in any other regulated environment, all code runs in a highly controlled environment. None of this code is running on my local machine or anything like that. Stripe, and other companies of a similar nature, provide a development environment that complies with all regulations by default, allowing you to run code within it.

By the way, this is not impacting production at all, even though you are, let’s say, using some sort of production data. And when I say production data here, I want to clarify that none of this is customer-identifiable data either. All that work is already done before the data goes into cold storage; Stripe has excellent security, and every bit of compliance is applied before the data reaches a point where any human can really examine it. There’s no identifiable information here. It was not even transaction data; it was the network cost data, which is one level deeper than transactions.

Olimpiu Pop: To ensure clarity, there is no PII involved because the data is anonymised, so there’s no issue there. Additionally, given the critical nature of the system you’re assembling, everything is done in a separate environment to ensure that production is not affected. Everything is air-gapped, so everything is in a secure location.

Vivek Yadav: Yes, yes.

Olimpiu Pop: Okay. Looking at it, trying to visualise the architecture of everything, you have buckets, S3 if I understand correctly, so AWS is used on that side. Then you have data in and out at the network level, and then you have Apache Spark to be used as the glue between those two points, and then you analyse if there is something that’s not working as expected in terms of regression testing, you’re just looking at that, and then you zoom in.

Vivek Yadav: Yes.

Olimpiu Pop: Are these everyday engineers who do that, or do you have a specialised group that includes QAs or test engineers? Who are the consumers of this framework?

Vivek Yadav: The consumers of this framework are the engineers who own the different services. The engineers are responsible for testing their code. Especially in this scenario, it’s not like we are relying on a QA or a test engineer. Engineers are end-to-end responsible for their work, so if I’m accountable as an engineer for migrating something, then I’m basically using this framework to ensure that my migration is correct and to debug whatever code I’m writing. Many more scenarios are enabled along with this. We created this Spark-based testing solution for migration, but later we realised, “Hey, this can be used in a bunch of other scenarios as well”.

Another common scenario I’ll give you is, again, not widely deployed, and not every service is applicable here. However, in our case, whenever I make a code change to my service, this is not a migration; it is a small code change. We have a golden data set already, which is, let’s say, a few million requests and a few million responses. When I commit that code, I can basically tag it, “Hey, can you just make sure that this code is safe?” From GitHub, this triggers a Spark job that reads the requests and the expected responses, produces the new responses, makes sure everything is correct, and attaches the diffs back to the PR. That way even minor code changes are verified, and this happens in a matter of minutes.
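Wired into CI, that golden-data-set check reduces to a small gating job: replay the golden requests through the new code, compare against the golden responses, and fail the build if anything differs. A sketch with hypothetical paths and types; how the job is triggered from GitHub and how diffs are attached to the PR is infrastructure not shown here.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical golden-set types (same shape as the service's request/response).
case class GoldenRequest(id: String, amountCents: Long, network: String)
case class GoldenResponse(id: String, feeCents: Long)

object GoldenSetGate {
  // Placeholder for the real business-logic library under test.
  def process(req: GoldenRequest): GoldenResponse =
    GoldenResponse(req.id, math.round(req.amountCents * 0.015))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("golden-set-gate").getOrCreate()
    import spark.implicits._

    val requests = spark.read.parquet("s3://example-bucket/golden/requests/").as[GoldenRequest]
    val expected = spark.read.parquet("s3://example-bucket/golden/responses/").as[GoldenResponse]

    // Run the new code over the golden requests and count disagreements with the golden responses.
    val actual = requests.map(process)
    val mismatches = actual.toDF("id", "actualFee")
      .join(expected.toDF("id", "expectedFee"), Seq("id"))
      .where($"actualFee" =!= $"expectedFee")
      .count()

    // A non-zero exit fails the CI check on the pull request.
    if (mismatches > 0) {
      println(s"Golden set check failed: $mismatches differing responses")
      sys.exit(1)
    }
    println("Golden set check passed")
  }
}
```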

Multiple levels of quality assurance [13:25]

Olimpiu Pop: That changes the landscape, because if you’re talking about a final validation step, it’s okay to run it overnight or on some other schedule. But if you’re talking about a feedback loop for developers, then you have another perspective to take into account. That was my question about how long it takes, because usually engineers complain when it takes too long, and then they lose focus. A couple of minutes is quite okay. Then, regarding the golden data set: is it always the same data set, or is it something that keeps being refreshed?

Vivek Yadav: We started with the same data set, and obviously, with time, the shape of the data changes and so on, and we realised, “Hey, we’ve got to refresh that golden data set from time to time”. So now we have a script for that; when we run it, it regenerates the golden data set based on the latest data.
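The refresh step mentioned here could look something like the following sketch: sample a slice of recent archived requests, keep the matching production responses, and overwrite the golden dataset. The paths, the date filter, the sampling fraction, and the column names are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RefreshGoldenDataset {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("refresh-golden-dataset").getOrCreate()

    // Take a small random sample of recent production traffic (fraction and cutoff are illustrative).
    val recentRequests = spark.read.parquet("s3://example-bucket/archived-requests/")
      .where(col("createdDate") >= "2025-01-01")
      .sample(0.01, 42L)

    // Keep the matching production responses so the golden set stays request/response pairs.
    val responses = spark.read.parquet("s3://example-bucket/archived-responses/")
    val goldenResponses = responses.join(recentRequests.select("id"), Seq("id"))

    recentRequests.write.mode("overwrite").parquet("s3://example-bucket/golden/requests/")
    goldenResponses.write.mode("overwrite").parquet("s3://example-bucket/golden/responses/")
  }
}
```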

Olimpiu Pop: Okay, great. Thank you for sharing that. And now, as we touched on, developers are closer to the inner feedback loop. How do you see the developers? I’m now touching on how you measure engineering, because up to now we have discussed the engineering side. What’s your metric? How did the code base improve? Do you see fewer defects? Do you see happier developers? These are the two key points that should be considered in this landscape.

Vivek Yadav: Yes, there are two key points here. From the developer’s perspective, one key aspect is confidence: the reviewer can focus on evaluating code quality and worry less about bugs compared to before. So if you’re reviewing code and a bunch of tests have already passed, and that code has also been run against millions of transactions and, “Hey, this works out fine”, that gives you some confidence that the code is correct, and then you can proceed with a quality-focused review. So one part is just more confidence in the system.

The second thing, and that’s why we actually attach it to code reviews, is that we want to catch regressions before they reach production, and there’s only so much you can do with unit tests and integration tests. So we just wanted one more layer of safety to catch things before they go to, let’s say, the production system and are live. Detecting issues live still happens, but this was a massive step change in how many issues were caused by a code change.

Olimpiu Pop: Let’s see how the landscape looks from that point of view, because there are different ways of looking at testing. There was the pyramid of testing, then a couple of people talked about the reverse pyramid of testing, then some about the diamond of testing, and all other sorts of geometrical shapes and figures, gems and other stuff. But you touched on two other kinds.

One of them is unit testing, which, as the name suggests, is for very small pieces of code and shows you, as a developer, how the code runs in isolation. But then you have a lot of other kinds, like integration testing and regression testing and so on. So what does the landscape look like? Do you have a target, say 20% or 15% unit tests versus everything else, or is it more hunch-based and you just focus where needed?

Vivek Yadav: I would say this really changes from use case to use case, and it also changes with the maturity level of the system you’re working on. I’ll give you another example. I’m working on a greenfield project right now. The project is not yet in users’ hands. We are iterating fast on it, so right now we are not really worrying about what our unit test coverage is. Our key thing is, “Hey, let’s get a solid enough base ready that actually serves some core purpose”, and just before we go to the users, that’s when we start adding unit tests.

It also depends on whether the system is going to move money or whether it is just an internal tool or something else. If something is going to move money, there’s no amount of testing that is enough. You basically go all in and try to have the highest coverage you can. But if it is, let’s say, a less critical system, then you have to be careful about how many resources you devote to testing. I don’t have one ready-made answer. It really changes with the scenario in front of you.

Olimpiu Pop: Okay, so you’re adapting based on what you have and the outcomes and so on so forth.

Vivek Yadav: Absolutely. Anything that is touching users and money movement, and so on, that definitely gets the highest level of testing.

Reuse the existing data and tooling [17:59]

Olimpiu Pop: Normally, these tests at some point appear on a dashboard for some kind of management. How did the numbers improve? You said it runs at a massive scale, but what should you expect? You also said it’s not something you would use for every kind of system, so what kinds of systems should you use it for?

Vivek Yadav: Yes, I think of this as Spark-based testing of JVM services. So this will fit a scenario where your code is already written in a way that it can run on the JVM; Scala code or Java code will work. That’s one part. The second part is that this will work well, and I would say cheaply in terms of development cost, if you already have your data copied over to S3 as part of a normal ETL process. A lot of Stripe-shaped companies have those ETL processes, and a lot of data is available in S3, so you already have the data and can build on top of it. And finally, I’ll say it fits if your job is not doing too many state lookups. What I mean by that is that some services are very complicated in terms of, “Hey, give me this customer information, give me that user information, check some conditions”, and then do something. This will not work there.

But some services are very much contained within themselves. In our case, we have two such services, and I can give an example: a transaction is happening; what will be the network cost of this transaction? The network cost is essentially the cost Visa or MasterCard or any other network will charge for this transaction. We don’t need to call anyone. All of that is based on a set of configurations. The configurations are pretty heavy, there are a few thousand sets of rules that determine these costs, but they’re very contained. You don’t need to call out; you can just load them once, it’ll work out, and you have a bunch of logic written around them. Similarly: this is a transaction, what will Stripe bill the user for this transaction? That’s a very similar calculation, and it also doesn’t require too many lookups. So if your service is more contained and not too dependent on other services, that’s where this really, really fits.
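For a contained service like the network-cost one, the few thousand rules can be loaded once on the driver and broadcast to the executors, so each transaction is costed by pure in-memory logic with no per-record lookups. A sketch of that shape; the rule format, types, and paths are invented for illustration.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical shapes: a transaction and one network-cost rule.
case class Txn(id: String, network: String, amountCents: Long)
case class CostRule(network: String, fixedCents: Long, rate: Double)
case class CostedTxn(id: String, networkCostCents: Long)

object NetworkCostJob {
  // Contained logic: no external calls, just a lookup into the in-memory rule set.
  def costOf(txn: Txn, rules: Map[String, CostRule]): CostedTxn = {
    val rule = rules.getOrElse(txn.network, CostRule(txn.network, 0L, 0.0))
    CostedTxn(txn.id, rule.fixedCents + math.round(txn.amountCents * rule.rate))
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("network-cost").getOrCreate()
    import spark.implicits._

    // A few thousand rules fit comfortably in memory; load once, broadcast to executors.
    val rules: Map[String, CostRule] =
      spark.read.parquet("s3://example-bucket/network-cost-rules/")
        .as[CostRule].collect().map(r => r.network -> r).toMap
    val rulesB = spark.sparkContext.broadcast(rules)

    val txns = spark.read.parquet("s3://example-bucket/transactions/").as[Txn]
    val costed = txns.map(t => costOf(t, rulesB.value))

    costed.write.mode("overwrite").parquet("s3://example-bucket/network-costs/")
  }
}
```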

Olimpiu Pop: Okay. Let me draw a few conclusions. Making a parallel to programming languages: with imperative programming languages, you have a lot of state, more or less, right? A lot of things to keep track of. Nowadays, with functional programming, you’re just talking about inputs and outputs: you apply a function to an input and you get an output, and more or less that’s how a Spark-based system works, but that’s not always the case.

And now you’ve thrown in the ring two other points. It’s optimizing for your network cost where the network cost will be the providers and the things that are out of your reach in this case be it MasterCard or Visa or any other providers or the guys that are creating the interaction between the banks and the user itself. And then that’s a huge amount of configurations that allow you to do it, and that differs from one side to another and that’s the points that you would like to change, to adapt to. So then it’s a whole different perspective because probably you have to match inputs and outputs, but also configuration. So that’s something that is a little bit more exponential rather than linear in this case.

By using real-world data, you can use testing for simulations too [21:28]

Vivek Yadav: The configuration in our case is, yes, what Visa or MasterCard will charge for something. We are not making live calls to them. They publish their rules, “this is how we are going to bill you”, and you can encode those rules into your system. We have operations teams that translate the configurations coming from Visa and MasterCard into, let’s say, internal rules that our systems can execute. There are a few thousand lines of rules, essentially, so they’re very contained; once translated, they can fit in the memory of one process. So that does not really increase the complexity of the system.

But that actually brings me to another interesting use case of the same thing, which is that from time to time these configurations change. So one use case of this Spark-based testing is regression testing, but another use case of the same machinery is what-if testing. Right now you are playing in this world with these rules; what if the rules change from this set to that set, what will be the change in the output? As an example, let’s say your telephone company charges you $20 a month, and that has some impact on your yearly budget. What if your telephone company started charging you $40 a month? What would be the impact on your budget? That is what-if testing: for a configuration change, what would be the impact on the output?

Olimpiu Pop: So one facet of the added value you’re bringing with the system is regression testing that is on point, with data that you already had and without any kind of headaches. The other thing we’re now mentioning is more or less about outliers, things you’re just speculating about, or maybe a better term would be chaos engineering, where you throw different things at the system and see how it behaves. And then that gets more into the space where Spark is actually used for analytics, so you’re making projections. So it’s not only used for engineering; the business side can also ask what-if questions and get better answers. Okay, that’s quite interesting.

Vivek Yadav: Yes, just to make that more concrete, one example is a scenario where the rules of these networks are going to change starting in October; the current rules are this and the new rules are that, and we need to give a projection to our finance team, or to our users as well, of how their underlying cost is going to change. Well, we can run the new rules and the old rules on the same data, figure out the difference, and based on that give a pretty accurate estimate of how the change is going to behave, and it’s one of the features our users have.
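A what-if run like the October rule change can reuse exactly the same machinery: cost the same historical transactions under the current rules and under the proposed rules, then aggregate the delta for the projection. A minimal sketch, with made-up rule tables, rates, paths, and column names standing in for the real configuration.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, sum}

object WhatIfRuleChange {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("what-if-rule-change").getOrCreate()
    import spark.implicits._

    // Hypothetical flat rule tables: one row per network with its fee rate.
    val oldRules = Seq(("visa", 0.015), ("mastercard", 0.016)).toDF("network", "rate")
    val newRules = Seq(("visa", 0.017), ("mastercard", 0.016)).toDF("network", "rate")

    val txns = spark.read.parquet("s3://example-bucket/transactions/")
      .select(col("id"), col("network"), col("amountCents"))

    // Cost the same historical transactions under a given rule set.
    def totalCostCents(rules: DataFrame): Double =
      txns.join(rules, Seq("network"))
        .agg(sum(col("amountCents") * col("rate")).as("totalCostCents"))
        .first().getDouble(0)

    val oldTotal = totalCostCents(oldRules)
    val newTotal = totalCostCents(newRules)
    println(f"Projected cost change: ${newTotal - oldTotal}%.0f cents")
  }
}
```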

Olimpiu Pop: Okay. Also, looking at companies in terms of business impact, it is quite useful because it provides more in-depth views of different things. You mentioned S3, and that this kind of implementation relies heavily on having data in S3. You mentioned what-if testing, so let’s do a what-if in this space: what if we have a different type of data store, a container, a database, whatever? You mentioned Hive as well. How should I approach it as a developer if I want to use something else?

Vivek Yadav: Yes, I would say that as long as Spark can read from that data set in an efficient manner, it works. When I say efficient manner, I mean that Spark is good at reading in parallel from disk, right? So if you ask Spark to read from, let’s say, a JDBC connection or something from a database, it will read, but it won’t be efficient. So as long as you’re able to leverage Spark’s efficiencies, it’ll work out. The underlying data set could be any HDFS-style solution.
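The difference described here shows up directly in how you point Spark at the data. Reading a columnar format from object storage parallelises across files automatically, while a JDBC read needs explicit partitioning hints and is still bounded by what the database can serve. A sketch for comparison; the connection URL, table name, and bounds are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object ReadSourcesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("read-sources").getOrCreate()

    // Parallel-friendly: Spark splits the Parquet files in S3/HDFS across executors.
    val fromS3 = spark.read.parquet("s3://example-bucket/archived-requests/")

    // Possible, but bottlenecked by the database: Spark must open JDBC connections
    // and the read has to be partitioned manually via a numeric column.
    val fromDb = spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://example-host:5432/example_db")
      .option("dbtable", "requests")
      .option("partitionColumn", "id")
      .option("lowerBound", "0")
      .option("upperBound", "100000000")
      .option("numPartitions", "64")
      .load()

    println(s"S3 partitions: ${fromS3.rdd.getNumPartitions}, " +
      s"JDBC partitions: ${fromDb.rdd.getNumPartitions}")
  }
}
```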

When designing testing systems, bear in mind I/O efficiency [25:11]

Olimpiu Pop: Okay. So what I hear you saying is that as long as you have an efficient way of reading, probably through some kind of adapter Spark has, it’ll be efficient. To stay with your example, JDBC would probably be limited by the number of connections it can open and all the other constraints of databases, so it would be a bottleneck. If in your case it were only a limited number of data sets, it might work, but then it defeats the purpose, because it would be too big a hammer for that kind of problem. Thank you. What else should we know about your system that I didn’t ask?

Vivek Yadav: I think the other interesting point here is that this is very cheap to run. Compared to any equivalent testing of this shape, let’s say your problem statement is that you need to backtest three years’ worth of data in one day, it would be very costly to do that in anything other than Spark or a similar system.

Olimpiu Pop: Okay. But just to get more concrete, first of all, what would be the alternative? Because I would suppose that when you made this decision, you didn’t just say, “Okay, we already have it, let’s do it”. What were the alternatives?

Vivek Yadav: Absolutely. The other alternative we initially considered was essentially having a database, a Postgres or Mongo type of database, where you dump all the data and massively scale that database for however many days you need to do the testing. So you spin up the database, load all the data to begin with, scale it up, and then run the tests on top of that. You could write something on your service that runs this test, but that requires a huge amount of setup and teardown, because you don’t want to maintain that database long term just for testing purposes. The implementation cost was too high, and the infrastructure cost was too high. In the case of Spark, we had basically near-zero implementation cost because we already had the Spark infra and so on, and this was nothing for Spark to deal with; it’s a very trivial amount of load for Spark.

Olimpiu Pop: Okay, so what you mentioned is that the trend in the industry is to use ephemeral testing environments: you have the development environment, and after you’re done with the inner loop, for the outer loop you just use ephemeral environments and that’s it. What you’re saying is that you already had this infrastructure, so it was pointless to implement something new, and Spark is built for such heavyweight analytics workloads that it was quite easy to adapt it to this. And then, talking about data, just give us a ballpark estimate: what are we talking about, and in what units? Billions of rows, or what unit should we think in?

Vivek Yadav: The scale of our system was somewhere around 400-plus billion rows and growing. In terms of data size, it depends on the projection, but it would range anywhere between 2 TB and 5 TB.

Olimpiu Pop: Okay. That’s a lot of data, so probably this would be something that you’ll just want to do, I don’t know. I’m just thinking now also in terms of weather projections and all these kind of things where you have a lot of data and that would be in terms of scale because I think not that many people are just looking at those kind of things and they need to be that accurate in terms of predictions. Okay, that’s interesting. You as a company are very much in terms of learning. I know one of the things that struck me in terms of Stripe is the writing culture where you’re just looking into having a lot of stuff being written and then iteratively improving and looking at business impact. How can the current system be improved? Don’t tell me it’s a perfect system because I’ll not believe you.

Vivek Yadav: Absolutely not, no. There’s a lot of scope for improvement all around. In our current system, honestly, all the improvements are on the business side of the solution. When I say the business side of the solution: the core system I was talking about is how we bill our users, or let’s say how we estimate our underlying cost, the network cost. There are a lot of different shapes of costs that keep coming up, and the system keeps getting stretched. Currently, the system is designed from a very functional perspective: you have one input coming in, you apply some logic, you have output going out, right?

Two different requests are treated as completely independent of each other. They’re not related, but sometimes new use cases come up where the cost of a new incoming request actually depends on some previous requests you have seen, so you need to care about state, and we don’t do that very well today. The state management basically happens slightly outside the system, in a way. If we could somehow bring that into the fold, that would be easier on our users.

Olimpiu Pop: Okay, so by users you mean the business part of the organisation that actually looks at projections and how to use them?

Vivek Yadav: Yes.

Olimpiu Pop: Okay, great. Thank you. Normally I would be pushing engineers to become more product-minded; now you’re talking about the product, well, the business side becoming more engineering-minded. Thank you for your time. Good luck with your work, and thank you for sharing your thoughts.

Vivek Yadav: Thank you, Olimpiu.
