Stream and Batch Processing Convergence in Apache Flink

News Room | Published 29 July 2025

Transcript

Qin: I’m going to talk a little bit more about stream and batch processing convergence in Apache Flink. My name is Becket. I’m currently a software engineer at LinkedIn. I used to work on mainframes at IBM, then worked on Kafka at LinkedIn, and then moved to Flink. I’m mostly focusing on data infra development. Currently, I’m a PMC member of Apache Kafka and Apache Flink, and also a mentor of Apache Paimon and Apache Celeborn.

Outline

First of all, I’m going to introduce a little bit about what stream and batch unification is, and what its motivation and use cases are. Then, we’re going to look at stream and batch unification in Flink, so how exactly Flink does it. We’re going to dive deep into the computing model and the execution model of stream and batch unification. Finally, we’re going to take a look at some of the future work that we’re trying to accomplish in Apache Flink.

Motivation and Use Cases of Stream and Batch Unification

Let’s take a look at the motivation of stream and batch unification first. To explain that, imagine that you have a bunch of data in your organization or company, and, apparently, you want to build some data applications around that data. You want to get some insights out of it. Your application team will probably ask for a few things, like data freshness, throughput, cost, and also, they want to be able to express their business logic, of course. In addition to that, they will probably ask for scalability, stability, operability, and so on, as well. Now the question is, how are you going to tackle all those requirements? The answer is data infrastructure. That’s the one in the middle. Data infrastructure actually consists of multiple things. Apparently, you need a computing engine.

At this point, we have many computing engines out there. You have MapReduce, which is old school, and you have Hive, Samza, Flink, Spark, Pinot, Trino, all those different engines to tackle different problems. Also, you have storage; your data infra must have storage. There, we’re talking about all the message queues, KV stores, and file systems. To name some projects, we have Ambry, HBase, HDFS, Kafka, Venice, Redis, all those projects out there. Then, you also need some control plane to do orchestration, resource management, and so on. We have YARN. We have Kubernetes. We have Airflow. We have Azkaban. There are some new projects like Flight going on in this domain. Also, you need data modeling.

In order to model your data, you need some format for it. What we see are formats like Avro, Parquet, ORC, and all the metadata management tooling out there for you. Eventually, you also need some supporting tooling. For example, you will need metrics, you will need logging, you will need alerting, testing, releasing, all that supporting tooling to help you build your data infrastructure projects and applications. Everything in the middle makes up this whole data infra domain.

Now let’s take a look at an example. If I’m provided with all those things that I just mentioned, as an application developer, how would I develop my application? One typical example use case we see is this: we have an online application generating user activity events, and I’m pumping those events into Kafka. Then, those events are processed by my streaming feature generation. I’m trying to generate some features for my machine learning model. What I can do is have a Flink job reading from Kafka, the messages are in Avro format, and I’m using Kubernetes to orchestrate this job. The resulting features are pumped into a Feature Store. Let’s assume the Feature Store is implemented on top of Venice, which is an open-source key-value store. This is the streaming feature generation pipeline.

Meanwhile, you probably will ETL all the user activity to your HDFS as well. You will have your batch feature generation jobs running here, which are also pumping features into the Feature Store. We’re talking about the batch pipeline here. The projects involved probably include HDFS, Spark, and Parquet, which is a columnar format. You also have Airflow here, and YARN, if it’s old school. As you can see, the interesting thing is that the streaming pipeline and the batch pipeline actually have quite different tech stacks. We’re basically talking about two completely different ecosystems when it comes to the same thing, which is feature generation. I think part of the reason is that, in the past, the batch ecosystem was built first and streaming was introduced much later. That’s why you see this divergence between the ecosystems of batch processing and stream processing.

What we want to achieve ideally is the following: we don’t want to distinguish between stream and batch, because for the exact same logic I would otherwise need to develop twice, once for streaming and once for batch. What I want is one unified storage that can provide me with the key-value access pattern, the queue access pattern, which is like the messaging system, and also the range scan pattern.

In terms of the processing part, I don’t want to have two separate pipelines. I just want to have one tech stack that can handle both streaming and batch feature generation. Here we can imagine that I use Flink as the convergence engine, Kubernetes across the board, and Airflow for workflow orchestration, and hopefully I can unify all the formats with just one format. I don’t need to do all those type system conversions along the way. This would be the ideal world. We will have unified storage, a unified computing engine, a unified orchestration control plane, a unified data model, and unified tooling as well.

What’s the benefit of having that? The benefit is that we’re going to dramatically lower our overall cost for data infrastructure. When we talk about the cost of data infrastructure, usually we’re talking about these parts. If I’m adopting a new technology, there will be a migration cost. Whatever technology I adopt, there will probably be a learning cost, meaning that if I have a new hire, they probably need to learn the tech stack there, so that’s a learning cost. When I have my stack running, I will probably need to maintain it just to make sure it’s running fine all the time. That’s maintenance cost. Once everybody learns the stack and my infrastructure runs fine, there will be a long-running development cost: I’m using this stack to develop my applications, and every time I develop something, there is a cost.

If the tech stack is super-efficient and has very good engineering productivity, my development cost will be low. If the stack is bad and it really takes a lot of time for me to develop a new application, then the development cost will be high. Finally, there’s execution cost, meaning that to run this stack I need to put some hardware there. That’s usually where benchmarks come in; performance benchmarks usually only measure the execution cost. Those are the five costs associated with this data infra domain. With stream and batch unification, the target is to lower the development cost, maintenance cost, and learning cost, so that we do not need to maintain two ecosystems, and our end users do not need to learn different projects. They just need to learn one tech stack, and they can do both stream and batch processing.

With that, we can conclude what stream and batch unification means. It basically means a new design pattern in which people do not need to distinguish between streaming and batch anymore. The goal is to have just one single project in each of those five categories, and then we do not need to distinguish between stream and batch. Today, we’re going to focus on just one part of it: the computing engine. We’re going to talk about Flink. We’re going to dive deep into Flink as a computing engine itself, and explain how exactly Flink achieved the goal of stream and batch unification.

Goals of Computing Convergence

Before we dive deep, we need to first understand, from the computing engine perspective, what exactly computing convergence, or stream and batch unification, entails. There are actually three requirements. The first thing is same code. If you write the exact same code, it should run as both a streaming and a batch job. The second thing is same result. If you write the same code and feed it with the same data, then both the streaming and the batch job should return the exact same result. Finally, it’s the same experience, meaning that whether you’re running a streaming job or a batch job, your experience should be pretty much the same in terms of logging, metrics, configuration, tuning, debugging, all the tooling, and so on. You need all three of those to declare computing convergence successful.
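
To make the "same code" requirement concrete, here is a minimal sketch using Flink's DataStream API: the pipeline definition stays the same, and only the runtime mode changes (the source elements and job name are placeholders).

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SameCodeBothModes {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The pipeline below is identical in both cases; only the runtime mode differs.
        // STREAMING keeps all tasks running with pipelined shuffles and checkpointing,
        // BATCH schedules stage by stage with blocking shuffles and no checkpoints.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH); // or RuntimeExecutionMode.STREAMING

        env.fromElements(1, 2, 3, 4, 5)   // placeholder bounded source
           .map(x -> x * 2)               // a stateless transformation
           .print();

        env.execute("same-code-both-modes");
    }
}
```

The same switch can also be made at submission time through the execution.runtime-mode configuration rather than hard-coded in the program, which is what lets one program serve as both the streaming and the batch job.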

Stream and Batch Processing Unification in Flink

With that defined, let’s take a look at how exactly Flink achieves this goal. We have been hearing this statement quite a bit. People always say batch is a special case of streaming. Conceptually, that is true. However, if you look deeper into it, that’s not completely the case. When we’re talking about streaming, we’re talking about dealing with infinite, out-of-order, dynamic data. Stream processing is really trying to solve just two problems: one is infinite data, and the second is out-of-orderness. In order to address these challenges, we invented a bunch of mechanisms, including watermarks, event time, retraction, checkpoints, and state. All those things were invented to tackle those two challenges, infinite data and out-of-orderness.

In the batch world, the characteristics of the input dataset are completely different. We have finite, well-organized, and static data, and the goal is actually to proactively plan for execution time and resources. On the batch side, the focus would be: how can we divide all the computation into execution stages? How can we do speculative execution, adaptive execution, data skipping, all those techniques? If we want to cover both worlds with just one engine, we need to take care of query optimization, scheduling, shuffle, state backends, connectors, all those components, and make sure they can actually work well in both streaming and batch. While conceptually you can say batch is a special case of streaming, when you look into the actual requirements of each component, they’re actually different. The focus is different.

In order to understand what the differences are, and what exactly the problems or challenges are that we need to tackle, we can think of a computing engine as two parts, at a high level. The first part is called the computing model. The computing model is essentially the logical computing semantics used to perform computation. It’s like the primitives this computing engine provides to its end users to describe how they want to do the computation. Usually, the computing model has an impact on the computing result. For example, event time, timers, retraction, and watermarks, all those semantics or primitives provided in stream processing, will actually have an impact on the end result you’re receiving. There is a second part, which is the execution model. The execution model is usually considered the physical implementation of the engine, there to fulfill the computing model or computing semantics.

Typically, the computing results are not impacted by the execution model. Examples of things that are part of the execution model are shuffle, scheduling, and resource management. No matter how you do shuffle, and no matter how you do resource management or scheduling, your end result is unlikely to be impacted. That’s the physical part. If we look at the computing model and the execution model separately, then we can understand why stream and batch unification makes sense. The Flink approach is the following. For the computing model, we’re basically adopting the streaming computing model for both streaming and batch.

That basically means, under the hood, Flink is using the exact same streaming operators to perform both streaming and batch computation. Because of this unified model, there’s actually little overhead on the batch side, even though we’re using streaming operators to do the batch job. The challenge here is, how are we going to handle streaming semantics in batch processing elegantly? We’re going to dive deep into all of those with examples. Usually, we can consider stateless processing to be identical between streaming and batch. For example, you’re just doing a stateless transformation.

In that case, there’s no difference between the streaming side and the batch side. Stateful processing is the tricky part. We do recognize that for batch processing and stream processing, the execution model has to be different. It’s not that streaming cannot produce the same result as batch. It’s just that you need a separate execution model for batch so it can run more efficiently. It’s not about correctness, it’s about efficiency. In Flink, we do have separate execution models for stream and batch.

Computing Model

Let’s take a look at the computing model. We have this table here, and we listed the differences between stream processing and batch processing in terms of the computing model. In stream processing, there are concepts called event time and watermark. There are concepts called timer, retraction, and state backend. All those things don’t exist on the batch processing side. Imagine you’re writing a program and you’re leveraging all those primitives. Now you want to run the program in batch mode. How would batch processing elegantly handle all those semantics? That’s going to be one of the challenges.

Another thing is that, on the stream processing side, we cannot support global sort because it’s too expensive, as you can imagine, because you have infinite data. If you want to do a global sort, that’s going to be extremely expensive, and you will never be able to produce a correct result, because the data never ends. If you want to do a global sort, there will never be a correct result. In batch processing, however, this is a very normal operation. There’s also a gap there. This is about the computing model difference.

In order to tackle this problem, we’ll talk about a few things: event time, watermark, timer, retraction, and state backend. Let me explain how Flink actually handles those semantics in batch. First, let’s take a look at how those semantics are handled in streaming, just to explain what those concepts mean. Let’s assume that we’re doing a count aggregation on a tumbling window of size 4. The tumbling window’s starting time is timestamp 0. I think Adi actually provided a good example of the tumbling window in her talk. It’s basically a non-overlapping window, of size 4 here. The outputs of the windows are triggered by timers. The way it works is that every time a window closes, a timer is triggered to emit the result for that closed window. That’s how timers work. The watermark is there to delay result emission to handle late arrivals. The watermark was invented to handle out-of-orderness.

If you remember, I mentioned that for stream processing, one key problem we’re trying to solve is out-of-orderness. The idea is actually very simple. Because events can arrive out of order, in order for me to emit the right result for a particular window, one simple strategy is to just wait a bit longer. Let’s say I want to emit the result for the window ending at 12 p.m. today.

By the wall clock time of 12 p.m., because of out-of-orderness, there might be some events whose timestamp is before 12 p.m. that haven’t arrived yet. In order for me to emit the right result for the window ending at 12 p.m., I need to wait a little bit longer so that the late-arriving events can come into the window before I emit the result. That’s exactly what the watermark means. It basically tries to delay the emission of the result a little bit. A watermark equal to X basically means all the events whose timestamp is before X have arrived. Assuming that the watermark equals MAX_TIMESTAMP_SEEN minus 3, that basically means that if, across all the events I’ve seen, the maximum timestamp is X, then I consider the watermark to be X minus 3.
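
As a rough sketch of how this setup looks in Flink's DataStream API (timestamps are treated as milliseconds here, and the bounded-out-of-orderness strategy approximates the "max timestamp seen minus 3" watermark), the count-per-tumbling-window example, fed with the same six timestamps used in the walkthrough that follows, can be written like this:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TumblingWindowCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Same event timestamps as in the walkthrough: 4, 5, 2, 3, 7, 9, under one key "k".
        env.fromElements(
                Tuple2.of("k", 4L), Tuple2.of("k", 5L), Tuple2.of("k", 2L),
                Tuple2.of("k", 3L), Tuple2.of("k", 7L), Tuple2.of("k", 9L))
           // Watermark trails the largest timestamp seen, mirroring
           // "watermark = MAX_TIMESTAMP_SEEN - 3" from the walkthrough.
           .assignTimestampsAndWatermarks(
                WatermarkStrategy
                        .<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofMillis(3))
                        .withTimestampAssigner((event, ts) -> event.f1))
           .keyBy(e -> e.f0)
           // Non-overlapping windows of size 4; a window's result is emitted when the
           // watermark passes its end (the timer firing).
           .window(TumblingEventTimeWindows.of(Time.milliseconds(4)))
           .aggregate(new AggregateFunction<Tuple2<String, Long>, Long, Long>() {
               @Override public Long createAccumulator() { return 0L; }
               @Override public Long add(Tuple2<String, Long> value, Long acc) { return acc + 1; }
               @Override public Long getResult(Long acc) { return acc; }
               @Override public Long merge(Long a, Long b) { return a + b; }
           })
           .print();

        env.execute("tumbling-window-count");
    }
}
```

Run in streaming mode, this emits each window's count as the watermark passes the window end; the walkthrough below traces that behavior by hand.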

With that covered, let’s take a look at this example. Let’s say I have a message coming in whose timestamp is 4. This will fall into the tumbling window starting at 4 and ending at 8. Because I’m doing a count aggregation, I’ll make the key the window and the value the count. Here I will say there’s one event in the window starting at 4 and ending at 8. In this case, my watermark is equal to 1. The reason is that the watermark is the max timestamp that I’ve seen minus 3; currently the max timestamp I’ve seen is 4, so minus 3 gives 1.

Now the second event arrives, and it’s 5. Again, it falls into the window of 4 to 8, so we’re going to increase the count by 1. Now in window 4 to 8, we have the value 2. The watermark also increases to 2. The third event comes, and in this case, the out-of-orderness happens. We see a timestamp of 2, which is actually out of order. In this case, inside the stream processing state, we will put window 0 to 4 with count equal to 1, because 2 falls into the window 0 to 4. We’re not going to bump up the watermark, because, so far, the maximum timestamp seen hasn’t increased yet, so the watermark stays at 2.

Moving on, we see the fourth event, which is of timestamp 3. Similar to the last event, we will have the value for window 0 to 4 bumped to 2, and the watermark still stays the same, which is 2. Now the fifth event comes, and the timestamp is 7. In this case, what happens is that, first of all, the watermark bumps to 4, because we said the watermark is the max timestamp minus 3, and currently the max timestamp is 7. Because the watermark becomes 4, the assumption is that all the events before timestamp 4 have arrived, meaning that I can close the window starting at 0 and ending at 4. In this case, the output will be emitted for window 0 to 4, and the value will be 2. I can then clean up the state. I don’t need to keep the state of window 0 to 4 anymore because the result has been emitted. The state of window 4 to 8 is still there because the window is still open. It hasn’t been closed yet. Now the next event comes, which is timestamp 9.

In this case, again, the watermark is bumped to 6. Because the watermark is bumped to 6, we take a look at the windows, and we don’t find any window that can be closed. The state stays the same. We have the state 4 to 8 with count equal to 3; the reason window 4 to 8 has a count of 3 is that we saw 7 earlier. When timestamp 9 comes, we will have a new window, because 9 falls into the window from 8 to 12. The window from 8 to 12 will get the value 1. This is the state after we process the event with timestamp 9. Assume that’s the last event we process. Assuming this is the finite streaming mode, eventually we will just emit a Long.MAX_VALUE watermark at the end of the stream. The watermark in that case will be Long.MAX_VALUE, so it will basically close all the windows that are in the state. Then the output will be emitted: window 4 to 8 has a value of 3, and window 8 to 12 has a value of 1. Eventually, we will have the output of all three windows.
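
To double-check the trace above, here is a small plain-Java sketch (no Flink APIs) that replays the same six events with the "watermark = max timestamp seen minus 3" rule and window size 4, closing a window once the watermark reaches its end:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WatermarkTrace {
    public static void main(String[] args) {
        long[] events = {4, 5, 2, 3, 7, 9};           // same arrival order as above
        long windowSize = 4;
        long maxSeen = Long.MIN_VALUE;
        TreeMap<Long, Long> state = new TreeMap<>();  // window start -> count
        List<String> output = new ArrayList<>();

        for (long ts : events) {
            long windowStart = (ts / windowSize) * windowSize;
            state.merge(windowStart, 1L, Long::sum);
            maxSeen = Math.max(maxSeen, ts);
            long watermark = maxSeen - 3;             // watermark = MAX_TIMESTAMP_SEEN - 3
            emitClosedWindows(state, watermark, windowSize, output);
            System.out.printf("event=%d watermark=%d state=%s%n", ts, watermark, state);
        }
        // End of the finite stream: a Long.MAX_VALUE watermark closes every remaining window.
        emitClosedWindows(state, Long.MAX_VALUE, windowSize, output);
        System.out.println("output: " + output);      // [0,4)=2, [4,8)=3, [8,12)=1
    }

    // A window [start, start + size) can be emitted once the watermark reaches its end.
    static void emitClosedWindows(TreeMap<Long, Long> state, long watermark,
                                  long windowSize, List<String> output) {
        while (!state.isEmpty() && state.firstKey() + windowSize <= watermark) {
            Map.Entry<Long, Long> closed = state.pollFirstEntry();
            output.add("[" + closed.getKey() + "," + (closed.getKey() + windowSize) + ")="
                    + closed.getValue());
        }
    }
}
```

The printed intermediate states match the walkthrough: the 0 to 4 window closes with count 2 when the watermark reaches 4, and the remaining two windows close only on the final Long.MAX_VALUE watermark.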

This is how stream processing works. We explained event time, which is the timestamp here and can be out of order. We explained the watermark, which is the max timestamp minus some value. We also explained the state, so we’re keeping state; we explained how the state is built and how the state is cleaned up in stream processing. Now imagine I have the exact same program, which has timers, has event time, has watermarks, has all those concepts, but now instead of running it in streaming, I want to run it in batch. How can I do it? It’s exactly the same program and exactly the same semantics, and we want to get the exact same result in batch. What happens is the following.

We will have 4 coming first, and we will have the state, which is window 4 to 8, count 1. 5 comes, window 4 to 8, count 2. 2 comes, and we will say window 0 to 4, count 1, and 4 to 8 is still count 2. 3 comes, and we keep building this state. As you can see, we are not emitting any watermark. The reason is that we’re processing a finite dataset. We don’t need to emit any intermediate result. We only need to emit the final result when all the data has been processed. The way it works is that we will emit one single Long.MAX_VALUE watermark at the very end. Then, we will have the output, which is the state at the time the Long.MAX_VALUE watermark comes.

However, this looks correct. At least logically speaking, it’s going to emit the exact same result as stream processing. The problem is that you still have RocksDB here. In batch processing, you probably don’t want RocksDB, because RocksDB is expensive. You’re building up a pretty big state, and it’s not going to be very memory or disk I/O friendly when you have such a big state there. How can we improve it? Can we replace it with a HashMap? We can just get rid of RocksDB and replace it with a HashMap. The natural question would be, what if the state is too large to fit into memory if you use a HashMap? You will definitely get an OOM. How can we do it? The way we do it is basically by leveraging the fact that your data is finite. What we can do is sort the input before we feed it into the operator.
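
For reference, the RocksDB-versus-hash-map choice above is an explicit state backend setting on a streaming job; a minimal sketch (exact package names vary a bit across Flink versions):

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendChoice {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keep keyed state on the JVM heap: fast access, but bounded by available memory.
        env.setStateBackend(new HashMapStateBackend());

        // Or keep keyed state in embedded RocksDB: state can spill to local disk and grow far
        // beyond memory, at the cost of serialization and disk I/O on every state access.
        // env.setStateBackend(new EmbeddedRocksDBStateBackend());
    }
}
```

In batch execution mode this choice largely stops mattering for the example above: as described next, Flink sorts the input and keeps only the current key's state, rather than relying on a large keyed state backend.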

If we sort the input, let’s take a look at what happens. We will see 2 come into the picture first. Remember that before, we had 2 coming in the middle because it was out of order. Now, because we’re sorting the input by event time, we actually have this 2 come into the picture first. Then, we can say the state becomes 0 to 4 with a count of 1. 3 comes into the picture, and we say 0 to 4 with a count of 2. Then, timestamp 4 comes. Timestamp 4 actually falls into a different window. In this case, because we know all the data is sorted, we know that once the key changes, or the window changes, that basically means all the data in window 0 to 4 has arrived. What we can do is just clean up the state of window 0 to 4 and emit it as output.

Then, in that case, we just need to keep the state of window 4 to 8. In order to emit 0 to 4 with the count 2 when event 4 comes, we actually need to emit a watermark of Long.MAX_VALUE. Remember, we said the watermark is the thing that triggers the emission in stream processing. If you want to apply the same model in batch processing, you should use the same way to trigger the data emission. That’s how we do it. In Flink, once we see there is a key switch in batch processing, we will immediately emit a Long.MAX_VALUE watermark. That watermark will trigger the result of the current key to be output. If we move forward, we will see 5 comes, and it also goes into the window of 4 to 8. Then 7 comes, and it also goes into the window of 4 to 8. Then 9 comes, and 9 is going to be in a different window.

The window key changes again. In this case, what happens is that the 4 to 8 value is going to be emitted, and the 8 to 12 window with count 1 is going to be kept in the state. We again use the Long.MAX_VALUE watermark to emit the 4 to 8 window result. Eventually, when everything is processed, we’re going to emit Long.MAX_VALUE once more, just to trigger the emission of the last key.

As you can see, the way it works is that we only need to keep the current key’s state in memory. We don’t need to keep everything in memory in that case. We’re using the exact same mechanism that triggers the output on the streaming side to trigger the emission on the batch side. This is exactly how most batch engines work; they just do a sort-merge aggregation or a sort-merge join. That’s about the computing model. We explained the computing model differences between stream and batch processing, because there are different primitives and semantics, and how we handle those streaming primitives and semantics elegantly in batch jobs.
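
A plain-Java sketch of that sort-then-emit-on-key-change pattern (window assignment and counting only, no Flink APIs) looks like this:

```java
import java.util.Arrays;

public class SortedBatchWindowCount {
    public static void main(String[] args) {
        long[] timestamps = {4, 5, 2, 3, 7, 9};
        long windowSize = 4;

        // Batch can exploit finiteness: sort the input by event time up front.
        long[] sorted = timestamps.clone();
        Arrays.sort(sorted);

        long currentWindow = -1;
        long count = 0;
        for (long ts : sorted) {
            long window = (ts / windowSize) * windowSize;
            if (window != currentWindow) {
                // Key (window) switch: all data for the previous window has arrived, so emit
                // its result and drop its state. In Flink this is the point where a
                // Long.MAX_VALUE watermark is sent to trigger the emission.
                if (currentWindow >= 0) {
                    System.out.printf("[%d,%d) = %d%n",
                            currentWindow, currentWindow + windowSize, count);
                }
                currentWindow = window;
                count = 0;
            }
            count++;
        }
        // End of input: emit the last open window.
        if (currentWindow >= 0) {
            System.out.printf("[%d,%d) = %d%n", currentWindow, currentWindow + windowSize, count);
        }
    }
}
```

Because the input is sorted by event time, a window change means the previous window can never receive another record, which is exactly the guarantee the Long.MAX_VALUE watermark encodes on the streaming side.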

Execution Model

The second part we look at is the execution model comparison. The execution model comparison actually covers scheduling, shuffle, checkpointing, failure recovery, and sort-merge join and aggregation. For stream and batch processing, both of them have their own scheduling, shuffle, and checkpointing. The difference is that on the streaming side, all the tasks need to be up and running at the same time, while on the batch processing side, the tasks can be executed one by one; even if you have only one task manager and one CPU, it’s going to work as time-shared execution. For the shuffle, on the streaming side, it’s a peer-to-peer pipelined shuffle.

On the batch processing side, it’s a blocking shuffle, in the sense that you put your shuffle result onto disk first, and then the next task reads it from disk. For checkpointing, stream processing has interval-based checkpointing. On the batch processing side, we don’t have checkpointing at all; it’s disabled. For failure recovery, on the stream processing side, we just recover from the checkpoint. On the batch processing side, we recover to the shuffle boundary. For stream processing, at this point, we cannot support sort-merge join or aggregation, as you can imagine, because global sort is not supported. There’s no Adaptive Query Execution, AQE, in the picture at this point either. On the batch processing side, we have both. That’s the execution model comparison. Again, those differences are for efficiency, not for correctness.

Just to explain this a little bit more: for scheduling and shuffle, on the streaming side, all the tasks must be up and running at the same time, and shuffle data does not go to disk, it just goes over the network and is sent to the downstream task managers. On the batch side, if you have just one task manager, it’s going to store its intermediate result on disk, and the next task comes along and just reads from disk. In Flink, that’s the blocking shuffle. For checkpointing and failure recovery, let’s assume that I have a job that is doing checkpointing, basically storing all the state in a remote checkpoint store, and one of the task managers fails. What happens is that everybody will just rewind to the last successful checkpoint and continue processing. This is a global failover, sort of. All the connected tasks will be restored from the remote storage.
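
On the streaming side, that interval-based checkpointing is an explicit job setting; a minimal sketch (the interval and storage path are placeholders):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingCheckpointConfig {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a consistent snapshot of all operator state every 60 seconds.
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        // Placeholder path; on failure, all connected tasks rewind to the last
        // completed checkpoint stored here and reprocess from there.
        env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");
    }
}
```

A batch job has nothing equivalent to configure; as described next, it recovers by recomputing from the persisted blocking-shuffle boundary instead.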

On the batch side, what happens is that if one of the tasks has failed while the task manager is still there, I can just restart that task, because all its input data is stored on disk. If the whole task manager is lost, I actually need to rerun every task that has run on this task manager, because all the intermediate state, all the intermediate shuffle results, will be lost when this task manager is gone. In order to rebuild all the shuffle results, I need to rerun all the tasks that used to be running on this task manager. This assumes that we’re not using remote shuffle; if you’re using remote shuffle, then the story will be a little bit different. There are more differences between the streaming and batch execution models.

For example, sort-merge join is not supported on the streaming side, because it needs a global sort. AQE is based on the stats of the shuffled, finite dataset, which are not available on the streaming side either. For example, in batch, what AQE can do is look at the intermediate result size and then automatically adjust the downstream parallelism so that it can run with proper resources, but this is not available on the streaming side. Also, there’s speculative execution, which is for batch only, and there are different optimization rules for streaming and batch, for example, join reordering.

The Future of Stream and Batch Unification

The last part is about the future of stream and batch unification. There are a few things that we want to do. One thing we definitely want to do is to be able to run streaming and batch stages in the exact same job. At this point, you can only submit a job either as a streaming job or as a batch job; you cannot have both streaming and batch mode running in the same job. It’s going to be very useful if you think about it. If I’m processing a whole dataset, I can process the historical data using batch mode very efficiently, and once I reach the live data, I can switch to streaming mode and compute on the streaming data very efficiently as well. That’s going to combine the best of both worlds. We also want to have a better user experience. Currently, there are still some semantic differences between streaming SQL and batch SQL in Flink, for example.

One thing we see a lot is that on the streaming side, your data may be stored in some KV store or online service, so you can make an RPC call to look it up, but on the batch side, such an online service doesn’t exist. Your data might be stored in some batch dataset, so this causes the semantics, or the query you’re going to write, to be a little bit different. The last thing we want to achieve is to maybe also bring ad-hoc queries into the picture.
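
To illustrate that streaming-versus-batch query difference, here is a hedged Flink SQL sketch (all table and column names are made up, and it assumes a TableEnvironment named tableEnv with these tables already registered, the orders table having a processing-time attribute proc_time): the streaming version enriches each record with a lookup join against the KV-backed table, while the batch version joins against a snapshot dataset.

```java
// Streaming: per-record lookup into a dimension table backed by a KV store / online service.
tableEnv.executeSql(
        "SELECT o.order_id, d.user_name " +
        "FROM orders AS o " +
        "JOIN user_dim FOR SYSTEM_TIME AS OF o.proc_time AS d " +
        "  ON o.user_id = d.user_id");

// Batch: the same enrichment is usually written as a plain join against a snapshot dataset.
tableEnv.executeSql(
        "SELECT o.order_id, d.user_name " +
        "FROM orders AS o " +
        "JOIN user_dim_snapshot AS d " +
        "  ON o.user_id = d.user_id");
```

Closing that kind of gap, so the same query text can serve both modes, is part of the user-experience work mentioned above.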

So far, we have been talking about stream and batch computing unification, but if you think about the analytics world, there’s actually a third pattern, which is the ad-hoc query. The ad-hoc query actually has the exact same computing model as batch processing, but the execution model is different, because for ad-hoc queries, the execution model tries to optimize for end-to-end response time rather than throughput. That’s what causes the difference. We can apply the same approach: you compare the computing model and you compare the execution model, and you can actually unify them.

Let’s take a look at the current status of stream and batch unification in the industry. At this point, the Flink community has spent many years building this feature, and if you go look, you will see the batch performance is really good. It’s almost on par with open-source Spark. It’s at the point where we’re seeing early majority adoption for the computing engine. For the rest of the domains that we mentioned in the data infrastructure world, storage, control plane, and so on, I think it’s still in the early adopter phase.

 
