AI for Food Image Generation in Production: How & Why

News Room | Published 27 August 2025, last updated 9:03 AM

Transcript

Amerkhanov: I'm going to start with a story, the story of a project that was built some time ago. It was an ordinary data science day. I was doing my data science work, which is explaining to our business why we can't reach an accuracy of 200%, because of the mathematics itself. Then I received a message from my product manager stating that we were starting a new generative AI project. I thought, that sounds cool. How did they manage to allocate the resources for that, given our limited capacity overall? Then a deeper question arose in my head: why do we actually need a generative AI project? That is why the name of the talk is "AI for Food Image Generation: How and Why".

I have around 10 years of experience in software development, data analytics, machine learning, data science, and AI. I'm currently leading the development of AI solutions at Delivery Hero. Delivery Hero is a global food delivery company, with entities on 4 continents, in more than 70 countries, and almost 50 billion in GMV. Our headquarters is in Berlin, Germany. These are some of the brands that you might be familiar with. For instance, Talabat had an IPO last year.

Outline

We're going to cover the following topics during this talk: why image generation is important for Delivery Hero, how we built the MVP, and how we made it scalable. We're also going to talk a bit about the safety system, the image quality, and about fine-tuning the Stable Diffusion model, how we optimized it, and how we made it cost efficient.

Why Is Image Generation Important for Delivery Hero?

Given the premise that we have to come up with a project on generative AI, the first question we should address is: why do we do this? We formulated the business hypothesis the following way: the quality of the menu content positively influences conversion rates, which is obvious. In order to check and confirm this premise, we conducted data analysis, figuring out that quite a lot of products don't have either an image or a description.

Then the next question was, should we go with text generation to cover descriptions? That would have been pretty easy, but descriptions were not a blocker for customers to purchase the products. We had to take a look at the images, and only 14% of products were bought without images. That led us to the conclusion that the highest impact from doing generative AI would probably come from generating images. That's how we started. I gathered our team, we brainstormed, and we came up with two solutions. One was pretty straightforward and simple. The other was sophisticated but elegant. I asked my product manager which of those options we should go forward with, because we weren't able to decide. Luckily, they brought some clarity in terms of strategy and the overall direction we should take. They said, we're going to do both of them.

The first solution is pretty straightforward: text-to-image generation. We have a prompt similar to the one represented on the slides. We provide the information regarding the product name, menu category, description, some dish-related attributes that were extracted with an additional text classification model, and the information regarding the material. The material comes from each vendor individually, based on their preference for the background they would like to have in their menu. The second solution was more interesting in terms of what we do. We had to detect the right image within the set of images of each vendor. Then the second step was to detect the food object on this image, mask it, and do the inpainting based on the information we have about the product.
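To make the two flows concrete, here is a minimal sketch using the open-source diffusers library. The model IDs, file paths, and prompt template below are illustrative assumptions, not the production setup described in the talk.

```python
# Sketch of the two generation flows, assuming the open-source diffusers library.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting
from diffusers.utils import load_image

device = "cuda"

# Flow 1: text-to-image from product metadata.
t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to(device)
prompt = (
    "Professional food photo of {product_name}, category: {menu_category}, "
    "{description}, served on a {material} background"  # hypothetical template
).format(
    product_name="margherita pizza",
    menu_category="pizza",
    description="tomato sauce, mozzarella, basil",
    material="dark wooden table",
)
image = t2i(prompt=prompt).images[0]

# Flow 2: image-to-image via inpainting: an existing vendor photo plus a mask
# of the detected food object (e.g. produced by an object detector).
inpaint = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to(device)
vendor_photo = load_image("vendor_photo.png")   # hypothetical paths
food_mask = load_image("food_object_mask.png")
result = inpaint(prompt=prompt, image=vendor_photo, mask_image=food_mask).images[0]
```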

How the MVP Was Built

For the MVP, we were using the GCP platform, with the backend on Cloud Run, and Postgres, Pub/Sub, and GCS for storing the messages. The most interesting part here is Vertex AI Pipelines, which helped us a lot with building pipelines fast. What is Vertex AI Pipelines? It is a wrapper on top of Kubeflow, which is a wrapper on top of Kubernetes, which is a wrapper on top of Docker. This level of abstraction allowed us to quickly build the orchestration for our models. At that moment in time, we were using OpenAI DALL·E, one of the most advanced models at the time. The pipeline looked like this, with the rectangles representing the components.

For each of the components, we had a separate environment, because, for instance, the object detection model, Grounding DINO, required its own drivers and we were running it on a GPU, while the image selection component didn't require such hardware, so we could run it in a lighter environment. It started with data extraction from our data warehouse, then went through the generative flow, and the final outputs were written to Postgres and GCS. That's how the outcomes of the MVP looked. We were generating a set of Google Forms for each restaurant and sending them to the content teams, so that they could go through these Google Forms and select the most appropriate images based on their expertise about each of the regions.
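For illustration, a component-per-step pipeline like this can be sketched with the Kubeflow Pipelines SDK that Vertex AI Pipelines executes. The component names, container images, GCS paths, and accelerator settings below are hypothetical, not the production pipeline.

```python
# A minimal sketch of a Vertex AI / Kubeflow pipeline with per-component
# environments; heavy components get their own image and GPU, light ones do not.
from kfp import dsl

@dsl.component(base_image="python:3.10")
def extract_products(output_path: str):
    # pull products without images from the data warehouse (stubbed here)
    ...

@dsl.component(base_image="europe-docker.pkg.dev/example/gen-ai/grounding-dino:latest")
def detect_food_objects(input_path: str, output_path: str):
    # heavy component: runs the object detector, needs its own CUDA image
    ...

@dsl.pipeline(name="food-image-generation")
def food_image_pipeline():
    extract = extract_products(output_path="gs://example-bucket/products.json")
    detect = (
        detect_food_objects(
            input_path="gs://example-bucket/products.json",
            output_path="gs://example-bucket/detections.json",
        )
        .set_accelerator_type("NVIDIA_TESLA_T4")  # GPU only where it is needed
        .set_accelerator_limit(1)
    )
    detect.after(extract)
```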

How Does the Stable Diffusion Model Work?

That was the moment when we realized there was positive traction for our project: the content teams were interested in moving forward, and there was a need to roll this out to other entities. We came to the understanding that we needed to host our model internally, within our infrastructure, in order to reduce costs. To explain how we did that, let's talk a bit about how Stable Diffusion models work, because that will help with understanding how we served it. Whenever we talk about the Stable Diffusion architecture, there are mainly three components. The Variational AutoEncoder, which is, in simple words, a way to map images to a vector space. The U-Net, the second component, which is used as a denoising function; what a denoising function is, I will explain on the next slide. And OpenAI CLIP, Contrastive Language-Image Pre-Training, for mapping images and text to the same latent space. This is the architecture, with a brief explanation of the diffusion process and how it works.

We have two stages in the diffusion process, the forward pass and the backward pass. We start with the initial image, which has some resolution and three channels: blue, green, and red. Our image is represented here as pixel space, the red rectangle on the left. Then, using the Variational AutoEncoder, we map it to the latent space, a vector space, and apply Gaussian noise with the diffusion process, which is outlined here as the forward pass. After that, we do the denoising process, and that is why we need the U-Net, which is a network trained to predict where the noise was applied. After predicting where we applied the noise in the previous stage, we subtract it and map the vector representation of the image back to pixel space to get the final image in the initial resolution. The rightmost part here, the conditioning block, outlines any additional modalities that we would like to condition our generative process on.

For instance, we can condition on a classification or on embeddings. We can condition on audio. In vanilla Stable Diffusion, we just condition on the prompt, and that is why we need CLIP here: it allows us to map images and text to the same latent space, so that the denoising U-Net also takes into account the information in the textual inputs we provide. To visualize how this works: we have some noise as an input, and we usually have several steps in the denoising process. On each of those steps, we try to predict the noise and subtract it from the latent space, or from the pixel space, depending on the model, in order to converge to the final representation of the object we are trying to generate.
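Put in code terms, the conditioned denoising stage looks roughly like the loop below. It follows the diffusers naming conventions but is a schematic sketch rather than the exact implementation.

```python
# Schematic conditioned denoising loop: at each step the U-Net predicts the
# noise in the current latent, conditioned on the CLIP text embedding, and the
# scheduler subtracts it.
import torch

def denoise(unet, scheduler, text_embeddings, latent_shape, num_steps=50, device="cuda"):
    # start from pure Gaussian noise in latent space
    latents = torch.randn(latent_shape, device=device)
    scheduler.set_timesteps(num_steps)
    for t in scheduler.timesteps:
        # U-Net predicts where noise was applied, given the text conditioning
        noise_pred = unet(latents, t, encoder_hidden_states=text_embeddings).sample
        # the scheduler removes the predicted noise for this step
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents  # decode with the VAE afterwards to get back to pixel space
```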

Talking a little bit more about the Contrastive Language-Image Pre-Training model, because we had some issues with it. Let's briefly recap what it is. It is trained in such a way that we have a separate transformer to vectorize the images and a separate transformer to vectorize the textual inputs. We have vectors, and we multiply them to get their similarity matrix. On the diagonal of this matrix, we optimize the training so that we decrease the cosine distance between the vectors of matching pairs, such that the model learns not a specific modality, but rather a semantic representation of, for instance, a burger. Why am I saying this? Because we faced some unexpected problems with CLIP while we were trying to host Stable Diffusion in our infrastructure.
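The training objective can be sketched in a few lines. This is the standard CLIP-style contrastive loss, shown for illustration; the image and text encoders are assumed to be any pair of transformers producing same-sized feature vectors.

```python
# CLIP-style contrastive loss: encode images and texts, normalize, build the
# similarity matrix, and push the diagonal (matching pairs) up relative to the rest.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # logits[i, j] = similarity between image i and text j
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(len(logits), device=logits.device)
    # symmetric cross-entropy: the i-th image should match the i-th text and vice versa
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```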

The exact problem was related to non-Latin languages in many of our products: for instance, Chinese or Arabic product names, for some reason, weren't generated well. That was weird, because CLIP was actually trained on around 50 different languages, so we would expect it to work perfectly, but it didn't. The solution was quite straightforward. We just translated every non-Latin product name to English using the Google API, and it worked well.
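In code, the fix amounts to a translation step before prompting. The snippet below is a minimal sketch assuming the Google Cloud Translation v2 client and a rough Latin-script heuristic; the production detection logic may differ.

```python
# Translate non-Latin product names to English before building the prompt.
import re
from google.cloud import translate_v2 as translate

translate_client = translate.Client()
# rough heuristic: anything outside basic ASCII and Latin Extended ranges
NON_LATIN = re.compile(r"[^\x00-\x7F\u00C0-\u024F]")

def normalize_product_name(name: str) -> str:
    if NON_LATIN.search(name):
        # e.g. Arabic or Chinese product names are translated before prompting
        return translate_client.translate(name, target_language="en")["translatedText"]
    return name
```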

The second problem was a little more intricate. It was about the context length being limited to 77 tokens, which was a blocker for generating some of the products, because sometimes we had quite large prompts with many specific details regarding the positioning, coloring, lighting, the product context, and the ingredients of the product. That goes beyond 77 tokens, and we had to deal with it. Fortunately, there is a way, implemented in a library called Compel, that we can use for large prompts. How does it work? It chunks the initial textual prompt into several inputs, vectorizes each input separately, and then joins them together, so that we can propagate the result further to the model to generate.
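A minimal sketch of that workaround with the compel library; the base model ID and the prompt are placeholders.

```python
# Handling prompts longer than CLIP's 77-token window with compel: the prompt is
# split into chunks, each chunk is encoded, and the embeddings are concatenated
# and fed to the pipeline instead of the raw text.
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

compel = Compel(
    tokenizer=pipe.tokenizer,
    text_encoder=pipe.text_encoder,
    truncate_long_prompts=False,   # keep everything beyond 77 tokens
)

long_prompt = (
    "professional food photo of ... (detailed positioning, coloring, lighting, "
    "product context, ingredients) ..."
)
prompt_embeds = compel(long_prompt)
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=40).images[0]
```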

System Scalability

After overcoming all of these CLIP-related problems, we scaled up our infrastructure, and the decision was made to migrate part of the services, part of our infrastructure, to AWS, because some of the services were cheaper there. That was a mistake; I'll explain why later. When we faced the problem of having to integrate several clouds, it simply took longer than we had initially planned.

Finally, we came up with a nice way to orchestrate the CI/CD process, because we were serving separate computer vision models on separate clouds. Depending on the cloud, our CI/CD pipeline with GitHub Actions was shipping the relevant containers, with the relevant models, to the specific cloud we were generating for. That's how the pipeline looked now. The generation part was migrated to AWS.

Depending on the flow, whether it was text-to-image or image-to-image, we had different routing of the same context, which was propagated either through one model or through several models, and each of the models was hosted on its own GPU hardware within the Elastic Kubernetes Service, autoscaled using KEDA. KEDA is an instrument for message-based autoscaling. Depending on the load, our cluster scaled from 0 to 36 nodes and back to 0. This already allowed us to cover around 100,000 images generated daily.

On the left side, you can see our rollout to the final restaurant. Every restaurant owner could go to their application, choose a product that didn't have any images, see the suggestions, and pick the most relevant one based on how they wanted their own menu to be represented. On the right, you can see examples of products that were accepted and uploaded to production and were already contributing to the quality and the user experience of our customers. That would have been it, if we hadn't had some interesting cases.

After rollout, we were interested in incrementally increasing the quality of the model. The question from product was, how can we actually tell whether a new version of the generative model we deploy is better than the previous one? I thought, this is a machine learning problem; we're just going to calculate the accuracy using sklearn. Then I realized that there is no ground truth. This is the general problem of all generative AI solutions, independently of whether we generate images or text: how do we measure them? I said, just give me a second, I will figure out how to do this. Three days later, I came back to them and said, I have an idea. The idea was that we had product guidelines regarding how our images should look.

Those guidelines were based on the positioning of the image, the composition, the requirements, the coloring, and also the content of the images. Preparing, for each of those aspects, a separate computer vision model that classifies whether the image is aligned with our guidelines, and measuring the overall probability, gives us a score for each image. The dataset for this benchmarking framework consisted of 1,000 product descriptions. Each version of the model we iterated on was prompted with the same prompt, because we were testing the actual model and not the prompt. We generated 1,000 images for the 1,000 preselected products, stratified by entities and by categories. That's how we get the final score. Based on this, one of the best configurations was a version of Stable Diffusion, Juggernaut V9, with an IP-Adapter.
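A sketch of the guideline-based benchmark idea: one classifier per guideline aspect returns the probability that an image complies, and the overall score is an aggregate over a fixed, stratified prompt set. The classifier stubs and weights below are placeholders, not the production models.

```python
# Guideline-based scoring: weighted average of per-aspect classifier outputs,
# computed over the same prompt set for every candidate model version.
from statistics import mean

def composition_score(image) -> float: return 0.5  # placeholder classifier stubs,
def coloring_score(image) -> float: return 0.5     # each returning P(image follows guideline)
def content_score(image) -> float: return 0.5

WEIGHTED_CHECKS = [(0.4, composition_score), (0.3, coloring_score), (0.3, content_score)]

def image_score(image) -> float:
    return sum(weight * check(image) for weight, check in WEIGHTED_CHECKS)

def benchmark(generate, prompts) -> float:
    """Score one model version: `generate` maps a prompt to an image.
    The same stratified prompt set is reused for every version under comparison,
    so the comparison measures the model and not the prompt."""
    return mean(image_score(generate(prompt)) for prompt in prompts)
```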

The second question after this was, how do we optimize this Stable Diffusion model for inference? Because the actual reason for migrating to a self-hosted generative model was to optimize it for cost efficiency in the end. The first step is to properly select the GPU, depending on the optimization objective. In our case, that was cost efficiency. We tried out several GPU configurations; these are just the three most interesting. As you can see, with the default parameters, vanilla Stable Diffusion was taking around three minutes to generate one image on an NVIDIA Tesla T4 GPU. With an A100, it was taking around half a minute. The A100 was one of the most computationally efficient GPUs at the time.

However, the cost per A100 was drastically higher compared to the T4 and L4. When we calculated the cost per generated image, we had a clear winner, the L4, which gave us a 50% decrease in cost per generated image. It was already lower than the cost we were paying for OpenAI DALL·E, which was 2 cents per image; in this case, we were at 1.6 cents per image. At the expected generation volumes, measured in tens to hundreds of millions of images, that corresponded to quite a good amount of money.
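The GPU comparison reduces to simple arithmetic: cost per image equals the hourly GPU price times seconds per image divided by 3600. In the sketch below the hourly prices and the L4 time are placeholder figures for illustration; only the approximate T4 and A100 per-image times come from the talk.

```python
# Cost-per-image arithmetic used to compare GPUs; prices are placeholders.
def cost_per_image(usd_per_hour: float, seconds_per_image: float) -> float:
    return usd_per_hour * seconds_per_image / 3600

candidates = {
    # name: (placeholder hourly price in USD, approx. seconds per image)
    "T4":   (0.35, 180),   # ~3 minutes per image (from the talk)
    "A100": (3.00, 30),    # ~0.5 minutes per image (from the talk)
    "L4":   (0.80, 60),    # assumed time; the talk reports ~1.6 cents/image on L4
}

for name, (price, seconds) in candidates.items():
    print(f"{name}: ~${cost_per_image(price, seconds):.4f} per generated image")
```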

The second step, after we had chosen the right hardware, was how to actually serve the model, because there were many ways to serve it more optimally. In our case, the direct metric to optimize was the time it takes the model to generate one image; we were targeting a decrease in time per image. This is Stable Diffusion XL with the vanilla parameters, the default configuration. The first and most obvious choice is to migrate from float32 to float16, which was already four times more efficient. However, while doing these optimizations, we are highly interested in preserving the best quality we can, because with quantization and other optimization methods we can sometimes unintentionally corrupt the quality. That's why we have to check whether the image is malformed and whether the quality degrades.

The second step was replacing the Variational AutoEncoder with its light version, a Tiny Variational AutoEncoder. That brought us an additional 0.5 seconds per image. There is also a configuration parameter called the Classifier-Free Guidance Scale, which basically determines how much the image generation process is conditioned on the input textual prompt. If we disable it somewhere in the middle of the generation process, that brings us an additional 0.2 seconds. Also, one of the most important parameters is the number of generation steps, the forward and backward passes. In vanilla Stable Diffusion, the default number is 50. We tried reducing this number. With 40 steps, even though you can see some slight differences, because the denoising process converged to a different image without the additional 10 steps, in terms of time that brought us an additional 3.8 seconds.

Finally, there is the PyTorch torch.compile function, which allows us to precompile the model so that inference is more optimized. We reached 11.2 seconds per image. Talking about the numbers, that was an 85% decrease in compute time. What about the final numbers? That was around eight times lower in cost compared to DALL·E 2. We reached a level of less than 0.3 cents per generated image. On a volume of tens of millions, that corresponds to hundreds of thousands or even millions of euros in savings, which is pretty good.
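A minimal sketch of the optimization stack described above, using diffusers: fp16 weights, a tiny VAE, fewer denoising steps, and torch.compile on the U-Net. The model IDs are the public SDXL checkpoints, used here for illustration rather than the exact production configuration; disabling classifier-free guidance mid-generation can additionally be done via a step-end callback, omitted here for brevity.

```python
# Inference optimization sketch: fp16 + tiny VAE + fewer steps + torch.compile.
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderTiny

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,          # float32 -> float16: the single biggest win
).to("cuda")

# Tiny VAE (TAESD) instead of the full Variational AutoEncoder for a faster decode
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
).to("cuda")

# Pre-compile the U-Net for faster repeated inference
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe(
    "professional food photo of a margherita pizza",
    num_inference_steps=40,             # down from the default 50
).images[0]
```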

Exploring the Dimension of Stable Diffusion Model Fine-Tuning

That was an ordinary day. I was just sitting doing some data science work, working with big data; by big data, I mean writing lots of emails. Then another message dropped from my product manager: do we have any problems with local dishes? I thought, the model worked quite well with burgers and pizzas. Maybe it's not that bad.

First of all, let's take a look at the local dishes that we have. Stuffed pigeon, not what I expected. There were even more exotic examples, like a salad with ant eggs, or a soup with frogs. I didn't even know those dishes existed. I thought, maybe it's going to work out of the box without us needing to do anything. Then: what's that? If I ever saw this in a food delivery app, I would never come back. That was clearly a problem we had to solve. Or, talking in corporate language, it was an opportunity for us to explore the dimension of Stable Diffusion model fine-tuning. Whenever we talk about Stable Diffusion fine-tuning, we usually have two options. The first one is LoRA fine-tuning, which works perfectly well when we have to train the model on a new concept, like a new local dish.

However, in our case, we had thousands of local dishes. That would mean training thousands of different LoRAs and also serving them, which wasn't that easy; it would have been pretty complex in terms of the infrastructure we would have to support. The other option is to go with full model fine-tuning. These are some of the most popular frameworks used for that. We chose OneTrainer because it's one of the most stable ones, and it also has a nice UI for specifying the configuration parameters of the training process. How does the training process look? Only two steps: dataset preparation and configuration. Dataset preparation in our case meant extracting the relevant data from the warehouse and selecting the appropriate categories, the local dishes we were interested in. Filtering out duplicates. Also filtering out inappropriate images, because we were interested in training our model only on the best-quality images.

Finally, additional recaptioning of the images with a multimodal language model. Then the training configuration. There are several hundred parameters that you specify when training a diffusion model. One of them is image augmentation: the same augmentations we do during computer vision model training, like image rotation, mirroring, and changing the colors, in order to make the dataset a little more diverse. Other parameters are pretty standard for deep learning models: learning rate, scheduler, warm-up steps, epochs, optimizer, and so on.
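A sketch of the dataset-preparation steps described above: deduplicate, filter out low-quality images, then recaption. The perceptual-hash deduplication and the injected quality and caption functions are illustrative assumptions, not the exact production tooling.

```python
# Dataset preparation sketch: dedupe via perceptual hashing, drop low-quality
# images, and recaption the survivors with a (caller-provided) multimodal model.
import imagehash
from PIL import Image

def prepare_training_set(image_paths, caption_fn, quality_fn, min_quality=0.8):
    seen_hashes = set()
    dataset = []
    for path in image_paths:
        img = Image.open(path)
        # near-duplicate filtering via perceptual hashing (an assumed choice)
        h = imagehash.phash(img)
        if h in seen_hashes:
            continue
        seen_hashes.add(h)
        # keep only the best-quality images for fine-tuning
        if quality_fn(img) < min_quality:
            continue
        # recaption with a multimodal model, e.g. a vision-language captioner
        dataset.append({"image": path, "caption": caption_fn(img)})
    return dataset
```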

Training the Stable Diffusion model on an A100 took several days. We were quite impressed with the results, because it was generating each selected local dish quite well. The tradeoff was that it lost the concept of some of the other dishes; for example, pizzas became a little less appetizing. Since we were hosting several versions of the models in our production inference cluster, it wasn't a problem for us to serve several versions and route the requests based on the product meta-attributes.

Which Images Are AI Generated?

Now let's play a game. Let's figure out how well your pattern recognition is working. We're going to detect which of these images are AI generated, and how good a job I did. Do you think the first one is AI generated? How about the second one? The third one? The pizzas here are generated. Let's go for another round. Do you think the first one is generated? How about this one, do you think this is AI generated?

Why An AI Safety System?

I was working at my desk, doing the data science job, pretending to be smarter than others, and so on. Then I received an email from my product manager which was actually prompting me with another round of a quiz. Any ideas what that could be? I took my chance. I asked if this was chicken with potatoes. I was right. The model was right when it was generating chicken with potatoes, because this is literally a chicken with potatoes. The problem was with prompt following. Given that a model should generate chicken with potatoes, sometimes, statistically around 1 in 10,000 generations, we were detecting these weird cases. We had to come up with an AI safety system to eliminate them, because we were risking reputation: for our content teams, for the restaurants we partner with, and also for the final customers. The safety system consisted of several components.

The left image here, you can imagine this is chicken soup, obviously. The first component of this safety system was creature and people detection. We were using a customized version of the Recognize Anything model. In the vanilla configuration it has around 5,000 tags; it's a multi-label computer vision model. After some customizations, it worked perfectly well for our case without any fine-tuning. The second problem was that generated text was sometimes either malformed or contained gibberish like "pepperoni-roni". We didn't want to have this in our final output. We were using EasyOCR for optical character recognition to detect text. If any text was detected, we excluded the image from the final output. Also, remember, we have product guidelines: the product should be in the center of the image, and there should be a 5% margin between the product and the borders.
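A minimal sketch of the text check with EasyOCR: if any text is detected in a generated image with reasonable confidence, the image is excluded. The confidence threshold here is an illustrative assumption.

```python
# OCR-based check: drop generated images that contain any detected text.
import easyocr

reader = easyocr.Reader(["en"])  # loads the detector/recognizer once

def contains_text(image_path: str, min_confidence: float = 0.4) -> bool:
    detections = reader.readtext(image_path)   # list of (bbox, text, confidence)
    return any(conf >= min_confidence for _, _, conf in detections)

# images where contains_text(...) is True are excluded from the final output
```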

Using the object detection model Grounding DINO, we got a bounding box and measured the margins. If they were less than 5%, or if the bounding box was touching the borders, that was also a sign that something had probably gone wrong during generation. The final part was related to coloring. The coloring part was actually one of the hardest, because it decomposes into several sub-problems, like contrast, exposure, saturation, and two additional ones.
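The composition check itself is simple geometry once the detector has produced a bounding box. The sketch below assumes a pixel-coordinate box format and is illustrative.

```python
# Margin check: the detected product must keep at least a 5% margin to every border.
def has_valid_margins(box, image_width, image_height, min_margin=0.05):
    """box = (x_min, y_min, x_max, y_max) in pixels (assumed format)."""
    x_min, y_min, x_max, y_max = box
    return (
        x_min / image_width >= min_margin
        and y_min / image_height >= min_margin
        and (image_width - x_max) / image_width >= min_margin
        and (image_height - y_max) / image_height >= min_margin
    )
```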

One of the most frequent problems we observed was over-blurry, overly smooth images, and also low-contrast ones. In order to detect those cases, we were calculating the Laplacian, which is a pixel-wise gradient. Low values of the Laplacian correspond to the image being too smooth, meaning it either has low contrast or is probably over-smoothed. We weighted and averaged these components, and each image was assigned a score. For some of the images, the score is pretty obvious. With a preselected threshold, some of the images were excluded from the final output. This is how we ensure that no one except the people working with the generated images ever sees the nightmares we generate.
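A sketch of the blur/contrast check and the final weighted score. The variance of the Laplacian is a standard sharpness proxy; the weighting scheme and thresholds here are illustrative, not the production values.

```python
# Sharpness proxy plus a weighted combination of the safety-system components.
import cv2

def sharpness_score(image_path: str) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # low Laplacian variance => too smooth / low contrast
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def safety_score(checks_with_weights) -> float:
    """checks_with_weights: list of (weight, score in [0, 1]) from the components
    described above (creature/people, OCR, margins, coloring)."""
    total_weight = sum(w for w, _ in checks_with_weights)
    return sum(w * s for w, s in checks_with_weights) / total_weight

# images whose safety_score falls below a preselected threshold never reach
# the content teams or the customers
```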

What About the Results?

This is the time to wrap up the whole talk and say a little about the final results we have achieved. Talking numbers: up to this moment, we have generated around a million images, and more than 100,000 products now have generated images on our platforms. How does this relate to the initial hypothesis we had regarding conversion rates: does it help or not? After conducting an A/B test, we observed a 6% to 8% increase in conversion rate from menu to cart when we add an image to a product. Yes, the hypothesis was confirmed.

Key Takeaways

What are the lessons learned after this interesting and sometimes scary journey? The first takeaway: I would highly recommend avoiding a cross-cloud setup unless you clearly understand what you're doing, or unless you have to. As I've said, in our case we experienced some problems integrating the communication between those two clouds, which slightly delayed some of our deliverables.

The second takeaway: if you're planning to run inference on millions or billions of data points and you're using a fairly heavy model, whether it's a heavy classification model or some GenAI model, every hour and every day spent on model optimization contributes to hundreds of thousands, if not millions, of pounds saved. That means a positive return on the time invested in optimization. The final takeaway, and if there's anything you're going to remember from this talk except the cringy images, let it be this one.

First, automate the way you measure the quality of your generative AI solution, and only then work on fine-tuning the actual generative part. Because if you don't measure the quality of your system, how are you going to optimize it? Again: automate, talk to your business, understand what the most important things are for your generative AI solution to solve and whether they can be coded or automated, and only then work on fine-tuning.

Questions and Answers

Participant 1: You mentioned briefly that you couldn't train your model on non-Latin languages. Could you explain a bit more why that was the case? Because the two languages you mentioned, Mandarin Chinese and Arabic, are widely spoken, so surely you should have enough training data to train a model on.

Amerkhanov: Why weren't we able to train a model on non-Latin languages, on Mandarin and Arabic, given that we should have enough data for this? That was the case with the CLIP part of the Stable Diffusion model specifically. CLIP itself is already trained on 50 different languages, so it's multilingual and should work fine, but the behavior we observed when it was used within the whole diffusion process was that the cases from non-Latin language groups weren't working that well. We could have tried fine-tuning the whole model, or the CLIP part of the model, on non-Latin languages, but it was just faster and easier to use translation.

Participant 1: I'm just curious about the legal ramifications. If you have a customer who is unhappy because they think the dish they received in the restaurant doesn't actually match the image they saw on your website, who's liable in that case?

Amerkhanov: Who is liable when AI-generated images are uploaded to our platform and a customer questions whether this is generative AI and whether it matches the product? This is a great question. Why? Because one of our major focuses is customer experience; that's why the whole project was developed. First of all, we mark all AI-generated images as generated, and they serve for reference purposes only. That's the first point.

The second point: our content teams and vendors make sure that the images selected for production correspond to how these products should look. Moreover, with the rapid development of AI, some countries now have regulations that whenever you post generative AI content, whether images or text, you have to mark it as AI generated. We make it clear that those images are for reference purposes only.

Luu: Can they get a refund if they wanted to?

Amerkhanov: You’ll have to deal with our chatbot.

Participant 2: The evaluation procedures that you used, for example, all those models that check if there are birds or people in the food, did that add a lot of cost to the process, the checking itself? Was there runtime checking during inference that would sometimes say, this is bad, let's restart? Did that add a lot of cost in general?

Amerkhanov: The question is whether the quality-related models we were serving within our cluster contributed much to the overall expenses. To address it, let's talk about the compute time it takes to run inference for the generative part and for the other parts of the system. For the generative part, as I've mentioned, it was taking on average around 12 seconds to generate one image, while all the other parts of the system, all the other models combined, were taking less than half a second. Translating this to GPU cost, even considering that we were using different hardware with different prices, the overall cost came from the generative part. That's why whenever we talk about optimization, we're mostly talking about optimizing the generative part.

Participant 3: During the talk, you mentioned that you worked on reducing the time to generate images, and that the system has been generating lots of images per day. I believe by now maybe millions of images have already been generated from text. Are you also reusing some of those images for new requests, or are you always generating new images from the text?

Amerkhanov: Since we're generating quite a lot of images, some of them obviously will not be used for the products, so the question is whether we can save some compute and money by reusing the images that weren't used previously. This is in our pipeline as one of the measures to optimize compute time, since we could just use one of the pre-generated images instead of taking additional time to generate a new one. So yes, but we haven't implemented it yet.

Participant 4: My question is about the architecture of the model. You specified that the model is composed of three parts: VAE, U-Net, and CLIP. What's the difference between this architecture and the vanilla one, and other architectures that generate images? The second question: when you fine-tune, do you fine-tune every single component or the whole architecture?

Amerkhanov: The question consists of two parts. The first part is, given the three components we have outlined, U-Net, VAE, and CLIP, what's the difference between them and the vanilla model? There's no difference; I was explaining how the vanilla architecture works. The second part is, when we fine-tune it, do we fine-tune the weights of all three components or only one of them? In most cases, when we talk about image generation fine-tuning, Stable Diffusion fine-tuning, we don't fine-tune CLIP. We also don't fine-tune the Variational AutoEncoder, because it already maps from pixel space to latent space and back; that's all we need, and that doesn't change regardless of which images and modalities we generate for. The only thing we fine-tune here is the denoising U-Net.

 
