Beyond the Gang of Four: Practical Design Patterns for Modern AI Systems

News Room | Published 15 May 2025 | Last updated 5:04 AM

Key Takeaways

  • AI design patterns are repeatable solutions for common problems in modern AI-driven software, saving teams from reinventing solutions. We group them into five buckets: Prompting & Context, Responsible AI, User Experience, AI-Ops, and Optimization Patterns.
  • To create effective AI outputs, you must provide effective guidance, either by crafting precise prompts and/or supplying relevant context (or external knowledge) directly within your prompt.
  • As part of building responsible AI systems, you must reduce hallucinations, prevent inappropriate or disallowed content, mitigate biases, and ensure transparency around AI decision-making.
  • Well-defined UX patterns help you, the developer, handle new types of interactions in a user-friendly way to keep users engaged and satisfied and promote transparency.
  • You must make smart optimization choices for your system, whether redirecting traffic away from unnecessarily powerful models, caching predictable responses, batching queries in near real time, or developing smaller specialized models.

Why Design Patterns for AI Systems?

The Gang of Four’s 23 object-oriented patterns shaped how an entire generation of developers designed software. In the 2010s, cloud computing introduced patterns like publish-subscribe (“pub-sub”), microservices, event-driven workflows, and serverless models that now power most cloud-based distributed systems.

Similarly, before the current AI boom, the machine learning community had already developed “ML design patterns”. When you build and deploy ML models, you face specific challenges, and patterns like Checkpointing, Feature Stores, and Versioning have become standard practice.

Why should you care about these patterns? They help you solve known problems in standardized ways. Instead of reinventing solutions, you use a shared vocabulary. When you say “Singleton”, “Pub-Sub”, or “Feature Store”, your team immediately understands your approach. This speeds up your development, reduces errors, and makes your systems easier to maintain.

Modern AI systems bring new challenges that neither classic software nor conventional ML patterns fully address.

For example, how do you guide model output and prevent misleading content? How do you build user experiences that help users understand, trust, and effectively use AI-powered applications? How do you manage agent interactions in multi-agent systems? How do you reduce compute costs to make your product sustainable?

Figure 1: An illustration of a well-architected modern AI-based system

To help develop a well-architected AI system as shown in Figure 1, many AI patterns have emerged across the industry. In this article, I won’t invent new design patterns. Instead, I’ll show you how existing patterns fit together. I organize key emerging patterns into five categories that build on each other as you scale your AI system.

  1. Prompting & Context Patterns: For crafting effective instructions and providing relevant context to guide the model’s output
  2. Responsible AI Patterns: For ensuring ethical, fair, and trustworthy outputs
  3. User Experience Patterns: For building intuitive interactions
  4. AI-Ops Patterns: For managing AI at scale
  5. Optimization Patterns: For maximizing efficiency and reducing cost

I specifically cover best practices for building user-facing AI applications using existing models, mainly accessed through API calls. While I focus on text-based interactions, you can also apply these patterns across multimodal applications. However, I deliberately don’t address model training, customization, hosting, or model optimization since these typically fall outside the workflow of developers using API-based AI models. I also don’t cover agentic AI systems or patterns for multi-agent interactions, as these topics deserve their own dedicated discussions.

Prompting and Context Patterns

Unlike traditional software, where you explicitly code system behavior, in modern AI systems, behavior heavily depends on the instructions and context you provide to large language models (LLMs) or large multimodal models (LMMs).  To create effective AI outputs, you must provide effective guidance, either by crafting precise prompts and/or supplying relevant context (or external knowledge) directly within your prompt.

Prompting might seem trivial at first. After all, you send free-form text to a model, so what could go wrong? However, how you phrase a prompt and what context you provide can drastically change your model’s behavior, and there’s no compiler to catch errors or a standard library of techniques. Creating prompts that reliably and consistently produce your desired behavior becomes difficult, especially as tasks grow more complex.

If you use prompting and context patterns effectively, you can improve the model’s reasoning, accuracy, consistency, and adherence to instructions. Equally important, you can create reusable prompts that generalize across models, tasks, and domains.

Let’s examine four specific prompting patterns that will help you standardize and refine your approach:

Table 1: Prompting Issues and When to Apply Each Pattern

Few-Shot Prompting Pattern

Few-Shot Prompting is one of the most straightforward yet powerful prompting approaches. Without examples, your model might generate inconsistent outputs, struggle with task ambiguity, or fail to meet your specific requirements. You can solve this problem by providing the model with a handful of examples (input-output pairs) in the prompt and then providing the actual input. You are essentially providing training data on the fly. This allows the model to generalize without re-training or fine-tuning.

Let’s look at a very simple example (using “GPT-4o-mini” via OpenAI’s API; you can run similar prompts locally using Ollama or Hugging Face Transformers):


PROMPT:
Classify the sentiment of the following sentences as Positive, Negative, or Neutral.
Sentence: "I absolutely loved the new Batman movie!"
Sentiment: Positive
Sentence: "The food was okay, nothing special".
Sentiment: Neutral
Sentence: "I'm really disappointed by the poor customer service".
Sentiment: Negative
Sentence: "The book was thrilling and kept me engaged the whole time".
Sentiment:

RESPONSE: (GPT 4o-mini)
Positive

Do today’s frontier models need these few-shot examples to complete their tasks correctly? No – they already excel at zero-shot learning and don’t need spoon-fed examples to understand basic instructions.

However, you can think of Few-Shot Prompting as a personalization tool, guiding the model to cater to your specific needs. You can:

  • Guide the model toward your expected output format, tone, or complexity.
  • Adapt models to your new scenarios or specialized tasks without fine-tuning.
  • Reduce hallucinations by anchoring your model’s output.
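The sentiment prompt shown earlier can also be assembled programmatically from example pairs, which makes the few-shot examples reusable across tasks. Here is a minimal sketch (the helper name and example data are illustrative, not from any real dataset); the resulting string can be sent to any chat or completion API:

```python
# Build a few-shot prompt from an instruction, labeled examples, and a query.
def build_few_shot_prompt(instruction, examples, query):
    """Assemble instruction, input-output example pairs, and the actual input."""
    lines = [instruction]
    for sentence, sentiment in examples:
        lines.append(f'Sentence: "{sentence}"')
        lines.append(f"Sentiment: {sentiment}")
    # End with the real query, leaving the label for the model to fill in.
    lines.append(f'Sentence: "{query}"')
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("I absolutely loved the new Batman movie!", "Positive"),
    ("The food was okay, nothing special.", "Neutral"),
    ("I'm really disappointed by the poor customer service.", "Negative"),
]
few_shot_prompt = build_few_shot_prompt(
    "Classify the sentiment of the following sentences as Positive, Negative, or Neutral.",
    examples,
    "The book was thrilling and kept me engaged the whole time.",
)
```

Keeping examples as data rather than hard-coded text also makes it easy to swap them per domain or A/B test different example sets.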

Role Prompting Pattern

In Role Prompting, you instruct the model to take on a specific persona or role, which helps contextualize its responses. For example: “You are a biology professor. Answer the question in detail”.

Without guidance, your model’s style or assumptions may not match your needs. Role prompting can steer your model’s output style (e.g., “Act as a JSON formatter”), tone (e.g., “Pretend you are an angry pirate for this response”), and/or content boundaries (e.g., “You are a legal assistant”). You can implement role prompting via system messages (in chat models) or by simply including a persona statement in your prompt. It’s as simple as that – just tell your model who it should be, as illustrated in the example below. 


PROMPT (without role prompting):
Explain Photosynthesis in 1-2 sentences.

RESPONSE: (GPT 4o-mini)
Photosynthesis is the process by which plants, algae, and some bacteria convert light energy into chemical energy, producing glucose and oxygen. This occurs in the chloroplasts, using sunlight, water, and carbon dioxide.

PROMPT (with role prompting):
You are a primary school teacher. Explain Photosynthesis to your students in 1-2 sentences. 

RESPONSE: (GPT 4o-mini)
Photosynthesis is how plants make their own food using sunlight, water, and air. They turn these things into sugar to grow and give off oxygen, which we breathe!

Popular AI assistants use role prompts extensively:

  • OpenAI’s developer role (previously known as system prompt) allows you to specify how the model should behave and respond.  Here is an example of role prompting from OpenAI’s prompt engineering guide: “You are a helpful assistant that answers programming questions in the style of a southern belle from the southeast United States“.
  • Similarly, Anthropic’s developer guides explicitly advise you to “Use system prompts to define Claude’s role and personality. This sets a strong foundation for consistent responses”.
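In chat-style APIs, role prompting typically means setting a system (or developer) message ahead of the user's turn. A minimal sketch, assuming an OpenAI-compatible message format (the helper name is illustrative; the resulting list would be passed to your provider's chat endpoint):

```python
# Prepend a persona-setting system message to a conversation.
def with_role(role_description, user_prompt):
    """Return a chat message list that establishes a persona via the system role."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_prompt},
    ]

messages = with_role(
    "You are a primary school teacher. Explain concepts simply.",
    "Explain Photosynthesis to your students in 1-2 sentences.",
)
# messages would then be passed to e.g. client.chat.completions.create(...)
```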

Chain-of-Thought (CoT) Pattern

If you are a software developer trying to solve a complex algorithmic problem or a software architect trying to analyze complex system bottlenecks and vulnerabilities, you will probably brainstorm various ideas with your colleagues to understand their pros and cons, break down the problem into smaller tasks, and then solve it iteratively, rather than jumping to the solution right away. In Chain-of-Thought (CoT) prompting, you encourage the model to follow a very similar process and think aloud by breaking the problem down into a step-by-step process. Especially when prompting complex tasks, CoT can prevent the model from jumping to pre-learned and often incorrect patterns prematurely. 

If you’re using state-of-the-art or advanced reasoning models today (e.g., Claude 3.7, OpenAI o1), you may observe step-by-step reasoning under the hood without explicit prompting. However, explicitly instructing these models to “think step by step” remains a powerful approach, especially when using older or smaller models. OpenAI and Anthropic recommend the use of CoT in their prompt engineering guides.

By nudging the models to produce intermediate reasoning steps, you can:

  • Improve the model’s logical reasoning and decision-making capabilities.
  • Make the model’s outputs more interpretable and increase confidence in its predictions. 
  • Diagnose errors or biases in the model’s outputs.
  • Reduce hallucinations significantly. 

Let’s look at an example below:


PROMPT (with CoT):
A user recently watched and enjoyed the movies 'Inception,' 'The Social Network,' and 'Blade Runner 2049.' Think step-by-step about what these movies have in common, such as themes, genres, style, or storytelling, and then recommend a few more movies or tv shows the user might like. 

RESPONSE: (o4-mini)
Here’s my quick breakdown and some picks you might enjoy:

Step-by-step commonalities

1. High-concept premises – each film poses a big "what if?" (dream control, social media origin, AI-driven future).
2. Smart, twisty narratives – nonlinear or slowly-unfolding storytelling that rewards close attention.
3. Tech-oriented/social commentary – they explore how technology shapes identity, society, or reality.
4. Striking visuals & mood – strong production design, immersive cinematography, atmospheric score.

Recommendations

● Memento (2000) – Nolan’s early mind-bender of memory loss told in reverse.
● Ex Machina (2014) – A tense, intimate study of AI consciousness and human manipulation.
● Arrival (2016) – Thought-provoking first-contact drama where language and time intertwine.
● Black Mirror (TV series, 2011– ) – Standalone episodes examining near-future tech’s dark side.

Each shares that blend of brains-and-cinema you seem to love. Enjoy!

Retrieval-Augmented Generation (RAG) Pattern

From startups to tech giants alike, RAG has become a standard best practice across the AI industry. Advanced models learn from vast amounts of real-world data. Ask them about history, science, or popular facts, and they’ll usually answer correctly. Yet, these models have limitations. Their training cuts off at a specific date, their knowledge is general rather than specialized, and they don’t have access to the newest, proprietary, or dynamically changing information. 

This is precisely where RAG helps. RAG combines the model’s reasoning abilities with real-time access to external knowledge (like databases, vector stores, or documents). So you get the best of both worlds. 

Imagine building a chatbot for your law firm. With RAG, when a client asks about a specific legal issue, your chatbot can instantly retrieve relevant statutes and recent case summaries from your internal knowledge base, creating an accurate, well-supported response.

Figure 2: Retrieval-Augmented Generation

When building AI systems, you should consider using RAG when:

  • Your model needs up-to-date information beyond the model’s training cutoff date.
  • Your system relies on domain-specific, proprietary, or frequently updated data.
  • Accuracy and transparency are critical, and you must reduce hallucinations or incorrect outputs.
  • You want to cite or directly reference external content or knowledge bases in responses.
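To make the flow in Figure 2 concrete, here is a deliberately naive sketch: word overlap stands in for a real retriever (a vector store or BM25 index in practice), and the retrieved passages are spliced into the prompt. The documents and helper names are made up for illustration:

```python
# Naive RAG sketch: retrieve relevant passages, then ground the prompt in them.
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Splice the top retrieved passages into the prompt as grounding context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Statute 12.4 covers tenant rights during eviction proceedings.",
    "Case summary: Smith v. Jones (2023) addressed contract disputes.",
    "Our firm's billing policy is updated every January.",
]
rag_prompt = build_rag_prompt("What are tenant rights during eviction?", docs)
```

The "use only the context below" instruction is what pushes the model toward grounded, citable answers instead of its general training knowledge.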

Responsible AI Patterns

Prompting and Context Patterns we discussed thus far can help reduce ambiguity, inconsistency, and hallucinations through better instructions and grounded context. However, you may soon notice that additional safeguards are needed to handle ethical, fairness, and safety issues. Even accurate responses can be biased, harmful, or inappropriate. This is where Responsible AI Patterns come in.

As part of building responsible AI systems, you must reduce hallucinations, prevent inappropriate or disallowed content, mitigate biases, and ensure transparency around AI decision-making. Otherwise, your AI outputs may mislead users, spread misinformation, or even create liability issues.

Techniques like RAG, discussed earlier, already help reduce hallucinations by grounding outputs in external context. Let us look at a few additional patterns that focus on safety, fairness, and ethical compliance, going beyond accuracy alone.

Figure 3: Sequence Diagram Illustrating Responsible Patterns in Modern AI-based Systems


Output Guardrails Pattern

Even when you do everything right, models may still produce incorrect, biased, or harmful content. You need guardrails! These are rules, checks, or interventions applied after the model generates an output. They act as your final defense to modify or block the content before it reaches users. Guardrails are particularly important for sensitive domains such as legal or medical applications. 

Depending on your domain and use case, you can implement guardrails in several ways. For example, you can:

  • Verify outputs for ethical compliance, fairness, and accuracy using established business rules or domain guidelines.
  • Detect bias either through lightweight classifiers or fairness metrics. 
  • Use ML models to detect and filter harmful multimodal AI content.
  • Use metrics like groundedness score to measure how well the response is “grounded” in the input or retrieved references.
  • Instruct the model to regenerate content with clear warnings to avoid previous errors.

Many model providers also integrate fairness and ethics checks into their own guardrail pipelines. For example, Anthropic’s Claude models follow a constitutional approach where outputs are revised according to predefined ethical principles. However, having your own guardrail layer will provide a consistent experience for your users, regardless of which model or provider you use.

Model Critic Pattern

Beyond basic guardrails, you can use a dedicated fact-checking or “critic” model to verify your primary model’s output. This second model can be a different model or the same one acting in a “critic” or “judge” role. It’s analogous to an editor reviewing and correcting an author’s draft. Even if the first pass contains hallucinations, this verification loop makes the model check its facts, reducing false information and bias in your final output.

Adding a secondary judge or critic isn’t always practical without increasing system complexity, latency, or cost. However, you should definitely consider this approach for automated QA testing. Consider a scenario where your production system uses a smaller “mini” or “nano” LLM version for efficiency. You could use the larger model as a judge in your offline testing to validate accuracy and ensure responsible outputs are generated. GitHub Copilot, for example, uses a second LLM to evaluate its primary model.
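The critic loop can be sketched as below. Both model calls are stubs standing in for real API calls to your primary and judge models, and the approval protocol (a literal "APPROVED" string) is an assumption for illustration:

```python
# Critic-loop sketch: a judge model reviews drafts and can trigger revisions.
def generate_with_critic(prompt, draft_model, critic_model, max_rounds=2):
    """Generate a draft, ask the critic to approve or give feedback, revise."""
    draft = draft_model(prompt)
    for _ in range(max_rounds):
        verdict = critic_model(prompt, draft)
        if verdict == "APPROVED":
            return draft
        # Feed the critic's feedback back into the primary model.
        draft = draft_model(f"{prompt}\nRevise to address: {verdict}")
    return draft  # give up after max_rounds; caller may flag for human review

# Stubs: the draft model fixes its answer once it sees feedback,
# and the critic approves only the revised draft.
def draft_model(p):
    return "revised answer" if "Revise" in p else "first draft"

def critic_model(p, d):
    return "APPROVED" if d == "revised answer" else "missing citation"

final = generate_with_critic("Summarize the case law.", draft_model, critic_model)
```

Bounding the loop with `max_rounds` is what keeps latency and cost from growing unboundedly when the critic never approves.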

User Experience (UX) Patterns

After stabilizing your outputs with proper prompts and guardrails, your next big concern is the user experience (UX). AI systems don’t behave like traditional software systems and often produce unpredictable, open-ended content that may occasionally be wrong, slow, or confusing. Similarly, users have different expectations for these tools. For example, they might want to ask follow-up questions, refine the AI’s responses, or see disclaimers when the AI isn’t sure.

That’s why well-defined UX patterns are essential. They help you, the developer, handle these new types of interactions in a user-friendly way to keep users engaged and satisfied and promote transparency. There are many techniques you can use to smooth these complexities, such as:

  • Providing clear onboarding examples
  • Signaling uncertainty transparently
  • Allowing quick edits of generated content
  • Enabling iterative explorations
  • Assisting users through suggested follow-ups
  • Explicitly confirming critical user intents

Let’s look at a few illustrative UX patterns in detail. 

Contextual Guidance Pattern

This may seem obvious, but many new AI tools launch without proper user guidance. Users often don’t understand how to interact with these tools or know their capabilities and limitations. Don’t assume users will immediately know how to use your tool. Lower their learning curve by providing prompt examples, contextual tips, and quick feature overviews. Show these aids at the right moment in users’ journey when they need them. For instance, in Notion, pressing the spacebar in an empty page triggers writing suggestions (since users likely want to draft content), while selecting text brings up editing options like “Improve writing” or “Change tone“, displayed alongside the original text for easy comparison.

Figure 4: An illustration of contextual guidance

Editable Output Pattern

With GenAI models, there is no single correct answer in many scenarios. Your best output depends on the context, application, and user preferences. Recognizing this, you should consider letting users modify or rewrite generated content. This creates a better perception of human-AI collaboration. Your tool will no longer be a black box, giving users control over their final outputs. Sometimes, this is an obvious feature (like GitHub Copilot letting users edit suggested code directly in their IDE). In other cases, it’s a deliberate design choice (such as ChatGPT’s canvas).

Figure 5: An illustration of the editable output pattern

Iterative Exploration Pattern

Never assume the first output will satisfy users. Include “regenerate” or “try again” buttons so users can quickly iterate. For image generation, show multiple options at once. In chatbots, allow users to refine or follow up on responses. This feedback loop helps users discover the best output without feeling stuck. Microsoft research shows that when users try many prompts, newer attempts sometimes perform worse than earlier ones – so letting them revert to previous outputs (or combine parts from different generations) significantly improves their experience.

Figure 6: An illustration of an AI video editor tool allowing iterative exploration


AI-Ops Patterns

When you start putting your AI software into production, you’ll face new operational challenges that traditional software doesn’t have. You’ll still need versioning, monitoring, and rollbacks, but now your core “logic” lives in prompts, model configurations, and generative pipelines. Additionally, GenAI outputs can be unpredictable, requiring new testing and evaluation methods. 

Think of AI-Ops as DevOps specifically for modern AI systems. You’re not just deploying code; you’re shipping AI “knowledge” embedded in prompt-model-config combinations that might change weekly. You must manage performance and cost, track user interactions, identify regressions, and maintain reliable, available systems.

You can adopt many familiar operational tactics from traditional software, plus an entirely new set of AI-specific methods you’ve never needed before. Let’s look at a couple of AI-specific patterns in detail (though this is just a tiny sample of the complete playbook) to understand the nuances of AI-Ops.

Metrics-Driven AI-Ops Pattern

When your change goes to production, track everything: latency, token usage, user acceptance rate, and cost per call. Define success metrics that matter most for your business. It could be a daily acceptance score from user feedback or a “hallucination rate” measured by an LLM-judge pipeline. Set up alerts if those metrics dip. This data-driven approach lets you quickly detect when a new model or prompt version hurts quality. Then, you can roll back or run an A/B test to confirm. Think of metrics as your safety net in an unpredictable environment.
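As a sketch, a rolling acceptance-rate monitor with an alert threshold might look like this. The window size and threshold are illustrative; in production you would emit these metrics to your monitoring stack rather than compute them inline:

```python
# Rolling-metric watchdog sketch for a single AI-Ops success metric.
from collections import deque

class AcceptanceMonitor:
    def __init__(self, window=100, threshold=0.7):
        # Fixed-size window: old events fall off automatically.
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, accepted):
        """Record one user accept/reject event."""
        self.events.append(1 if accepted else 0)

    def alert(self):
        """True when the rolling acceptance rate falls below the threshold."""
        if not self.events:
            return False
        rate = sum(self.events) / len(self.events)
        return rate < self.threshold
```

The same shape works for any of the metrics mentioned above (hallucination rate, latency percentile, cost per call); only the event source changes.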

Prompt-Model-Config Versioning Pattern

Your AI system can fail if there are uncontrolled prompt changes, configuration tweaks, or ad-hoc model swaps. If you consider each (prompt, model, configuration) combination a “release”, you can manage it like any other software build. To catch regressions, tag each release with a version and validate it with QA tests against a golden dataset. Automated pipelines can run these test queries whenever you update a prompt, modify config settings, or switch from one model to another. If the outputs degrade according to your metrics, you revert. This discipline prevents “stealth changes” that break your UX.
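One way to sketch this: treat each (prompt, model, config) tuple as an immutable release and gate it on a golden dataset before rollout. The model call is a stub standing in for a real API call, and all names and thresholds are illustrative:

```python
# Versioned-release sketch: gate a (prompt, model, config) tuple on a golden set.
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: a release never changes after tagging
class Release:
    version: str
    prompt_template: str
    model: str
    temperature: float

def passes_golden_set(release, call_model, golden_set, min_pass=0.9):
    """Run the release against (input, expected) pairs; gate on pass rate."""
    passed = sum(
        1 for inp, expected in golden_set
        if call_model(release, inp) == expected
    )
    return passed / len(golden_set) >= min_pass

# Stub standing in for a real API call using the release's settings.
def call_model(release, inp):
    return "Positive" if "loved" in inp else "Neutral"

release = Release("v1.2.0", "Classify sentiment: {text}", "gpt-4o-mini", 0.0)
golden = [("I loved it!", "Positive"), ("It was fine.", "Neutral")]
ok = passes_golden_set(release, call_model, golden)
```

In a CI pipeline, this check would run on every prompt edit, config tweak, or model swap, and a failing release would never be promoted.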

In addition to AI-specific practices, you should continue standard operational best practices from traditional software development, such as:

  • Rigorous QA Checks: Ensuring thorough testing before deploying changes.
  • Regression Testing: Regularly running tests to verify new changes haven’t introduced issues.
  • Canary Deployments: Gradually deploying new features to smaller user groups before wider release.
  • Rollback Strategies: Establishing clear and simple processes to revert changes quickly if metrics decline.
  • Backup and Fallback Systems: Having backup models or versions available in case your primary model becomes unavailable.

Here is how your typical AI-Ops workflow would look:

Figure 7: An AI-Ops workflow illustration for managing, testing, and deploying AI System changes


Optimization Patterns

As your AI application grows, you’ll face operational bottlenecks. API rate limits, increasing latency, and rapidly rising inference costs can quickly become significant challenges. That impressive prototype your leadership loved suddenly becomes unsustainable in production.

These problems are common but manageable if you follow some best practices. For example, don’t automatically choose the biggest, most powerful model. Ask yourself if your task can be handled faster, cheaper, or with reused outputs. Similarly, you must make smart optimization choices for your system, whether redirecting traffic away from unnecessarily powerful models, caching predictable responses, batching queries in near real time, or developing smaller specialized models.

Let us dive into three powerful optimization patterns that you can directly implement in your AI workflows:

Prompt Caching Pattern

The fastest, cheapest LLM call is the one you don’t make. Consider caching and reusing responses if your system frequently uses identical or similar prompts. This works exceptionally well for documentation assistants, customer support bots, or internal chat tools where user questions often repeat.

Even more effective is prefix caching, where you can cache the expensive part of the prompt (e.g., system instructions or few-shot examples). Amazon Bedrock (and many others) supports this feature natively and reports up to 85% latency reduction on large prompts. 
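An exact-match cache is the simplest variant and easy to sketch: the key is a hash of the normalized prompt. Real deployments add TTLs, eviction, or semantic matching, and provider-side prefix caching works differently; this only illustrates the core idea:

```python
# Exact-match prompt cache keyed by a hash of the normalized prompt.
import hashlib

class PromptCache:
    def __init__(self):
        self._store = {}

    def _key(self, prompt):
        # Normalize case and whitespace so trivial variants hit the same entry.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get_or_call(self, prompt, model_fn):
        """Return a cached response, calling the model only on a cache miss."""
        key = self._key(prompt)
        if key not in self._store:
            self._store[key] = model_fn(prompt)  # cache miss: pay for one call
        return self._store[key]

# Usage with a stub model function that counts real calls.
calls = []
def model_fn(prompt):
    calls.append(prompt)
    return f"answer to: {prompt}"

cache = PromptCache()
a = cache.get_or_call("What is RAG?", model_fn)
b = cache.get_or_call("what  is  rag?", model_fn)  # normalizes to the same key
```

Here two superficially different prompts resolve to one cache entry, so the (stubbed) model is invoked only once.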

Continuous Dynamic Batching Pattern

If you manage a high-volume AI system, maximizing GPU utilization and system throughput is critical for minimizing costs and efficiently scaling. If you process each query sequentially, you will underutilize your computing resources, pay more fees, and perhaps hit API limits sooner.

Instead of processing each request as soon as it arrives, consider waiting briefly, perhaps tens to hundreds of milliseconds, depending on your application’s latency tolerance, to batch your incoming requests dynamically. You can then process these batches through your inference servers and LLMs. This approach can help increase your system’s throughput and ensure your GPUs operate at near-optimal utilization.

While you could implement custom queuing and batching logic in bespoke systems, production-ready tools such as vLLM, NVIDIA Triton Inference Server, and AWS Bedrock offer robust, out-of-the-box solutions suitable for most use cases. 
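The queuing logic can be sketched as below: a toy batcher flushes when a batch fills or a deadline passes. Production servers like vLLM or Triton do continuous batching at the GPU scheduler level, which this deliberately does not attempt to model:

```python
# Size/time-based micro-batching sketch for inference requests.
import time

class MicroBatcher:
    def __init__(self, process_batch, max_size=8, max_wait_s=0.05):
        self.process_batch = process_batch  # one call handles many requests
        self.max_size = max_size
        self.max_wait_s = max_wait_s
        self.pending = []
        self.deadline = None

    def submit(self, request):
        """Queue a request; returns batch results when a flush is triggered."""
        if not self.pending:
            # First request in a new batch starts the wait-time clock.
            self.deadline = time.monotonic() + self.max_wait_s
        self.pending.append(request)
        if len(self.pending) >= self.max_size or time.monotonic() >= self.deadline:
            return self.flush()
        return None  # still accumulating

    def flush(self):
        batch, self.pending = self.pending, []
        return self.process_batch(batch)  # one inference call for many requests
```

A real server would also need a background timer to flush partially filled batches whose deadline expires with no new arrivals; that is omitted here for brevity.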

Intelligent Model Routing Pattern

Rather than indiscriminately sending every request to your largest, most expensive model, implement intelligent model routing. The idea is simple but powerful. Introduce a lightweight, preliminary model at the entry point, similar to a reverse proxy or API gateway in traditional microservices. Like a reverse proxy, this model can help with load balancing between models, caching frequent responses, and gracefully handling fallbacks. It also serves as an API gateway, intelligently routing queries to the appropriate downstream models based on the complexity or context of each request. 

For common or repetitive queries, the routing model can directly pull from caches or prefetches, avoiding model inference altogether. For queries requiring moderate reasoning or domain-specific knowledge, route to specialized, cost-effective models. You should only route the most complex or ambiguous queries to your largest, general-purpose models.

The Intelligent Model Routing pattern is particularly useful if you are building general-purpose systems handling diverse queries. It can balance cost-efficiency and model accuracy, ensuring each query uses precisely the computational resources it requires.
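A toy sketch of the routing decision follows. The heuristics, tier names, and thresholds are all assumptions for illustration; in production the router could itself be a small classifier model, and the tiers would map to real model endpoints:

```python
# Heuristic model-routing sketch: cheap complexity check picks a model tier.
def classify_complexity(prompt):
    """Crude complexity estimate from prompt length and keywords."""
    hard_markers = ("step by step", "analyze", "compare", "prove")
    words = len(prompt.split())
    if words > 50 or any(m in prompt.lower() for m in hard_markers):
        return "complex"
    if words > 15:
        return "moderate"
    return "simple"

# Illustrative tier names; real systems would map these to actual endpoints.
MODEL_TIERS = {
    "simple": "small-fast-model",
    "moderate": "mid-size-model",
    "complex": "large-general-model",
}

def route(prompt):
    """Return which model tier should serve this prompt."""
    return MODEL_TIERS[classify_complexity(prompt)]
```

The router runs before any expensive inference, so its own cost must stay negligible relative to the savings from avoiding the largest model.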

Advanced Patterns

This article explored foundational patterns that can help you incorporate best practices into different stages of AI software development. There are several advanced areas we intentionally didn’t cover. However, I want to briefly mention three key topics with many emerging patterns and best practices, as they’re becoming critically important in modern AI systems.

  • Fine-Tuning and Customizing Models: Sometimes, off-the-shelf models aren’t enough for your specific use case, are too expensive, or require you to run on local networks or devices. This is where fine-tuning, customization, and optimizing large foundational models will benefit your use case. Common approaches include Domain-Specific Fine-Tuning, Knowledge Distillation, Low Rank Adaptation (LoRA), Mixture of Experts (MoE), and Quantization. Platforms like Hugging Face, VertexAI, and AWS Bedrock enable you to easily customize and fine-tune models.
  • Multi-Agent Orchestration: When tasks become too complex for a single model, consider using multiple specialized AI agents working collaboratively. Some common patterns you’ll encounter include LLM-as-a-Judge, Role-Based Multi-Agent Collaboration,  Reflection Loops, and Tool-Using Agents.
  • Agentic AI and Autonomous Systems: Arguably, one of the hottest fields today is building autonomous AI agents. Agentic systems involve models that dynamically plan, reason, and execute complex tasks independently, often using external tools and APIs. Agentic AI is a fascinating and rapidly growing domain with its own emerging best practices. It deserves a dedicated exploration beyond our scope here.

Although these advanced concepts are beyond our current scope, recognizing their importance is key to keeping up with evolving trends in modern AI systems. Watch the ever-growing collection of innovative AI patterns, and keep adding them to your arsenal. They can help you unlock even more powerful and specialized applications!
