Lessons Learned From Shipping AI-Powered Healthcare Products

News Room
Published 19 December 2025

Transcript

Clara Matos: My name is Clara. I’m going to talk about lessons learned from shipping AI-powered healthcare products. AI-powered products are changing healthcare by allowing a more personalized, effective, and efficient way of delivering care. However, shipping these products in highly regulated industries presents unique challenges in ensuring safety, consistency, and reliability. In this talk, we’ll walk through Sword Health’s journey shipping these products, covering building guardrails, developing effective evaluation frameworks, choosing the right optimization approach, collecting user feedback, and, finally, maintaining data-driven development practices. Sword Health is a digital health company specializing in remote physical therapy. We have three main products: Thrive, Move, and Bloom. Thrive covers chronic pain, Move covers pain prevention, and Bloom covers pelvic healthcare.

Phoenix: The AI Care Agent

Although 65 years separate the image on the left from the image on the right, we can see that the way we deliver care has not changed much. In fact, healthcare has always faced a dichotomy between quality and affordability. Current approaches have either maximized quality with high clinician involvement, at prohibitive costs, or compromised quality for affordability. Phoenix, our AI care agent, disrupts this paradigm by allowing a more convenient, accessible, and scalable way of delivering care. Phoenix creates a seamless support system for the patient during their rehabilitation program. During the session, Phoenix delivers a true one-on-one experience by providing real-time feedback to the member and being available to answer any question the patient might have.

Also, outside of the session, Phoenix is available to solve any non-clinical task. With the physical therapist, Phoenix acts as a co-pilot, allowing the physical therapist to focus on what truly matters: building a relationship with the patient. Throughout the last few years, we have shipped and iterated on many LLM-powered features, such as features that support the patient during the session, features that support the patient outside of the session, and features that support the physical therapist. During this time, we have learned a lot about what to do and what not to do. These bullet points are a summary of the learnings. Let’s dive into each.

Building Guardrails

Once a feature is released to production, we start to run into inconsistency issues, because the types of inputs and queries start to vary a lot. Building guardrails covers how to deal with the inherent inconsistency of large language models. Guardrails are essential for highly regulated industries such as healthcare. They act as safety controls between the user and the model, preventing unwanted content from reaching the user. We can consider two types of guardrails: input guardrails and output guardrails. Input guardrails prevent unwanted content from reaching the model. Prompt injection, jailbreaking, and content safety are just a few examples.

On the other hand, output guardrails prevent unwanted content from reaching the user. Content safety, structural validity, and medical advice are, again, just a few examples. This is an example of a conversation held between Phoenix and the patient at the end of an exercise session. Phoenix starts by congratulating the patient on his performance and asks for feedback. Once the patient mentions that he’s feeling a bit of pain in his shoulder, Phoenix provides some tips on how to better manage that pain. However, given that this is an unsupervised conversation, we want to make sure that the types of tips Phoenix is able to provide are within very constrained guidelines. To achieve this, we have built medical advice guardrails that constrain the types of tips Phoenix is able to provide. These guidelines were devised in collaboration with our clinical team. When building guardrails, some concerns include task specificity, latency, and accuracy. Regarding task specificity, guardrails must be task specific.

For example, when building guardrails for Bloom, our pelvic health product, we needed to adjust the threshold for content safety, as it is usual in this context to use sexual terms. Another thing to take into consideration is latency, especially for time-sensitive applications such as those that require real-time feedback. When adding online guardrails for Phoenix, we observed an increase in latency of around 30%. As a result, we needed to optimize the online guardrails to improve latency. The third thing to consider is accuracy, as guardrails can also trigger false positives, preventing good content from reaching the user, such as this example that was flagged as having sexual content.
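
The talk does not show the guardrail implementation itself; as a rough sketch, an output guardrail can be a second, cheap model call that reviews the draft reply against clinically approved guidelines before it reaches the patient. Everything below (the guideline text, the fallback message, the model name, the function name) is a hypothetical illustration, not Sword Health’s actual setup.

```python
# Hypothetical output guardrail: a cheap second LLM call that checks a draft
# reply against constrained guidelines before it is shown to the patient.
from openai import OpenAI

client = OpenAI()

GUIDELINES = """A reply may only suggest: adjusting exercise intensity,
resting, applying ice or heat, or contacting the physical therapist.
It must never diagnose, prescribe medication, or promise outcomes."""

FALLBACK = "Thanks for letting me know. I'll share this with your physical therapist."

def medical_advice_guardrail(draft_reply: str) -> str:
    """Return the draft if it stays within the guidelines, else a safe fallback."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any fast, cheap model works
        messages=[
            {"role": "system",
             "content": f"You are a safety reviewer. Guidelines:\n{GUIDELINES}\n"
                        "Answer with exactly PASS or FAIL."},
            {"role": "user", "content": draft_reply},
        ],
        temperature=0,
    )
    label = verdict.choices[0].message.content.strip().upper()
    return draft_reply if label.startswith("PASS") else FALLBACK
```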

Using Evals to Measure Performance

The second learning is to use evals to measure performance. Lack of evals has been a key challenge in deploying LLMs to production. How do you deliver consistently, without regressions? This is very hard to do with large language models, given their non-deterministic nature. We can see model evals as the unit tests of LLMs. Prompting is often seen as an art, but when paired with evaluations, you can treat it like software delivery. The main benefits of evaluations include iterative prompt development (is v2 of my prompt better than v1?), quality assurance before and after deployment (has my latest release introduced any regression?), objective model comparison (can I switch to provider X’s newest and most performant model and maintain or improve my current performance?), and, finally, potential cost savings (can I switch to a smaller and cheaper model and maintain my current performance?). Choosing the right rating approach is important, and each rating approach is suited to a different type of task.

The first rating approach is called human-based. In this approach, a human, usually a subject matter expert, reviews outputs of the models and assigns a score to each. This evaluation excels in evaluating things such as tone, factuality, and reasoning. However, it is very time consuming and can be extremely costly, especially at scale. Also, there’s the issue of different raters not providing the same evaluation and disagreeing amongst themselves. To perform human-based evaluation, we have developed a tool in-house called Gondola, using Streamlit. Using this tool, we ask our subject matter experts, the physical therapists, to provide feedback on outputs before releasing the new model versions to production.

The second rating approach is non-LLM based. This approach uses classification metrics, NLP metrics, or programmatic methods to evaluate the output of the model. It can only be used when the output is clear and objective. The main advantages are, of course, speed and scalability. However, it fails to evaluate things that are nuanced and subjective. This is an example of a situation where we use non-LLM based rating. When you have the output generated by the model and the output generated by a human, you can compare the two using NLP metrics such as BLEU or ROUGE, or a sequence matcher. This is an example where we have used a sequence matcher algorithm that evaluates the similarity between two sentences, providing a score between 0 and 1, where 1 corresponds to an exact match. The final rating approach is LLM based. This is a middle ground between human-based and non-LLM based.
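
The sequence matcher mentioned here maps directly onto Python’s standard library. A minimal version of that non-LLM check might look like this; the example sentences and the review threshold are illustrative, not taken from the talk.

```python
# Non-LLM-based rating: compare a model-generated output against the
# human-written reference with difflib's SequenceMatcher (score in [0, 1],
# where 1.0 is an exact match).
from difflib import SequenceMatcher

def similarity(model_output: str, human_output: str) -> float:
    return SequenceMatcher(None, model_output, human_output).ratio()

model_output = "Please ice your shoulder for 15 minutes after the session."
human_output = "Ice the shoulder for about 15 minutes once the session ends."

score = similarity(model_output, human_output)
print(f"similarity = {score:.2f}")  # e.g. flag for manual review below ~0.7
```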

In this approach, you usually ask the same model, or a different one, to evaluate the output. However, when using this approach, you have to make sure that the rating prompt is well crafted. The model itself can also introduce some bias, so you have to make sure that the model’s evaluation is aligned with human evaluation. This is an example of a situation where we have used LLM based rating. This is an evaluation prompt used to evaluate the output of our customer support agents. The same question that we ask the model, we can also ask a human, and then we can measure the alignment between the model and the human. These are some learnings from building LLM based rating, also known as LLM-as-a-Judge.

Usually, the best approach is to use binary decisions such as ok/not ok, good/bad, or pass/fail. Binary decisions can then be enhanced with detailed critiques, both from the model and from the human. These detailed critiques help you understand why the human and the model made that evaluation. Finally, you can measure the agreement between the human and the model, and if they disagree to a high extent, you know that you have to improve your evaluation.
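
A minimal LLM-as-a-Judge along those lines could return a binary verdict plus a one-sentence critique, and then be compared against human labels with a simple agreement rate. The prompt, model name, and JSON shape below are assumptions for illustration, not the actual evaluation prompt shown in the talk.

```python
# LLM-as-a-Judge sketch: binary verdict + critique, then agreement with humans.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are reviewing a customer-support reply.
Question: {question}
Reply: {reply}
Return JSON: {{"verdict": "pass" or "fail", "critique": "<one sentence>"}}"""

def judge(question: str, reply: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, reply=reply)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

def agreement_rate(model_verdicts: list[str], human_verdicts: list[str]) -> float:
    """Share of examples where the judge and the human give the same label."""
    matches = sum(m == h for m, h in zip(model_verdicts, human_verdicts))
    return matches / len(human_verdicts)
```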

This is usually the flow we follow when developing with evaluations. You start by creating a test set, either by asking subject matter experts to provide ideal outputs, or by using real data from production when available. Then, you create the first version of the system. Then you evaluate the outputs with offline evaluations and live checking. You iterate on this and refine the system until it passes the current evaluation suite. Once that happens, you ask for human expert evaluation, and refine the system until the human evaluation is positive. Finally, once the human evaluation is positive, you can A/B test the new version in production. Then, once the A/B test finishes, you can promote the successful version and continue monitoring the outputs with product metrics, manual audits, and offline evaluations. Then, you start all over again on the next iteration.
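
One way to turn that flow into code is a small offline evaluation harness that replays a frozen test set against each candidate version and reports an aggregate score before anything goes to human review or an A/B test. The `generate` and `judge` callables and the JSONL test-set format below are assumptions, not Sword Health’s actual tooling.

```python
# Hypothetical offline evaluation harness: replay a frozen test set against a
# candidate system version and report the pass rate before human review.
import json

def run_offline_eval(test_set_path: str, generate, judge) -> float:
    """generate(input) -> output; judge(input, output) -> {"verdict": "pass"/"fail", ...}."""
    with open(test_set_path) as f:
        test_set = [json.loads(line) for line in f]  # one JSON example per line

    passed = 0
    for example in test_set:
        output = generate(example["input"])
        result = judge(example["input"], output)
        passed += result["verdict"] == "pass"

    pass_rate = passed / len(test_set)
    print(f"pass rate: {pass_rate:.1%} on {len(test_set)} examples")
    return pass_rate

# e.g. only promote the candidate to human expert review if it does not regress:
# assert run_offline_eval("test_set.jsonl", generate_v2, judge) >= baseline_pass_rate
```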

Start with Prompt Engineering

The third learning is that prompt engineering can get you very far. Optimizing large language models’ performance is not always linear. Most people paint it as a linear flow: you start with prompt engineering, then move on to few-shot learning and retrieval-augmented generation, and finally use fine-tuning. However, each one of these approaches solves for different things, and to solve the right problem, you need to choose the right approach. Like I mentioned, prompt engineering is the best place to start and can pretty much be the best place to finish. It helps you get to a baseline and understand your goal and what good would look like.

Once you get to that baseline, there are two directions you can go: context optimization or LLM optimization. Context optimization helps with what the model needs to know, and it’s achieved through retrieval-augmented generation. This technique is usually used when you want to include domain knowledge that was not available to the model at training time, or when you want to include proprietary information. The LLM optimization axis is followed when you need to improve how the model needs to act.

This is usually the case when the model is struggling to follow instructions, or the tone and style are not what you would expect. Like I mentioned before, prompt engineering is the best place to start and can pretty much be the best place to finish. It helps you get to a baseline and understand what good looks like. If you are satisfied with the results, amazing, you don’t need to do anything else. If you are not, then you will know in which direction you need to go.

These are a few strategies for getting better results with prompt engineering. You can write clear instructions. You can give models time to think by using prompting techniques such as chain-of-thought. Instead of using very descriptive prompts with very detailed instructions, you can use few-shot examples. However, building few-shot examples is hard and can be very time consuming, and the examples can easily get out of date. One way to deal with this is through dynamic in-context learning. Dynamic in-context learning is an inference-time optimization technique where you embed the inputs of production examples into a vector database, and then at prediction time, based on your current input, you pull the examples that most closely resemble the input you are generating for at the moment.

Then you include those examples in the prompt as few-shot examples, and the response improves. Another thing you can do is split complex tasks into simpler subtasks by using a state machine-based agentic approach. Or, something even simpler, you can try different models. This is an example where, just by switching from GPT-4o to Claude 3.5 Sonnet and keeping everything mostly the same, with just a few tweaks on prompt engineering, we observed an increase in performance of around 10 percentage points. Like I mentioned, prompt engineering is the best place to start and can pretty much be the best place to finish. This is an example of how successive prompt iterations resulted in an improvement in performance. In this example, the goal was to reduce the percentage of heavy hitters and increase the percentage of acceptance.
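
Dynamic in-context learning is straightforward to sketch with any embedding model and vector store. The version below uses chromadb and invented example data purely for illustration; the talk does not say which stack is used in production.

```python
# Dynamic in-context learning sketch: store vetted production examples in a
# vector store, then pull the most similar ones at inference time and include
# them as few-shot examples. chromadb is an illustrative choice of store.
import chromadb

store = chromadb.Client()
examples = store.create_collection("vetted_examples")

# Index reviewed production examples (input + the output a reviewer approved).
examples.add(
    ids=["ex-1", "ex-2"],
    documents=["Patient reports mild shoulder pain after exercise 3.",
               "Patient asks how to reschedule tomorrow's session."],
    metadatas=[{"approved_output": "Suggest lowering resistance and flag to the PT."},
               {"approved_output": "Explain rescheduling from the sessions tab."}],
)

def build_few_shot_block(current_input: str, k: int = 2) -> str:
    """Retrieve the k most similar past examples and format them for the prompt."""
    hits = examples.query(query_texts=[current_input], n_results=k)
    lines = []
    for doc, meta in zip(hits["documents"][0], hits["metadatas"][0]):
        lines.append(f"Input: {doc}\nOutput: {meta['approved_output']}")
    return "\n\n".join(lines)

print(build_few_shot_block("Patient mentions shoulder pain during the session."))
```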

Retrieval-Augmented Generation to Improve Model’s Domain Knowledge

The fourth learning is to use retrieval-augmented generation for what the model needs to know. When you want to improve the domain knowledge of the model, RAG is likely the best next step. You might ask, why not use the long context window? In fact, using the model’s large context window, which is only increasing with the newest releases, might be a good option and a good place to start. However, from experience, we noticed that when we placed information either at the top or at the bottom of the input context, the performance was better than when the information was placed in the middle of the prompt.

This issue is known as “lost in the middle”. The paper describes how these models struggle to pay equal attention to everything in their input prompt, and how they tend to pay more attention to information placed either at the top or at the bottom, which correlates with our internal experiments and perception. This is an example of a retrieval-augmented system that we built and that’s used for customer support. We embedded the same articles that are used by our human support agents into a vector database.

Then, when the patient asks a question, we retrieve the most similar articles from the knowledge base, include those articles in the prompt, and generate an answer for the user. When building retrieval-augmented generation systems, you need to think about evaluation. You need to consider not only metrics related to generation, but also metrics related to retrieval and the knowledge base itself. Metrics related to generation help you measure how well the LLM answers the question. Metrics related to retrieval help you understand how relevant the retrieved content is.
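
A stripped-down version of that customer-support flow might look like the following, with chromadb and the OpenAI client standing in for whatever is actually in production; the article contents and model name are placeholders.

```python
# Hypothetical RAG flow for customer support: retrieve the most similar
# knowledge-base articles and ground the answer in them.
import chromadb
from openai import OpenAI

client = OpenAI()
kb = chromadb.Client().create_collection("support_articles")

# Each chunk is a whole knowledge article, as discussed later in the Q&A.
kb.add(
    ids=["billing-01", "sessions-02"],
    documents=["How billing works: ...", "How to reschedule a session: ..."],
)

def answer(question: str, k: int = 2) -> str:
    hits = kb.query(query_texts=[question], n_results=k)
    context = "\n\n".join(hits["documents"][0])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer only from the articles below. If they do not "
                        f"cover the question, say so.\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content
```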

Finally, you also need to make sure that the information you are looking for is available in the knowledge base. A framework that’s usually used to evaluate retrieval-augmented systems is the RAGAS score. It is composed of four metrics, two that measure generation and two that measure retrieval. Faithfulness and relevance measure generation, and context precision and context recall measure retrieval. Faithfulness measures how factually accurate the generated answer is. Relevance measures how relevant the answer is to the question; it is calculated by asking a model to generate several questions from the answer and then measuring the cosine similarity between those generated questions and the original question. If the similarity is high, the answer is relevant.

Context precision measures how relevant the retrieved information is to the question; it basically helps you understand whether you are pulling in a lot of context that is not relevant to the question. Context recall helps you understand whether you are able to retrieve the article with the relevant information. In this example, you can see that retrieval is underperforming, as both context precision and context recall are low. Something that helped us improve this is query rewriting, using world knowledge to rewrite the query. When we retrieve the articles and see that the similarity score is low, we can also ask the user to provide more clarification regarding the question.
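
Query rewriting itself can be as small as one extra model call that reformulates the user’s question into a self-contained search query before it hits the retriever. The prompt and model below are illustrative assumptions, not the production implementation.

```python
# Query rewriting sketch: reformulate a vague user question into a
# self-contained search query before retrieval.
from openai import OpenAI

client = OpenAI()

def rewrite_query(question: str, conversation_summary: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{
            "role": "user",
            "content": "Rewrite the user's question as a standalone search query "
                       "for a physical-therapy support knowledge base.\n"
                       f"Conversation so far: {conversation_summary}\n"
                       f"Question: {question}\n"
                       "Return only the rewritten query.",
        }],
        temperature=0,
    )
    return completion.choices[0].message.content.strip()

# e.g. "It still hurts, what now?" might become
# "what to do about persistent shoulder pain after a session"
```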

Collecting User Feedback

The fifth learning is to collect user feedback. User feedback helps our models improve. By learning what users like and don’t like, we are able to improve our products and our systems. Feedback can be implicit or explicit. Implicit feedback is collected indirectly from the users. This is an example of a sentiment analysis that we run after each conversation between Phoenix and the patient.

As you can see, when the patient engages with Phoenix, the conversation sentiment is mostly neutral or positive. However, you can also see that around 50% of the time, the patient does not engage in conversation. This gives us a very strong hint that we need to understand why this is happening. The other type of feedback you can collect is explicit feedback, which is collected by asking users directly. This can be achieved, for example, with something like the thumbs down shown here. By collecting this feedback, we can build high-quality datasets that can be used for things such as guardrails, evaluations, few-shot learning, and fine-tuning.
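
Capturing that explicit signal can be as simple as logging each thumbs up/down next to the prompt, the output, and the model version, so the records can later feed guardrails, evaluations, few-shot examples, or fine-tuning. A minimal, hypothetical logger:

```python
# Hypothetical explicit-feedback logger: one JSONL record per thumbs up/down,
# keeping everything needed to reuse the example in evals or few-shot sets.
import json
from datetime import datetime, timezone

def log_feedback(path: str, conversation_id: str, prompt: str,
                 output: str, model_version: str, thumbs_up: bool) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "prompt": prompt,
        "output": output,
        "model_version": model_version,
        "thumbs_up": thumbs_up,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Negative examples become candidates for guardrails and evaluation sets;
# positive ones are candidates for few-shot examples or fine-tuning data.
```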

Repeated Data Evaluation

The sixth and final learning is to look at your data, and then look again and again. Although these models have very strong zero-shot capabilities, they can still fail in very unpredictable ways. As you can see in this very famous tweet, manual inspection is one of the highest return-on-investment tasks that someone can perform in machine learning. In traditional machine learning, this is usually described as performing error analysis. By looking at a sample of inputs and outputs, we gain a very strong understanding of why the model is failing.

Also, by regularly reviewing the inputs and outputs, you can spot new patterns and failure modes and mitigate them early. Looking at the data should be easy, and it should be a mindset that’s promoted amongst everyone in the team, from product managers and machine learning engineers to subject matter experts and even stakeholders. It can be achieved through Google Forms, Google Sheets, dashboards, a data viewing app built in Streamlit or something similar, or even observability and tracing platforms such as Langfuse and LangSmith. The main thing is that the platform you use does not really matter. What really matters is that you actively and consistently look at the data, upon every release.
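
For the look-at-your-data habit, even a few lines that pull a random sample of recent traces into a spreadsheet-friendly file lower the barrier enough that anyone on the team can review outputs after each release. The JSONL trace format and file names below are assumptions for illustration.

```python
# Hypothetical review sampler: pull a random sample of recent production traces
# into a CSV that PMs, PTs, and engineers can skim after every release.
import csv
import json
import random

def sample_traces(traces_path: str, out_path: str, n: int = 50) -> None:
    with open(traces_path) as f:
        traces = [json.loads(line) for line in f]
    sample = random.sample(traces, min(n, len(traces)))
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "input", "output", "notes"])
        writer.writeheader()
        for trace in sample:
            writer.writerow({"timestamp": trace.get("timestamp", ""),
                             "input": trace.get("input", ""),
                             "output": trace.get("output", ""),
                             "notes": ""})  # blank column for the reviewer

sample_traces("traces.jsonl", "weekly_review.csv")
```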

Wrap-Up

To wrap up, the main learnings are: start by building guardrails, as they are extremely important to ensure safety and reliability. Then, use evaluations to measure performance and understand a new iteration’s performance before releasing it to production. Prompt engineering is the best place to start, and it can pretty much be the best place to finish as well. If it’s not, RAG is a good approach if the goal is to improve the model’s domain knowledge. Then, collect user feedback to understand how the system or the product is performing in production. Finally, look at your data.

Questions and Answers

Dr. Kreindler: First of all, have you got any actual metrics in terms of where you started when you first dropped whichever large language model into various areas? What the performance was, and, in a sense, numerically how it’s changed and qualitatively how it’s changed?

Clara Matos: We try to do that with evaluations. Evaluations, both human-based and non-human-based, help you track things such as acceptance metrics. You can ask humans to evaluate the outputs on a scale from, for example, 1 to 5, or with a binary decision, like I described. Then you can see that number increase over consecutive iterations. Or, as in the other example I provided, you can use similarity scores, and you can see that the similarity between the human-generated and the model-generated output increases with successive iterations.

Dr. Kreindler: Today, still, the gold standard is what the professional would have done in that circumstance. When will we get to a point where the variability in the professional, which can vary just because of skill level, because of experience, but also because they might not have had a cup of coffee on that Tuesday morning, becomes superseded, in fact, by the model, and yet the gold standard and the insured entity is still the variable human?

Clara Matos: Inter-human variability is something that we are measuring. When we have human evaluation, you can ask a group of different PTs to score the exact same clinical decision, and then you can measure agreement between them. We have seen that the agreement is not 100%, of course. Different physical therapists, who are our care providers, will give different recommendations for the same example, for the same use case. At the moment, for legal reasons, the ground truth answer is always that of the physical therapist or the doctor who is legally responsible for that patient.

Dr. Kreindler: When’s your hunch that you will have enough data on the auto-generated advice being comparable and maybe insurable? How many years have we got?

Clara Matos: It depends a lot. If you had asked that question 2 years ago, the answer would have been 10 years from now. If you’re asking it now, who knows? Each new large language model release presents a new set of capabilities. You never know, really.

Dr. Kreindler: Somewhere between tomorrow and 10 years’ time.

Clara Matos: Yes.

Dr. Kreindler: Let’s take a Bayesian guess at that.

Participant 1: In terms of your point number six of looking at the data, when working in a highly regulated industry, looking at GDPR, sensitive data can be prohibitive for developers. Have you overcome the challenge of integrating anonymization in your development pipeline in order to use anonymized or pseudo-anonymized data while maintaining a good workflow?

Clara Matos: From a legal perspective, we operate mostly in the U.S. market, and there we have to abide by HIPAA, which is a law regarding healthcare data. Basically, what it states is that anyone within the company can have access to patient information if the goal is to provide or improve care. Under that, we are safe. Of course, like you mentioned, anonymization is also something that makes sense and something that we have explored in order to protect the patient’s identity. From a legal standpoint, we are covered.

Participant 2: How easy was it to implement output guardrails while maintaining things like response streaming? Are you looking at things like DSPy to optimize your few-shot examples? Are you optimizing prompts in a programmatic way, or is it more like vibing?

Clara Matos: For streaming, like I mentioned, for real-time use cases, like the conversation Phoenix has with the patient at the end of the session, having online guardrails was a problem, and it’s not a solved problem for now. Basically, we resorted to offline guardrails. Some of the examples that I’ve shown cover that. We try to achieve most online guardrails through prompt engineering, and then, once the conversation has been held, we have a high level of post-conversation analysis that helps us understand what was discussed and whether any red line was crossed.

Then, regarding the second one, we haven’t, not yet at the moment. We are currently taking advantage of things such as few-shot examples. Also, with increased model capability, something we have seen is that prompt engineering is losing a bit of importance, if that makes sense, because nowadays these models are getting way better at following instructions. Through a combination of few-shot examples and clear instructions, we haven’t yet had the need to resort to something like DSPy, as you described.

Participant 3: I was curious about how you’re dealing with incorrect answers. In the medical space, I’m sure if you tell someone with a torn ACL, “Twist your knee”, that’s bad. How are you adding in those guardrails?

Clara Matos: We don’t have clinical decisions or clinical feedback being provided to the patient without a human in the loop. For everything that’s clinical related, we always have the physical therapist reviewing or accepting the recommendation or changing it if they do not agree with it. That’s like our human guardrail.

Dr. Kreindler: Is it then that the output is an aide-mémoire more than an output?

Clara Matos: Yes. Like I mentioned, Phoenix acts as the physical therapist’s co-pilot. It’s a bit like writing code in an IDE: we see what’s happening, and we can approve or reject everything we receive as a recommendation or suggestion. It’s the same, but applied to a clinical setting. The PTs have access to a large back office where they can control everything that’s going on with their patients, and where we provide the recommendations; PTs can accept, reject, or change them. Then we learn from it, and it builds a feedback loop.

Dr. Kreindler: I’d be fascinated to see whether very similar inputs from multiple different clinicians result in variability of output and whether that’s measurable. That’d be very interesting to know.

Clara Matos: This is the thing that I was discussing. This is an example, it’s not real data. This is a mockup, of course. Yes, you can see that this is like the back office of the physical therapist where you can see everything that he needs to do for Jane, Jenny Jackson.

Participant 4: Are there any specific changes that you have to make in your software development to comply with the regulations? I would imagine that in the U.S. you’re under FDA regulations. Any specific areas that you have to do for regulatory purposes?

Clara Matos: Yes, the development cycle is designed to be compliant with, like you mentioned, FDA regulations. We received FDA approval a long time ago. For me, it’s just the way we develop. I cannot say specifically what we did to address that. I know that the process was designed with the FDA in mind, in order for us to be compliant. It’s embedded, and it’s part of the usual development cycle.

Dr. Kreindler: Normally in the U.S., a clinical decision support system is almost a self-regulated thing because you’re still taking responsibility. It’s not that the machine is actually allowed to do anything on its own. That then becomes Class II. Is it IIb?

Clara Matos: Yes, I think we are Class I.

Dr. Kreindler: You think you’re I?

Participant 4: Are you Class II or Class I?

Dr. Kreindler: Class I.

Clara Matos: Class I. Yes, the least invasive one.

Participant 5: My question is about RAG. We know that it contains many steps, but one of the most specific is chunking. Is there a best chunking approach for the context? What’s the most efficient strategy for chunking?

Clara Matos: Luckily, that problem is solved for us, because our chunking unit is the article, and each article is fairly small and covers a single topic. We don’t have to deal with chunking; we consider each knowledge article to be a chunk.

Participant 6: You do have an AI agent in multiple applications.

Clara Matos: It’s the same in multiple applications.

Participant 6: It’s the same. Does it mean that it has all the context of this user at all times? Does it have all that knowledge over time? Does it build it, understand it, store it somewhere, and leverage it as time passes? Or does it rely on just short-lived sessions?

Clara Matos: Yes. It has all the memory from the current user, everything the user has done so far in their interactions with our products. Also, we have access to past healthcare data from the patient as well. Then it learns from the crowd as well: it also has access to decisions that PTs made for patients in a similar condition. It also learns from its peers.

Participant 6: Does it mean that when I’m speaking with a physical therapist, I can tell it, do you remember that move I did during the session which caused me pain? It has awareness of that.

Clara Matos: Yes, it’s like full cycle, yes.

Participant 6: How do you ingest all that user interaction? What datastore system do you use?

Clara Matos: Our memory system is currently composed of MySQL databases and vector databases. It depends a bit on the use case and the type of information we want to store. It’s the usual persistence databases; it depends on the use case.

Participant 7: In looking at the data, you’ve put great tooling in place, it looks like to make that accessible for everyone in the team. How do you actually build a culture where people want to go and look at that data and make time to do so alongside everything else that they’re doing in their jobs?

Clara Matos: How do you convince them? That’s a struggle for sure. Sometimes I sound a bit like a priest: have we done the analysis to understand what has happened here? You really have to advocate for it. The way we try to do it is to lead by example. I actively do it a lot. Doing a run on production data gives you a very good intuition about what’s going on and where the system is failing, and showcasing those examples helps. I think the best way to convince developers to do something is to find a bug in something they have built. I try to do that. Then, people end up seeing the value in it if you can show them, ok, do you know that this is happening in production? Then, yes, it builds from there.

 
