Swedish welfare authorities suspend ‘discriminatory’ AI model | Computer Weekly

News Room
Published 22 November 2025

A “discriminatory” artificial intelligence (AI) model used by Sweden’s social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country’s Data Protection Authority (IMY).

IMY’s inspection, which began in June 2025, was prompted by a joint investigation by Lighthouse Reports and Svenska Dagbladet (SvB), which revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, the Swedish Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.

These included women, individuals with “foreign” backgrounds, low-income earners and people without university degrees. The media outlets also found the same system was largely ineffective at identifying men and wealthy people who had actually committed some kind of social security fraud.

These findings prompted Amnesty International to publicly call in November 2024 for the system’s immediate discontinuation, describing it at the time as “dehumanising” and “akin to a witch hunt”.

Introduced by Försäkringskassan in 2013, the ML-based system assigns a risk score to each social security applicant; if the score is high enough, an investigation is automatically triggered.
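
In broad terms, a threshold-based flagging flow of this kind can be sketched as follows. This is a minimal, hypothetical illustration in Python: the inputs, weights and cut-off are invented for the example, and Försäkringskassan has not published the details of its actual model.

from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    prior_flags: int            # hypothetical input
    reported_absence_days: int  # hypothetical input

def risk_score(app: Application) -> float:
    # Stand-in scoring function; a real system would use a trained ML model.
    return min(1.0, 0.2 * app.prior_flags + 0.05 * app.reported_absence_days)

RISK_THRESHOLD = 0.8  # hypothetical cut-off

def flag_for_investigation(applications: list[Application]) -> list[str]:
    # Return the IDs of applicants whose score meets the threshold,
    # i.e. those for whom an investigation would be triggered automatically.
    return [a.applicant_id for a in applications if risk_score(a) >= RISK_THRESHOLD]

sample = [
    Application("A1", prior_flags=0, reported_absence_days=3),
    Application("A2", prior_flags=4, reported_absence_days=10),
]
print(flag_for_investigation(sample))  # ['A2']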

According to a blog post published by IMY on 18 November 2025, Försäkringskassan was specifically using the system to conduct targeted checks on recipients of temporary child support benefits – which are designed to compensate parents for taking time off work to care for their sick children – but stopped using it while the inspection was ongoing.

“While the inspection was ongoing, the Swedish Social Insurance Agency took the AI system out of use,” said IMY lawyer Måns Lysén. “Since the system is no longer in use and any risks with the system have ceased, we have assessed that we can close the case. Personal data is increasingly being processed with AI, so it is welcome that this use is being recognised and discussed. Both authorities and others need to ensure that AI use complies with the [General Data Protection Regulation] GDPR and now also the AI regulation, which is gradually coming into force.”

IMY added that Försäkringskassan “does not currently plan to resume the current risk profile”.

Under the European Union’s AI Act, which came into force on 1 August 2024, the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and to guarantee mitigation measures are in place before use. Systems considered to be tools for social scoring are prohibited outright.

Computer Weekly contacted Försäkringskassan about the suspension of the system, and why it elected to discontinue its use before IMY’s inspection had concluded.

“We discontinued the use of the risk assessment profile in order to assess whether it complies with the new European AI regulation,” said a spokesperson. “We have at the moment no plans to put it back into use since we now receive absence data from employers among other data, which is expected to provide a relatively good accuracy.”

Försäkringskassan previously told Computer Weekly in November 2024 that “the system operates in full compliance with Swedish law”, and that applicants entitled to benefits “will receive them regardless of whether their application was flagged”.

In response to Lighthouse and SvB’s claims that the agency had not been fully transparent about the inner workings of the system, Försäkringskassan added that “revealing the specifics of how the system operates could enable individuals to bypass detection”.

Similar systems

AI-based systems used by other countries to distribute benefits or investigate fraud have faced comparable problems.

In November 2024, for example, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.

In the UK, an internal assessment by the Department for Work and Pensions (DWP) – released under Freedom of Information (FoI) rules to the Public Law Project – found that an ML system used to vet thousands of Universal Credit benefit payments was showing “statistically significant” disparities when selecting who to investigate for possible fraud.

Carried out in February 2024, the assessment showed there is a “statistically significant referral … and outcome disparity for all the protected characteristics analysed”, which included people’s age, disability, marital status and nationality.
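
The DWP has not published the details of this assessment, but as a rough sketch of the general idea, a difference in referral rates between two groups can be tested for statistical significance with a standard two-proportion z-test. The Python below uses invented numbers purely for illustration and does not represent the DWP’s methodology.

from math import erf, sqrt

def two_proportion_z_test(referred_a, total_a, referred_b, total_b):
    # Two-sided z-test for a difference in referral rates between groups A and B.
    p_a, p_b = referred_a / total_a, referred_b / total_b
    pooled = (referred_a + referred_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical counts: group A has 300 referrals out of 10,000 claims, group B 220 out of 10,000.
z, p = two_proportion_z_test(300, 10_000, 220, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value would indicate a significant disparity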

Civil rights groups later criticised DWP in July 2025 for a “worrying lack of transparency” over how it is embedding AI throughout the UK’s social security system, which is being used to determine people’s eligibility for social security schemes such as Universal Credit or Personal Independence Payment.

In separate reports published around the same time, both Amnesty International and Big Brother Watch highlighted the clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.
