Top 10 technology ethics stories of 2025 | Computer Weekly

News Room | Published 30 December 2025

Throughout 2025, Computer Weekly’s technology and ethics coverage highlighted the human and socio-technical impacts of data-driven systems, particularly artificial intelligence (AI).

This included a number of reports on how the Home Office’s electronic visa (eVisa) system, which has been plagued by data quality and integrity issues from the outset, is affecting migrants in the UK; the progress of both domestic and international efforts to regulate AI; and debates around the ethics of autonomous weaponry.

A number of stories also covered the role major technology companies have played in Israel’s genocide against Palestinians, which includes providing key digital infrastructure and tools that have enabled mass killings.

In June 2025, Computer Weekly reported on ongoing technical difficulties with the Home Office’s eVisa system, which have left scores of people living in the UK with no means of reliably proving their immigration status or “right” to be in the country.

Those affected by the eVisa system’s technical failings told Computer Weekly, on condition of anonymity, that the entire experience had been “anxiety-inducing” and described how their lives had been thrust into “uncertainty” by the transition to a digital, online-only immigration system.

Each also described how the “inordinate amount of stress” associated with not being able to reliably prove their immigration status had been made worse by a lack of responsiveness and help from the Home Office, which they accused of essentially leaving them in the lurch.

In one case reported to the Information Commissioner’s Office, the errors in the data held by the Home Office were so severe that the regulator found a breach of UK data protection law.

Following the initial AI Safety Summit at Bletchley Park in November 2023 and the follow-up AI Seoul Summit in May 2024, the third summit in the series – the AI Action Summit in Paris – saw dozens of governments and companies outline their commitments to making the technology open and sustainable, and to ensuring it works in the “public interest”.

However, speaking with Computer Weekly, AI experts and summit attendees said there was a clear tension in the direction of travel, with the technology caught between competing rhetorical and developmental imperatives.

They noted, for example, that while the emphasis on AI as an open, public asset was promising, there was worryingly little in place to prevent further centralisations of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.

They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – were making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.

Despite these tensions, there was consensus that the summit opened more room for competing visions of AI, even if there was no guarantee these would win out in the long run.

In February 2025, Google parent Alphabet dropped its pledge not to use AI in weapons systems or surveillance tools, citing a need to support the national security of “democracies”.

Despite previous commitments explicitly stating it would “not pursue” the building of AI-powered weapons, Google – whose motto “Don’t be evil” was replaced in 2015 with “Do the right thing” – said it believed “democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights”.

For military technology experts, however, the move represented a worrying change. They noted that while companies such as Google had already been supplying military technology to a range of actors, including the US and Israel, the decision “indicates a worrying acceptance of building out a war economy” and “signals that there is a significant market position in making AI for military purposes”.

Google’s decision was also roundly condemned by human rights organisations across the globe, which called it “shameful” and warned it would set a “dangerous” precedent.

Speaking during an event hosted by the Alan Turing Institute, military planners and industry figures claimed that using AI in military contexts could unlock a range of benefits for defence organisations, and even went as far as claiming there was an ethical imperative to deploy AI in the military.

The lone voice on the panel not representing industry or military interests, Elke Schwarz – a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies – warned there was a clear tension between speed and control baked into the technology.

She argued that this “intractable problem” with AI risks taking humans further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.

Highlighting the reality that many of today’s AI systems are simply not very good yet, she also warned against making “wildly optimistic” claims about the revolutionary impacts of the technology in every aspect of life, including warfare.

Workers in Kenya employed to train and maintain the AI systems of major technology companies formed the Data Labelers Association (DLA) this year to challenge the “systemic injustices” they face in the workplace, with 339 members joining the organisation in its first week.

While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions.

Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers were tremendously underpaid – often earning just cents for tasks that take hours to complete – and faced frequent, never-resolved disputes over withheld wages.

During the launch, DLA secretary Michael Geoffrey Abuyabo Asia said weak labour laws in Kenya were being deliberately exploited by tech companies looking to cheaply outsource their data annotation work.

The Home Office is operating at least eight AI-powered surveillance towers along the south-east coast of England, which critics say are contributing to migrant deaths in the English Channel. The towers, they argue, are a physical marker of increasing border militarisation that is pushing people into taking ever more dangerous routes.

As part of a project to map the state of England’s coastal surveillance, the Migrants’ Rights Network (MRN) and researcher Samuel Story identified eight operational autonomous surveillance towers between Hastings and Margate, where people seeking asylum via the Channel often land, as well as two more that had either been dismantled or relocated.

Responding to their freedom of information (FoI) requests, the Home Office itself also tacitly acknowledged that increased border surveillance would place migrants crossing the Channel in “even greater jeopardy”.

Created by US defence company Anduril – the Elvish name for Aragorn’s sword in The Lord of the Rings, which translates to “flame of the west” – the 5.5m-tall maritime sentry towers are fitted with radar, as well as thermal and electro-optical imaging sensors, enabling the detection of “small boats” and other water-borne objects in a nine-mile radius.

Underpinned by Lattice OS, an AI-powered operating system marketed primarily to defence organisations, the towers are capable of autonomously piecing together data collected from thousands of different sources, such as sensors or drones operated by Anduril, to create a “real-time understanding of the environment”.

The European Commission has been ignoring calls to reassess Israel’s data adequacy status for over a year, despite “urgent concerns” about the country’s data protection framework and “repressive” conduct in Gaza.

In April 2024, a coalition of 17 civil society groups coordinated by European Digital Rights signed an open letter voicing concerns about the commission’s January 2024 decision to uphold Israel’s adequacy status. That status permits the continued free flow of data between Israel and the European Union on the basis that the two have “essentially equivalent” data protection standards.

Despite their calls for clarification from the commission on “six pivotal matters” – including the rule of law in Israel, the scope of its data protection frameworks, the role of intelligence agencies, and the onward transfer of data beyond Israel’s internationally recognised borders – the groups received no response, prompting them to author a second open letter in June 2025.

They said it was clear the commission was unwilling to uphold its own standards when doing so was politically inconvenient.

Given that Israel’s tech sector accounts for 20% of its overall economic output and 53% of total exports, according to a mid-2024 report published by the Israel Innovation Authority, losing adequacy could have a profound effect on the country’s overall economy.

The European Commission told Computer Weekly it was aware of the open letters, but did not answer questions about why it had not responded.

Francesca Albanese, the UN special rapporteur for the human rights situation in Palestine, said in July 2025 that technology firms globally were actively “aiding and abetting” Israel’s “crimes of apartheid and genocide” against Palestinians, and issued an urgent call for companies to cease their business activities in the region.

In particular, she highlighted how the “repression of Palestinians has become progressively automated” by the increasing supply of powerful military and surveillance technologies to Israel, including drones, AI-powered targeting systems, cloud computing infrastructure, data analytics tools, biometric databases and high-tech weaponry.

She said that if the companies supplying these technologies had conducted the proper human rights due diligence – including IBM, Microsoft, Alphabet, Amazon and Palantir – they would have divested “long ago” from involvement in Israel’s illegal occupation of Gaza and the West Bank.

“After October 2023, long-standing systems of control, exploitation and dispossession metamorphosed into economic, technological and political infrastructures mobilised to inflict mass violence and immense destruction,” she said. “Entities that previously enabled and profited from Palestinian elimination and erasure within the economy of occupation, instead of disengaging, are now involved in the economy of genocide.”

Albanese pointed out, however, that under international law the mere fact that due diligence has been conducted does not absolve companies of legal liability for their role in abuses; rather, a company’s liability is determined by both its actions and the ultimate human rights impact.

Later, in October 2025, human rights organisations jointly called for Microsoft to immediately end any involvement with the “Israeli authorities’ systemic repression of Palestinians” and work to prevent its products or services being used to commit further “atrocity crimes”.

This followed credible allegations that Microsoft Azure was being used to facilitate mass surveillance and lethal force against Palestinians, which prompted the company to suspend services to the Israeli military unit responsible.

As part of a joint parliamentary inquiry set up to examine how human rights can be protected in “the age of artificial intelligence”, expert witnesses told MPs and Lords that the UK government’s “uncritical and deregulatory” approach to AI would ultimately fail to deal with the technology’s highly scalable harms, and could lead to further public disenfranchisement.

“AI is regulated in the UK, but only incidentally and not well … we’re looking at a system that has big gaps in [regulatory] coverage,” said Michael Birtwistle, the Ada Lovelace Institute’s associate director of law and policy, adding that while the AI Opportunities Action Plan published by the government in January 2025 outlined “significant ambitions to grow AI adoption”, it contained little on what actions could be taken to mitigate AI risks, and made “no mention of human rights”.

Experts also warned that the government’s current approach, which they said favours economic growth and the commercial interests of industry above all else, could further deepen public disenfranchisement if it failed to protect ordinary people’s rights and made them feel like technology was being imposed on them from above.

Witnesses also spoke about the risk of AI exacerbating many existing issues, particularly around discrimination in society, by automating processes in ways that project historical inequalities or injustices into the future.

In January 2025, Computer Weekly reported on how Black mothers from Birmingham had organised a community-led data initiative that aims to ensure their perinatal healthcare concerns are taken seriously by medical professionals.

Drawn together through Maternity Engagement Action (MEA) – an organisation that provides safe spaces and leadership for Black women throughout pregnancy, birth and early motherhood – the women came together over their shared concern about the significant challenges Black women face when seeking reproductive healthcare.

Through a process of qualitative data gathering – entailing discussions, surveys, workshops, training sessions and meetings – the women developed a participatory, community-focused approach to Black perinatal healthcare, culminating in the launch of MEA’s See Me, Hear Me campaign.

Speaking with Computer Weekly, Tamanda Walker – a sociologist and founder of community-focused research organisation Roots & Rigour – explained how the initiative ultimately aims to shift from the top-down approach that currently defines Black perinatal healthcare to one where community data and input drive systemic change in ways that better meet the needs of local women.
