Tech sector progress on AI accountability threatens to stall 

While 38% of major tech companies publish their ethical AI principles, none of them disclose their human rights impact assessment results, exposing weak accountability across the sector.

AI dominance raises stakes for accountability across the value chain

Artificial intelligence is reshaping the global economy and turning its leading companies into key players in international power struggles. Record-breaking valuations and the entrenched dominance of hyperscalers[1] such as Alphabet, Amazon and Meta demonstrate how a small group of actors has come to control the computational backbone of the internet and AI.

Yet the influence of AI stretches far beyond the tech sector. The reality and ambition of automation now underpin food systems, governments, healthcare, finance, academia and military operations, raising urgent concerns about corporate accountability and human rights with regard to AI deployment and use.

Companies in every layer of the AI value chain have a responsibility to act as stewards of good governance. The largest corporate actors in industries that rely on digital technology have embedded AI systems into their operations, from hardware and chip design to cloud services and consumer-facing applications. Any company that develops, procures or deploys these systems can affect risks and rights downstream.

Rift between AI principles and responsible practice is growing

Although universal AI governance standards are still emerging, many of the frameworks that do exist coalesce around several core expectations.

Ethical AI principles are one of them. They offer the public a chance to understand how each company interprets its responsibilities and create a starting point for accountability. Software giants, telecom operators, retail companies, passenger transport firms and other highly digitised corporate actors should all be expected to explain their stance on ethical AI.

Yet our findings show that progress on even this basic form of transparency is slowing down. Only nine companies[2] reported their AI principles for the first time in 2025, compared with 19 in 2024. While the overall trend is still positive, with more companies reporting their principles each year, major industry-shaping firms such as ASML, Oracle, SK Hynix and TSMC, and platforms that millions of people use every day, such as Spotify and Uber, still have no public AI principles. What’s more, some of these firms have been persistently unresponsive to investors’ efforts to engage with them on their AI practices, as we revealed last month in a report on our Collective Impact Coalition for Ethical AI.

Indeed, most tech giants are far from fulfilling even fundamental, achievable criteria. Out of the 200 companies assessed:

  • only 38% had public AI principles (77 companies);
  • roughly one in five committed to any regional or international AI framework (38 companies) or incorporated respect for human rights into their principles (41 companies);
  • just 12% explained their internal AI governance mechanisms (24 companies);
  • not a single company showed proof of conducting comprehensive human rights impact assessments (HRIAs) on the AI systems they developed, procured or deployed.

The lack of transparency on human rights impact assessments is particularly striking. It is pivotal for companies to aim high here, as the new generation of AI tools can severely distort public discourse, amplify misinformation, facilitate arbitrary surveillance and act as vehicles for discrimination and gender-based violence. Without more transparent assessments, these risks will fall through the cracks and escalate into actual harms.

Moreover, even companies that claimed to conduct appropriate HRIAs rarely disclosed their scope, the groups affected, the AI functionalities covered or the assessment results. All of these are essential to robust accountability and are enshrined in both guidance from human rights institutes and emerging legislation. As it stands, only Microsoft’s commissioned assessment of technologies licensed to law enforcement agencies comes close to meeting WBA’s strengthened assessment criteria.

There are signs of progress. Compared to 2023, far more companies now acknowledge AI as a material risk, signalling a shift away from pure hype towards recognition of AI’s real-world impacts. For instance, NEC elevated human-rights-respecting AI to its top sustainability priority, LG Electronics published dedicated AI accountability reporting, and Salesforce linked AI to environmental sustainability. The newest Ranking Digital Rights Index (RDR Index) also found major tech companies adopting detailed new policies to guide their algorithmic governance and improving on algorithmic transparency overall.

AI transparency risks hitting a glass ceiling

Publishing AI principles is only the first step towards improving human rights outcomes. Without more robust disclosures and buy-in across industries, even this initial action risks stalling before it becomes the norm. Companies that are further ahead should make strengthening and clearly explaining their impact assessment processes a key priority, especially as the use and development of some AI systems will require mandatory assessments in the EU this year. Early movers will set the tone for their peers.

The current momentum is fragile. The slowdown in the disclosure of AI principles and the near-total absence of meaningful HRIAs cannot become the norm in a sector that defines modern infrastructure. Companies must connect high-level principles to deeper transparency and be willing to disclose HRIA results, which are essential for understanding who is affected and how, and what risks require mitigation. Greater clarity on supply-chain relationships is also needed to assign responsibility across the ecosystem.

These challenges are only the starting point, but they unite consumers, governments, investors, and civil society, all of whom are demanding a more accountable and rights-respecting technological future.

[1] Hyperscalers are large, global cloud computing and data management companies that operate the vast infrastructures that underpin the modern internet, with capacity to handle massive increases in demand.
[2] The nine companies were Analog Devices, Digital Realty Trust, e&, HCL, Kyocera, Logitech International, MTN, ServiceNow and Tele2.
