
New data on ethical AI to ring in the Global Digital Compact


With its release of ChatGPT in 2022, OpenAI launched the global AI race across the tech sector. That same year, WBA launched the multi-stakeholder Collective Impact Coalition (CIC) for Ethical AI with one clear vision: to push the world’s tech giants towards greater transparency and accountability in how they develop and use AI. With 61 investors and 13 civil society groups behind us, we set out on a journey to close the glaring gaps in transparency around responsible AI practices that WBA’s Digital Inclusion Benchmark had previously exposed. Our message was simple: companies must step up and commit publicly to ethical AI.

Research has shown that AI systems can perpetuate and even exacerbate existing inequalities due to biases in data and algorithmic design. For instance, racial minorities, women and non-binary individuals often face biased or harmful outcomes as a result of discriminatory hiring algorithms, facial recognition systems and predictive policing tools. New generative AI systems also produce answers that often minimise, condemn or omit women and minorities, among a laundry list of other concerns regarding their impact on society.

Fast forward to September 2024: the UN Global Digital Compact (GDC), the first global framework to establish international norms for a safe and inclusive digital environment, is set to be adopted at the Summit of the Future, hosted by the UN General Assembly.

Among a slew of other goals, from digital inclusion to privacy, the Compact includes principles aimed at mitigating the risks associated with AI technologies. Meanwhile, new data from WBA shows that 71 of the 200 largest digital companies now have AI principles in place, just over a third of the industry's biggest players. That's up from just 52 last September, and we know of at least a few more companies that are planning to announce their principles before the year ends. We're encouraged to see that over half of these principles now include human rights considerations, an important step towards more responsible AI development.

What do the numbers tell us? 

On the surface, companies have made progress. The development of comprehensive ethical AI documents has seen notable growth. Sixty-six companies now have AI principles that they developed themselves (as opposed to endorsing third-party principles), and 60 of those companies have released standalone documents outlining their commitments.  

Human rights considerations are an essential component of responsible AI governance, but progress in this area has been slower than anticipated. While the number of companies with ethical AI principles has grown, the share of those principles that explicitly address human rights has actually declined: in 2024, 38 out of 71 companies (53%) explicitly embedded human rights into their AI principles, compared to 31 out of 52 (60%) in 2023. In other words, companies are adopting AI principles faster than they are integrating human rights into them.
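
The arithmetic behind that trend is easy to check; here is a minimal sketch using only the figures reported above (the rounded percentages in the text correspond to 59.6% and 53.5%):

```python
# Quick check of the shares cited above: the absolute number of companies
# embedding human rights in their AI principles rose, but the share fell.
counts = {2023: (31, 52), 2024: (38, 71)}  # year: (with human rights, with AI principles)

for year, (hr, total) in counts.items():
    print(f"{year}: {hr}/{total} = {hr / total:.1%}")

# Prints:
# 2023: 31/52 = 59.6%
# 2024: 38/71 = 53.5%
```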

The real test, however, lies in how companies operationalise their commitments, and this remains a significant challenge. The number of companies publicly disclosing how they implement their AI principles rose from 8 in 2023 to 29 in 2024, a positive step towards greater transparency. Yet that still leaves most of the 71 companies with ethical AI principles saying nothing publicly about implementation.

Similarly, the number of companies with relevant internal governance structures, such as ethical AI committees, grew from 19 in 2023 to 41 in 2024. While these numbers are encouraging, they also show that many companies have yet to fully operationalise their ethical AI principles. Nor does forming a committee guarantee that a company will consistently choose responsible growth paths. The gap between high-level commitments and concrete action remains a critical area for improvement, and more research is needed to distinguish effective governance structures from those that exist only on paper.

The most striking gap, however, lies with human rights impact assessments (HRIAs) on AI systems. Only 16 companies conducted HRIAs in 2024, an even smaller fraction of the 71 that claim to be committed to ethical AI, though this does mark progress from just 6 companies in 2023. This blind spot is especially alarming given recent legislation such as the EU Artificial Intelligence Act, which from 2026 will require Fundamental Rights Impact Assessments (FRIAs) from deployers of certain high-risk AI systems.

What’s next? 

While these commitments are a step in the right direction, they are only a first step. The next challenge lies in tracking how these principles are implemented in practice. Many companies' reporting on their AI operations lacks transparency, making it difficult to assess whether they are truly living up to their ethical AI commitments. At present, 184 of the 200 companies still fail to disclose how they assess the human rights impacts of their AI, despite these technologies shaping the lives of millions globally. This gap between promises and tangible actions is exactly what we aim to close.

The Global Digital Compact and national-level legislation have a key role to play in bridging this gap. By emphasising transparency and accountability, the GDC can serve as a global framework that works in tandem with national laws to ensure companies not only commit to ethical AI but also actively demonstrate how they are putting their principles into practice. But high-level commitments alone are not enough. The UN and GDC signatories must do more to clarify the responsibilities of businesses and to establish a pathway towards clear consequences for companies that fail to meet the goals set out in the Compact.

Through the Collective Impact Coalition for Ethical AI, we will also continue to push companies to move beyond rhetoric and show real progress in operationalising their AI principles. This includes conducting HRIAs and establishing robust governance mechanisms to oversee their AI systems. The process is not without its challenges, however: a major obstacle is the lack of clear, comprehensive guidelines for conducting HRIAs on AI systems. Developing such guidelines is an urgent next step.

In conjunction with the GDC, national legislation and structured collective action can make ethical AI a tangible reality. By establishing clear expectations and accountability mechanisms, we can move from mere principles to meaningful practices, ensuring that AI serves humanity in a responsible and rights-respecting way.
