2024 Investor Statement on Ethical AI

The World Benchmarking Alliance’s (WBA) Digital Inclusion Benchmark tracks the performance of the world’s most influential digital technology companies in four areas of digital inclusion: enhancing universal access to digital technologies; improving all levels of digital skills; fostering trustworthy use; and innovating openly and ethically. As of the 2023 Digital Inclusion Benchmark, published in March 2023, only 44 out of 200 digital technology companies disclosed the principles they follow in the development, deployment, and/or procurement of ethical artificial intelligence (AI) tools. While this was an improvement over the 2021 results, the pace of progress remains insufficient. In the face of the significant risks stemming from the new generation of AI tools, companies must move beyond high-level principles to demonstrate robust AI governance and implementation. 

As responsible investor participants in WBA’s Collective Impact Coalition (CIC) for Ethical AI, with over USD 8.5 trillion in assets under management, we believe that inclusion and trust are essential to harnessing the full positive potential of digital technologies to enable the achievement of the 17 UN Sustainable Development Goals. This is particularly evident in the proliferation of AI applications across many domains, such as finance, health, media and entertainment, advertising, law enforcement, and human capital management.  

We have seen that AI can help to improve the accuracy of medical diagnoses, broaden financial inclusion, slow the spread of abusive content online, and bolster accessibility for people with disabilities. At the same time, evidence has shown that AI may increase the risk of harms such as bias and discrimination; invasion of privacy; denial of individual rights; and non-transparent, unexplainable, unsafe, and unjust outcomes. 

Both the risks and opportunities of AI have materialised with exceptional speed in the last two years. The rise of generative AI (GenAI) tools, which produce text and audiovisual content from models trained on very large datasets, illustrates the potential of algorithmic systems to expand the frontiers of innovation, but also their capacity to exacerbate and distribute harms, especially to vulnerable groups. These developments accompany long-held concerns about more established uses of AI, particularly high-risk deployments. 

If leading digital technology companies fail to adopt, implement, and disclose robust governance policies and controls, backed by strong ethical principles, they face reputational as well as revenue losses, and society as a whole faces tremendous risk. Thus, it is necessary to raise the bar. 

Ethical AI is a critical area of digital inclusion that requires systems change as identified by WBA. A stronger commitment to ethical AI, manifested through ethical AI principles and robust AI governance and oversight, evidence of their implementation, and comprehensive impact assessments, will allow the developers of AI applications to contribute to sustainable development, build trust with users, and reduce the risks and harms to individuals, companies, and society. 

Since September 2022, we have engaged with 44 companies assessed by WBA’s Digital Inclusion Benchmark on ethical AI that did not yet have publicly available ethical AI principles. We have seen some progress in the adoption of these principles, with 52 out of 200 companies disclosing them as of September 2023. However, this still represents only about a quarter of the 200 most influential technology companies in the world meeting a minimum standard of disclosure.  

We believe that in addition to the publication of high-level principles, companies must demonstrate robust AI governance and implementation of safeguards in the face of significant risks stemming from the new generation of AI tools. 

We encourage the companies we invest in to implement policies and mechanisms to ensure the ethical development and application of AI, guided by respect for human rights and the principle of leaving no one behind. We specifically ask that companies implement, demonstrate, and publicly disclose: 

  1. a set of ethical principles that guide the company’s development, deployment, and/or procurement of AI tools;
  2. strong AI governance and oversight across the value chain of AI development and use;
  3. how these principles are implemented via specific tools and programs of action relevant to the company’s business model, including on the product and service level;
  4. impact assessment processes applied to AI, emphasizing human rights impact assessments (HRIAs), especially in high-risk use cases.

Such actions and disclosures will signal that a company gives serious attention to this issue from the highest levels of management. 

In this new phase of the Collective Impact Coalition for Ethical AI, we will continue our collective engagements with companies on this issue that began in September 2022. We will also expand our engagement efforts to all 200 companies evaluated in the Digital Inclusion Benchmark. We encourage other investors and their representatives to join us in signing this statement. 


A collaborative initiative organised by the World Benchmarking Alliance

Lead investors:

Investor participants as of February 2024:
