What’s next for Investor Engagement on Ethical AI?
When? Friday, 11th of October, 8:30–10:30am
Where? NEI Investment, 12 – 151 Yonge St, Toronto
This roundtable will focus on the progress that has been made by investors engaging technology companies on Ethical AI. How can this investor-company engagement be further strengthened? And how can investors work with other actors (e.g. governments, civil society organisations) to strengthen Ethical AI governance mechanisms? This session will be guided by short interventions from experts working in this space and welcomes input from all attendees.
Interested participants should sign up by Monday 7th October; a confirmation email will be sent out on Tuesday 8th October. If you have any questions, please email Nikki Gwilliam-Beeharee, n.gwilliam@worldbenchmarkingalliance.org.
Register here

Context: Both the risks and opportunities of AI have materialised with exceptional speed. The rise of generative AI (GenAI) tools, which generate text and audiovisual content from very large datasets, illustrates the potential of algorithmic systems to expand the frontiers of innovation but also their capacity to exacerbate and distribute harms, especially to vulnerable groups. These developments accompany long-held concerns about more established uses of AI, particularly high-risk deployments. If leading digital technology companies fail to adopt, implement, and disclose robust governance policies and controls, backed by strong ethical principles, they face reputational as well as revenue losses, and society as a whole faces tremendous risk. Thus, it is necessary to raise the bar.
Since September 2022, members of WBA’s Ethical AI Collective Impact Coalition have been engaging companies assessed by WBA’s Digital Inclusion Benchmark on ethical AI, focussing initially on companies that did not yet have publicly available ethical AI principles. There has been some progress in the adoption of these principles, with 52 out of 200 companies disclosing them as of September 2023. In February 2024, a new phase of the Collective Impact Coalition for Ethical AI was launched, supported by investors representing over USD 8.5 trillion in assets under management. This second phase of the engagement is ongoing, with investors encouraging companies to implement policies and mechanisms to ensure the ethical development and application of AI, guided by respect for human rights and the principle of leaving no one behind.
Over the past year, government attention to AI, and in some cases regulation of it, has also been evolving. For instance, the first Global AI Safety Summit took place at Bletchley Park, UK in early November 2023. The summit, which drew together government officials from 28 countries, representatives from the biggest global tech sector companies, academia and civil society organisations, led to the adoption of The Bletchley Declaration, which includes agreements to support the development of an independent ‘State of the Science’ report and to establish AI Safety Institutes. Alongside this, the EU AI Act entered into force in August 2024. The act establishes clear obligations on AI, including a mandatory fundamental rights impact assessment, and targets the harmonisation of rules on AI systems in the EU. On the global stage, the UN Global Digital Compact (GDC) was finalised and launched at the 2024 United Nations Summit of the Future. The GDC is intended to be the first global framework to establish cross-cutting international norms and standards for the building blocks of a safe and inclusive digital environment, from expanding connectivity to protecting privacy. The GDC is set to lay a foundation for private sector accountability by articulating its responsibility to uphold, promote, and protect human rights online, integrate human rights law into emerging technologies, and mitigate risks associated with AI technologies.