Filling the Governance Void: Corporate Responsibility in the Age of Unregulated AI

Artificial Intelligence is increasingly shaping our lives through an array of sectoral applications, from medical diagnosis and supply-chain efficiency in agribusiness to cybercrime detection and prevention. Yet many regions still lack strong legislation to address the human rights challenges that AI brings, and that gap is worrying. There is, therefore, a growing need to hasten the creation of effective accountability mechanisms. While the adoption of the UN Global Digital Compact is a step in the right direction, it is imperative that the Compact's enforcement mechanisms be put in place to ensure AI benefits everyone.
To understand why corporate accountability is crucial in this process, let’s start by imagining a scenario that may sound far-fetched but is increasingly plausible. Imagine a financial services firm that has adopted an AI-driven system to help assess loan applications. The system uses a vast amount of data to predict the likelihood of loan repayment and assign credit scores. While it initially performs well, over time, the system begins to make biased decisions, rejecting loan applications from certain demographic groups at disproportionately high rates. People who have been wrongly denied loans, perhaps even struggling families or small business owners, feel the devastating effects of this error. But who is held accountable for the harm caused?
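To make the problem concrete, here is a minimal sketch, in Python, of the kind of fairness check such a firm could run on its own decision logs. Everything in it is an illustrative assumption: the group labels, the decision-log format, the function names and the 0.8 threshold (a common "four-fifths" heuristic) are not drawn from any particular firm's system or any legal standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the loan-approval rate for each demographic group.

    `decisions` is an iterable of (group, approved) pairs, e.g.
    ("group_a", True). Group labels here are purely illustrative.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the highest group's rate (the common "four-fifths" heuristic)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# Example with made-up decision logs: group_b is approved far less often.
decisions = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
          + [("group_b", True)] * 45 + [("group_b", False)] * 55
rates = approval_rates(decisions)
print(rates)                          # {'group_a': 0.8, 'group_b': 0.45}
print(disparate_impact_flags(rates))  # ['group_b']
```

A check this simple will not catch every form of bias, but the point stands: if a firm never even runs such a comparison on its own outcomes, the first people to discover the disparity will be the applicants it harms.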
This hypothetical example highlights a key issue: when AI systems are deployed without sufficient oversight, they can perpetuate bias, discrimination and harmful outcomes. But the story doesn’t end there. The real challenge lies in the fact that the company behind the AI system may not be held accountable for its failure. Instead, the individuals and communities affected bear the brunt of the consequences while the corporation behind the system remains relatively untouched. This is where corporate accountability becomes not just a legal imperative but also an ethical one.
In many cases, companies have been slow to adopt robust AI governance frameworks, either because of a lack of awareness or because the pressure to innovate and outpace competitors outweighs a desire to protect users.
Another example is the use of AI in hiring. AI systems are increasingly being used to screen resumes and assess candidates, offering a promise of objectivity and efficiency. However, if these systems are not carefully designed and regularly audited, they can reinforce existing biases in hiring practices. A simple anecdote can demonstrate the danger: imagine a promising candidate who, on paper, should be hired but is overlooked by an AI system because of subtle biases embedded in its algorithms—biases that reflect historical prejudices or flawed training data. The person’s potential is disregarded, and the company’s ability to innovate and diversify suffers. This is not just a lost opportunity for the candidate, but a missed chance for the company to benefit from a broader pool of talent.
Today, such failures are no longer hypothetical. The key issue is that companies, in their race to develop and deploy these technologies, often fail to ensure that algorithms are thoroughly tested and regularly audited for fairness, transparency and accountability. When things go wrong, the consequences can be far-reaching, affecting not just individuals but entire communities.
This is where the importance of corporate accountability comes into play. AI governance should not be an afterthought or a checkbox on a compliance list. Instead, it should be an ongoing, proactive process that involves constant monitoring, testing and refinement of AI systems to ensure they are used ethically and fairly. Corporate leaders must recognise that they have a moral obligation to protect the public from the potential harms of AI systems.
Accountability in AI governance can take many forms. For example, companies can establish internal ethics committees to evaluate the ethical implications of their AI projects. They can also create transparency by making their decision-making processes and AI algorithms open to public scrutiny. Regular independent audits of AI systems, particularly those that impact vulnerable groups, can ensure that these technologies are not inadvertently causing harm. Furthermore, businesses must be willing to take responsibility when their AI systems violate human rights, by offering clear channels for redress and making amends when mistakes happen.
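As one illustration of what such transparency and redress channels might rest on, here is a minimal sketch of an append-only decision log that an independent auditor, or a wrongly affected applicant, could later inspect. The record fields, the model-version identifier and the file path are hypothetical; a real system would need far more, including secure storage and privacy safeguards.

```python
import json
import datetime
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One auditable record per automated decision.

    Field names are illustrative; the point is that every outcome can be
    traced to a model version and reviewed (or reversed) by a human later.
    """
    timestamp: str
    model_version: str
    applicant_id: str
    decision: str            # e.g. "approved" / "rejected"
    reviewed_by_human: bool = False

def log_decision(record, path="decision_log.jsonl"):
    # Append-only log so independent auditors can replay past decisions.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    model_version="credit-model-v3",   # hypothetical identifier
    applicant_id="A-10293",
    decision="rejected",
))
```

Keeping such a trail is what makes redress practical: an affected person can ask which model made the decision, and an auditor can check whether a human ever reviewed it.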
However, it should not be left to companies alone to effect change. As consumers, we all have a role to play in holding companies accountable. When we engage with businesses that rely heavily on AI, we should demand transparency and ethical standards that minimise harm. Whether by voting with our wallets or by pressuring companies to prioritise responsible AI practices, we can collectively drive change and encourage businesses to think more carefully about the long-term societal impacts of their innovations. Of course, governments must also act, putting strong, enforceable legislation in place to keep companies in check and shield consumers from the harms of unethical innovation.
Ultimately, corporate accountability in AI governance is not just about avoiding legal liability or damage to reputation; it is about ensuring that AI systems are used in ways that benefit all of society. By holding companies accountable for their AI systems, we can create a future where technology serves humanity in a way that is equitable, transparent and fair – instead of a world in which new AI systems benefit only a few, while harming many.
The road to effective AI governance will not be easy, but it is essential. As we continue to build and shape the future of AI, we must demand that those who create these systems do so with the utmost care, responsibility and ethical consideration.