Embracing AI governance

Arnaud Cave is a Director in the Corporate Governance & ESG team at FTI Consulting. He advises public and private companies on corporate governance and sustainability matters, with a focus on corporate reporting, stakeholder engagement, and the development of tailored governance frameworks.

Niamh O’Brien joined FTI Consulting following her Master’s in Technology Law, during which she specialised in AI, personalised pricing and data governance. Previously, she worked in policy and public affairs in the telecoms and technology sectors in London and Brussels.


Journal Issue April 2024

Arnaud Cave and Niamh O’Brien argue that a proactive approach to AI governance is non-negotiable for companies, and outline a framework for businesses to effectively manage the technology’s risks and opportunities.

AI holds immense potential to transform businesses and boost productivity through streamlined operations and personalised services. However, amidst the promise of AI, a pressing concern around governance has emerged.


As with all technology, companies must ensure their governance structures uphold ethical standards and mitigate risks, fostering long-term success and trust in their AI initiatives. Effective AI governance will be key to ensuring AI’s responsible use, maintaining public and investor confidence, and complying with new laws such as the EU AI Act. In this article, we provide practical advice and considerations for business leaders who are looking to develop a robust AI governance framework.


Navigating AI’s risks


Businesses must establish clear AI guidelines to govern the technology’s integration into operations and its use by the workforce. These guidelines will be an important safeguard against potential errors and cyber vulnerabilities that can reverberate through financial markets, spark legal disputes and tarnish corporate reputations.


AI-powered tools such as chatbots, for example, can inadvertently leak confidential data or provide inaccurate advice, leading to customer dissatisfaction and potential legal liability. Several recent legal cases have involved a company chatbot providing misleading information, resulting in litigation, regulatory repercussions and compensation.


To address these challenges, companies need comprehensive action plans covering a range of scenarios, including data privacy and security threats, bias mitigation and energy consumption concerns. Employees must act as AI’s stewards, understanding its risks and benefits, while senior leaders must adopt a holistic, multifaceted approach: maintaining an inventory of AI systems and their use cases, and assessing their implications for all stakeholders.
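As a purely illustrative sketch of what such an AI inventory might look like in practice, the Python structure below records each system alongside its use case, risk flags, affected stakeholders and accountable owner. All field names and the escalation rule are hypothetical assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for an AI system; the fields are
# illustrative assumptions, not drawn from any regulatory standard.
@dataclass
class AISystemRecord:
    name: str                                              # e.g. "Customer support chatbot"
    use_case: str                                          # business purpose of the system
    risk_flags: list[str] = field(default_factory=list)    # e.g. "data privacy", "bias"
    stakeholders: list[str] = field(default_factory=list)  # groups affected by the system
    owner: str = ""                                        # executive accountable for the system

# Example entry for a customer-facing chatbot
inventory = [
    AISystemRecord(
        name="Customer support chatbot",
        use_case="First-line handling of customer queries",
        risk_flags=["data privacy", "inaccurate advice"],
        stakeholders=["customers", "legal", "customer service teams"],
        owner="Chief Technology Officer",
    )
]

# Flag systems whose risk profile warrants escalation to senior leadership
for record in inventory:
    if "data privacy" in record.risk_flags:
        print(f"Escalate for review: {record.name} (owner: {record.owner})")
```

A register along these lines gives senior leaders a single view of where AI is used across the business and which systems carry risks that need escalation.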


Legal scrutiny of AI


As businesses embrace AI, they need proactive governance frameworks to avoid reputational damage, navigate escalating regulatory oversight and manage potential litigation. Under the EU AI Act, stringent compliance requirements await AI providers (developers) and users (deployers). Additionally, the threat of legal action by individuals or NGOs over flawed AI decisions looms large.


Disputes such as the Writers Guild of America and Screen Actors Guild strikes highlight concerns about AI plagiarising creative work, underlining the urgency of copyright protections and workers’ rights. Such disputes may set precedents for labour battles arising from AI-driven content. Copyright lawsuits are only increasing, so businesses should monitor how copyright applications are determined, and how EU legislative requirements change, to understand whether the current human-centric focus and compensation requirements are shifting.


Furthermore, although existing liability laws may address how AI’s integration into products affects liability for businesses, the lack of case law and of specific AI liability legislation may lead to corporate disputes over fault attribution between the companies deploying AI and those developing it.


Corporate governance must adapt to changing legal landscapes. For instance, New York City’s pioneering Local Law 144 of 2021 on automated employment decision tools requires employers to conduct, and publicly disclose the results of, annual bias audits of AI systems used in recruitment. Laws emerging in Europe are similarly giving workers greater protections, such as prohibiting solely automated dismissals and requiring transparency about AI’s involvement in hiring.


The imperative for transparent disclosures and audits in AI practices is growing. Companies must develop adaptable governance frameworks to meet evolving regulatory demands and stay ahead of legal challenges.


Investor demands


As regulations continue to evolve, investors will likely lead the charge in demanding robust governance structures for AI. Ensuring that businesses create value through AI while effectively managing the associated risks is paramount.


Governance expectations from the Chartered Governance Institute emphasise the importance of boards adopting coherent approaches to AI’s risks. Norges Bank Investment Management, which manages the Norwegian sovereign wealth fund with assets of more than $1.4trn, expects board accountability, risk management processes and transparency to be built into AI practices.


Investor engagement is also on the rise, led by the Collective Impact Coalition for Digital Inclusion. Representing more than $6.9trn in assets under management, the coalition pushes technology companies to disclose their ethical AI commitments. One of its members, Aviva, has outlined the potential to vote against management at AGMs if companies’ engagement on the topic falls short.


As a tool for escalation, shareholders have also started to file proposals focused on AI. In October 2023, an investment trust for union members, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), with assets exceeding $12bn, filed proposals at five entertainment companies pushing for transparency reports on their ethical AI guidelines. In February 2024, Legal & General Investment Management and abrdn, two major UK fund managers, announced support for the resolution filed at one of these companies. Notably, two influential proxy advisors, ISS and Glass Lewis, have recommended that shareholders vote for the AFL-CIO’s proposal in this instance.


While shareholder proposals are more prevalent in the US, demands for AI governance are universal, especially considering strict EU regulation.


Ensuring responsible AI governance


As regulatory scrutiny and investor interest rise, companies must address AI’s ethical and governance concerns to sustain growth. AI has therefore shifted from the domain of technology teams to the senior leadership agenda and organisational priorities, requiring transparent governance structures. FTI Consulting’s Responsible AI Governance Report offers a proactive approach to future-proofing corporate governance, strategy, risk management and reporting, which is summarised below.


To manage AI risks effectively, companies should consider the ESG concept of double materiality, assessing AI’s potential impacts on the company’s financial bottom line as well as on wider society. This is particularly relevant given the nature of AI’s risks, and because of the potential synergies with the EU’s new Corporate Sustainability Reporting Directive (CSRD), which requires companies to carry out double materiality assessments and disclose material risks.
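As a minimal sketch of how a double materiality screen might work, the function below treats a risk as material if it crosses a threshold on either the financial or the societal dimension. The 1–5 scoring scale and the threshold are illustrative assumptions, not CSRD requirements.

```python
# Hypothetical double materiality screen: a risk counts as material if it
# crosses an (assumed) threshold on either dimension.
def is_material(financial_impact: int, societal_impact: int, threshold: int = 3) -> bool:
    """Scores run from 1 to 5 on each dimension; the threshold is illustrative."""
    return financial_impact >= threshold or societal_impact >= threshold

# A chatbot leaking customer data scores highly on both dimensions
print(is_material(financial_impact=4, societal_impact=5))  # True
```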


However, a critical challenge for responsible AI governance lies in unclear decision-making and responsibility for AI policies, necessitating clear escalation paths to senior leadership and, ultimately, the board. Here again, leveraging existing ESG frameworks such as the Task Force on Climate-related Financial Disclosures (TCFD) or the Taskforce on Nature-related Financial Disclosures (TNFD) can help structure and communicate a business’s approach to managing AI risks. Investors are familiar with their four-pillar approach covering governance, strategy, risk management, and metrics and targets.


Businesses can outline their responsible AI policy within these pillars, specifying ethical commitments and guardrails for employees, and ensuring clear oversight. This requires decisions on which committees or individuals are responsible for AI, how they are kept informed, and how responsibilities for strategy, implementation, risks and operations are assigned to senior executives and teams. One possible allocation is sketched below.
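The mapping below is a hypothetical example of how such responsibilities might be laid out across the four pillars; the committee and role assignments are illustrative assumptions, not a recommended structure.

```python
# Illustrative allocation of AI governance responsibilities across the
# four TCFD-style pillars; all names and assignments are assumptions.
ai_governance_pillars = {
    "governance": {
        "oversight": "Board risk committee",
        "policy owner": "General counsel",
    },
    "strategy": {
        "owner": "Chief technology officer",
        "scope": "AI use cases and investment priorities",
    },
    "risk management": {
        "owner": "Head of data protection",
        "process": "AI inventory and impact assessments",
    },
    "metrics and targets": {
        "owner": "Head of sustainability",
        "disclosure": "Periodic implementation reporting",
    },
}

for pillar, assignments in ai_governance_pillars.items():
    print(f"{pillar}: {assignments}")
```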


An integrated approach, supported by a steering committee that includes the head of AI or the chief technology officer, the general counsel, the chief people officer, the head of data protection and the head of sustainability, may help companies assess AI’s opportunities and risks holistically.


Relevant knowledge and expertise among the board, executives and employees, engagement with external stakeholders and alignment with diversity, equity, and inclusion frameworks are essential for responsible AI governance.


Building the AI governance foundations


Companies should tailor their governance process to their own circumstances, but common principles exist that act as a foundation for responsible AI. These encompass data governance and protection, traceability, explainability, accountability, AI literacy, accuracy, fairness, transparency, security, safety, and contestability. These principles are common features of global frameworks such as the EU AI Act and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework.
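To make these principles operational, a simple checklist such as the sketch below could score an AI system against each one and flag shortfalls for remediation. The principle names come from the paragraph above; the 1–5 scoring scheme and the threshold are illustrative assumptions.

```python
# Common responsible AI principles, as listed above.
PRINCIPLES = [
    "data governance and protection", "traceability", "explainability",
    "accountability", "AI literacy", "accuracy", "fairness",
    "transparency", "security", "safety", "contestability",
]

def flag_shortfalls(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the principles on which a system scores below an (assumed)
    minimum; scores run from 1 to 5 and unscored principles count as 0."""
    return [p for p in PRINCIPLES if scores.get(p, 0) < threshold]

# Illustrative assessment of a hypothetical recruitment screening tool
scores = {p: 4 for p in PRINCIPLES}
scores["explainability"] = 2   # individual decisions are hard to explain
scores["contestability"] = 1   # no appeal route for affected candidates

print(flag_shortfalls(scores))  # ['explainability', 'contestability']
```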


Building a commitment to responsible AI entails developing and disclosing a comprehensive policy covering these principles and the four governance pillars mentioned above. This policy should be effectively communicated internally to create awareness across the organisation, supplemented by adequate training where relevant.


The public disclosure of the AI policy should be supplemented by periodic implementation reports, integrated into annual or sustainability reports, to facilitate constructive investor engagement. Third-party audits of AI systems, and public disclosure of their outcomes, will further strengthen the company’s commitment to ethical AI.


Finally, sharing industry best practice fosters ongoing improvement in policies and processes, ensuring a safer AI environment for all. These steps form an iterative loop for continuous improvement and adaptation to technological advancements and regulatory changes.


FTI Consulting’s Report on Responsible AI Governance is available here.
