The governance of artificial intelligence is no longer speculative. Boards can no longer avoid defining what acceptable AI use looks like in their organisations.
At the global level, the OECD AI Principles provide the broad normative foundation, setting transparency, accountability, and human oversight as baseline expectations. Boards now need to treat these foundations as central to evaluating the risk/reward of AI adoption across their business.
Geography adds complexity. These principles now sit beneath regulatory regimes that diverge in form but converge in substance. The EU AI Act marks a turning point by tying enforceable obligations directly to market access, transforming AI governance from soft ethics into hard compliance.
The UK has chosen a regulator-led approach, empowering existing bodies such as the CMA, ICO, MHRA, and FCA to apply AI oversight within their domains. China has moved faster still, mandating algorithmic transparency and content watermarking to combat manipulation and tighten information control.
