Strategies for adapting to changing consumer expectations are not the only lesson the Artificial Intelligence (Fourth Industrial) Revolution should borrow from the Telegraph Revolution. Two of the key challenges that confronted the telegraph era remain just as pressing today as companies and governments grapple with the global governance of artificial intelligence.
First, both transformative technologies raise questions about competing interests because they touch almost every aspect of society. Second, balancing those interests becomes exponentially more difficult once we consider the global scale at which these questions arise. Weighing the answers to the questions below, along with input from AI governance expert Mark Esposito, it becomes clear that polycentric (multilevel) governance is the appropriate framework for global AI governance.
What are the competing interests or values in the global governance of artificial intelligence?
While the telegraph raised questions about freedom of expression versus corporate control, AI pits security against privacy. Advanced AI pattern detection improves the accuracy and efficiency of surveillance programs, but its invasiveness often encroaches on civil liberties and personal freedoms.
So how will we assess the significance of these competing interests, and who will we rely on to do so? One way to balance them is to apply an informal version of the proportionality doctrine, which weighs the extent to which each interest is affected against the net difference a proposed measure or technology will make to society. Although proportionality review is typically carried out by the judiciary, an informal version should draw on input from many different stakeholders with different perspectives on the technology and its impact. For example, civil society groups can give voice to citizens who would otherwise go unheard by highlighting threats to their civil liberties, while intelligence experts and military leaders can speak to genuine national security threats in order to justify proposed measures.
Global Governance of Artificial Intelligence: How do we conduct this analysis through a global lens?
Managing these interests is difficult enough when examining the impact of AI on a single country, but balancing them across the world's roughly 5,000 ethnic groups and 4,000 religions spread over 195 countries is daunting. It is impossible to balance these interests uniformly: every nation, ethnicity, and religion interprets the appropriate levels of government interference and personal freedom differently. For example, the collectivist mindset of a social democracy might accept the added intrusion of enhanced surveillance methods, while the individualist mindset of a liberal democracy, which places a higher premium on civil liberties, might lead citizens to reject them.
So how do we standardize certain AI governance principles while also accounting for regional and national differences? The answer: a decentralized system that allows specific regions and groups to determine their own calibration between competing interests. When faced with questions about its members' efforts to maintain national sovereignty over the languages used for communications, the ITU recommended internationally recognized languages for certain communications to ensure clarity in cross-border messaging, while encouraging governments to use their own languages for domestic communications. Similarly, in the context of AI, we need to establish universal principles of AI ethics (such as transparency, fairness, accountability, and safety) while allowing countries to tailor the application of these principles to their unique cultural, political, and economic contexts.
Why is polycentric governance the right answer for global governance of artificial intelligence?
Unlike centralized governance frameworks in which power is concentrated and delegated to a few decision makers, polycentric systems are decentralized to accommodate country-specific value systems. Mark Esposito, an expert on global artificial intelligence governance who holds appointments at Hult International Business School and Harvard University (disclosure: I am also a professor at Hult), explains more specifically: “Elinor Ostrom’s eight principles for polycentric governance are of vital importance. They provide a structure to balance global collaboration with local autonomy, ensuring that AI’s transformative potential is harnessed responsibly. By implementing clear boundaries, collective choice arrangements and conflict resolution mechanisms, we can address the complex, often conflicting interests and values inherent in AI’s global impact.”