Three Approaches to AI Governance

October 17, 2023

The race to regulate artificial intelligence (AI) is intensifying. Countries have adopted different approaches as they seek the right balance between safety and innovation: over-regulation could stifle innovation, while failing to set clear rules for the development and deployment of AI could harm both the economy and human rights. Despite this multiplicity of individual approaches, three potentially competing regulatory models have emerged: the market-driven approach, the state-driven approach and the rights-driven approach. Which approach prevails will have a profound effect on companies’ investment decisions.

Market-Driven Approach

The market-driven approach rests on the premise that markets provide the best incentives for innovation, technological progress and economic growth, and that state intervention only impedes such progress. The paragon of this approach is the United States, as Washington views AI as a potential source of economic, geopolitical and military supremacy. The United States has not introduced any substantial federal legislation, relying instead on voluntary standards and self-regulation. The Blueprint for an AI Bill of Rights, introduced by the White House in October 2022, serves solely as a handbook guiding AI developers and users on how to protect the rights of American citizens. However, the Biden Administration supplemented these guidelines with an executive order on October 30, 2023, intended to mitigate AI risks while capitalizing on the technology’s potential. The executive order outlined several new actions focused on areas such as safety, privacy and protecting workers, while promoting innovation and competition to ensure American leadership in AI. Because an executive order’s enforcement authority is limited, the order also called on Congress to pass bipartisan legislation to protect all Americans.

Ranking third in the world for private investment in AI in 2020, the UK has likewise adopted a light-touch regulatory approach based on sectoral regulation to avoid stifling technological progress and to promote innovation. The UK government published its National AI Strategy in September 2021, followed by an AI white paper in March 2023. The white paper proposed a flexible definition of AI systems and a principles-based framework for existing regulators, such as the Office of Communications (Ofcom) and the Competition and Markets Authority (CMA). However, the UK government has yet to propose any AI-specific legislation.

State-Driven Approach

The state-driven approach is based on a command-and-control model in which the state is the primary actor guiding economic policy, including the planning, development and regulation of technology and innovation. It prioritizes common prosperity under state guidance in order to maintain the state’s social, economic and political stability and security.

China has adopted this regulatory approach in line with its overarching state-led development model and as part of its strategy to strengthen the Chinese Communist Party’s (CCP) political control, particularly over the growing power of Chinese tech companies in areas such as generative AI, in order to ensure social stability and advance China’s position in the global technology competition. Accordingly, over the past two years China has adopted rules and regulations specifically targeting AI companies to ensure that the development of such technologies does not undermine the CCP’s control of China’s digital economy or the country’s political values.

Specifically, in 2022, the Chinese government introduced strict regulations on deepfake technologies and recommendation algorithms to control the flow of information. In 2023, the government issued draft regulations on generative AI that hold AI developers responsible for generated content that deviates from the CCP’s political values.

Rights-Driven Approach

The rights-driven approach focuses on the fundamental rights of users and citizens and on the risks AI technologies pose to those rights. On this view, innovation and technological progress should not come at the expense of citizens’ fundamental rights. Governments should therefore intervene to protect individuals’ rights, ensure that the economic gains generated by the adoption of AI are fairly distributed, and promote social peace and prosperity.

This approach is led by the EU, which has already applied it in several technology regulations: the General Data Protection Regulation (GDPR), aimed at protecting citizens’ data privacy; the Digital Markets Act (DMA), aimed at curbing tech giants’ potential dominance of digital markets; and the Digital Services Act (DSA), aimed at holding online platforms accountable for the content they host. The most comprehensive piece of legislation on AI, however, is the AI Act. This legislation addresses sensitive issues touching on fundamental rights and freedoms, such as predictive policing, facial recognition in public places and discrimination in the workplace and public office, and introduces substantial limits on the use of such technologies. It is expected to be finalized by the end of 2023 and to enter into force by 2026, making it the first comprehensive AI regulation in the world.

Additionally, Spain has announced the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will be the first AI regulatory body among EU Member States. The agency’s primary goal is to develop an “inclusive, sustainable and citizen-centered” AI by creating risk assessment protocols, auditing algorithms and data practices, and setting rules companies must follow in the development and deployment of AI systems.

In other regions, Brazil (with its draft AI law) and Canada (with the Artificial Intelligence and Data Act, AIDA, of 2022) have taken inspiration from the EU’s AI Act, aiming to set up comprehensive, cross-sectoral and risk-based regulatory frameworks that mitigate AI harms and promote responsible innovation. Both bills are still pending approval, and once passed, each will need a few years of supporting regulations before it can be enforced.

The regulatory landscape for AI is still taking shape, yet three major approaches are already competing for global dominance. Companies developing or deploying AI must closely monitor these evolving regulatory regimes in key markets, as the differences among them have profound implications for corporate strategy and operations. Proactive monitoring and strategic engagement in policymaking will be crucial for companies to inform their AI investment decisions and to shape favorable regulations that enable sustainable business growth.
