Spotlight on AI Regulation—With a Focus on How Antitrust Agencies Say They Are Watching the Space
January 18, 2024
The last few weeks of 2023 saw a flurry of activity around the world in relation to the regulation of artificial intelligence (AI), against a backdrop of differing political approaches to AI regulation within and between the United States, the EU and China. Meetings of various groups of countries and other stakeholders were convened; guiding principles, codes of conduct and declarations were issued; and some countries took unilateral action. In the United States, President Biden issued an executive order devoted to AI, and in the EU, the AI Act continued its legislative journey towards adoption. Antitrust and competition law agency officials also articulated the issues they believe AI can give rise to and began to take action in the AI space. This activity is likely to continue in 2024.
In this blog, APCO shines a spotlight on some of these recent policy, legislative and enforcement developments. They are relevant not only to the businesses driving the development of AI but also more broadly within and beyond the tech sector: AI already touches numerous aspects of our activities, and this is only likely to accelerate.
On October 30, 2023, the leaders of the G7 countries—Canada, France, Germany, Italy, Japan, the UK and the United States—agreed on International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under Japan’s presidency of the G7. The G7 leaders stated:
“We believe that our joint efforts … will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximize the benefits of the technology while mitigating its risks, for the common good worldwide, including in developing and emerging economies with a view to closing digital divides and achieving digital inclusion.”
Also on October 30, 2023, President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Prior Biden Administration actions in relation to AI include work that led to “voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI” and a Blueprint for an AI Bill of Rights. In the current Congressional session, hearings are being held and several draft bills, some bipartisan, have been introduced, but it is uncertain whether any will pass into law.
The 63-page Executive Order sets out a “Federal Government-wide,” multi-agency framework for addressing the benefits and risks related to AI development. Many executive departments and agencies are directed to take actions within specified periods in the coming months (i.e., ahead of the Presidential election on November 5, 2024). The Order anticipates a wide-ranging set of guiding principles, priorities, reports and other actions for AI policy.
Setting the context, the Order states:
“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”
Specifically in relation to competition, the Order states:
“The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation” (see also Spotlight on antitrust below).
Beyond competition, the Order also covers many other aspects of AI policy.
On November 1, 2023, the Bletchley Declaration was issued by the 28 countries from around the world (including the United States, China and the UK) and the EU that attended the AI Safety Summit at Bletchley Park in the UK (Bletchley Park is notable for having been the top-secret location of Alan Turing and other World War Two codebreakers).
The Declaration affirms that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” It recognizes that “AI also poses significant risks, including in [the] domains of daily life” and “the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content.”
The Declaration notes that “many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation” and it “recognize[s] that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits and takes into account the risks associated with AI.”
Several countries announced the establishment of national AI safety institutes around the time of the summit, but no overall regulatory framework or organization resulted.
A virtual summit will be hosted by South Korea in six months’ time and a second full summit by France in a year’s time.
Welcoming the G7 statement (see above), European Commission President Ursula von der Leyen said:
“The potential benefits of Artificial Intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. Already a regulatory frontrunner with the AI act, the EU is also contributing to AI guardrails and governance at global level. I am pleased to welcome the G7 International Guiding Principles and the voluntary Code of Conduct, reflecting EU values to promote trustworthy AI.”
On December 9, 2023, the European Parliament and Council reached provisional agreement on the EU’s AI Act. President von der Leyen described the Act as a “global first”; European Commissioner Thierry Breton proclaimed it “historic”; and the EU Council announced:
“As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation in the world stage.”
With Spain holding the presidency of the EU Council, the Spanish secretary of state for digitalisation and artificial intelligence, Carme Artigas, said:
“…in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”
Despite the provisional agreement, the AI Act remains controversial. For example, European Commission Executive Vice President Margrethe Vestager has said the AI Act will create “legal certainty” and “not harm innovation and research, but actually enhance it.” French President Emmanuel Macron has expressed a contrary view: “we can decide to regulate much faster and stronger than our major competitors [the United States and China]. But we will regulate things that we will no longer produce or invent. This is never a good idea.”
Political agreement at EU and Member State levels still needs to be reached before the AI Act can be finalized.
The final text of the AI Act has not yet been published, but its main elements are becoming clear.
The AI Act’s enforcement will likely be overseen at EU level by a new body within the European Commission—the European AI Office—with administrative, standard-setting and enforcement responsibilities to ensure coordinated application. It will be supported by a panel of independent scientific experts, an AI Board and an AI Advisory Forum. In addition, Member States will designate national authorities to supervise the AI Act’s application, implementation and market surveillance activities at national level.
Once the final text is agreed, the AI Act will need to be approved by the Parliament and Council. The current legislative mandate ends in April 2024 ahead of the European Parliamentary elections in June 2024. The majority of the AI Act’s provisions would apply two years after the Act’s entry into force with the prohibitions on “unacceptable risk” AI systems and the requirements for “high risk” systems coming into force in stages before then.
Some jurisdictions have already adopted AI legislation—notably China—or are planning to do so (e.g., Brazil and Canada) or are drafting non-binding guidelines or frameworks (e.g., Australia, India, Japan and South Korea). Other jurisdictions also have AI regulation on their agendas, including the UK, where the government published a white paper, “AI Regulation: a pro-innovation approach,” on March 29, 2023, and the Competition and Markets Authority published “proposed principles which aim to ensure consumer protection and healthy competition” on September 18, 2023, following its initial review of foundation models.
Indeed, the OECD’s database lists some 69 countries with over 1,000 AI policy initiatives.
Several of the initiatives outlined above recognize that antitrust (or competition) law can be a tool for AI regulation, although other laws also have important roles to play in this respect. For example:
President Biden’s Executive Order expresses concern about the potential for (further) consolidation and reduced competition in AI-related markets and the potential to harm competition in other markets. This “requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors” and “harness[ing] the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs.”
The Order encourages the Department of Justice (DoJ) and the Federal Trade Commission (FTC) “to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI” and encourages the FTC to exercise its rulemaking authority in this respect. This is consistent with the two agencies’ current approach to enforcement, as is the reference to “workers” as well as “consumers.”
DoJ and FTC
DoJ Assistant Attorney General for Antitrust Jonathan Kanter, FTC Chair Lina Khan and other DoJ and FTC officials have all spoken about AI and antitrust in the past few weeks, signaling that both agencies are watching the space closely.
G7 Competition Authorities
Senior officials from the antitrust and competition agencies of the G7 countries held their annual meeting on November 8, 2023. They discussed antitrust concerns in the digital economy, with a focus on generative AI, and (for the first time) issued a joint statement highlighting their concerns about the new technology.
The statement warns that “anticompetitive mergers or exclusionary conduct” can tip rapidly developing markets such as AI; notes that algorithm-based digital cartels can involve unlawful collusion or price manipulation; calls for the use of horizon-scanning tools such as research, market surveys and stakeholder engagement; and encourages agencies to address competition concerns at an early stage because “[i]naction can be especially costly in these markets because consolidated power can stifle the rate and distort the path of innovation.”
Echoing the DoJ and FTC, the G7 statement also notes that:
“massive amounts of data are necessary to train generative AI models… Significant computational resources such as cloud computing services and large-scale computing power also are critical. An inability to access these key inputs may inhibit competition to develop AI and AI applications, reducing innovation and harming consumers.”
It continues:
“Incumbent tech firms that control these key AI inputs or adjacent markets could harm rivals with anticompetitive conduct such as bundling, tying, exclusive dealing, or self-preferencing” and notes that “[i]ncumbents could also use acquisitions or partnerships to facilitate such conduct or to further entrench existing positions of market power or create new ones.”
It concludes by recognizing the importance of internal cooperation (i.e., relevant government departments, authorities and regulators within a jurisdiction considering the role of effective competition alongside other issues, such as consumer protection, data privacy and cybersecurity, and working closely with each other to address “systemic issues in consistent and effective ways”), as well as international cooperation among agencies in different jurisdictions.
European Commission
Director-General of DG Competition Olivier Guersent spoke in similar terms about AI on November 8, 2023. He said:
“AI is likely to bring many benefits and create new opportunities. However, it may also raise challenges related to biases, fairness, privacy, security, accountability and transparency.”
He described the AI Act proposal (see above) as “a new package to boost investment in AI and regulate it.”
Like the DoJ and FTC, Guersent said AI may also raise competition concerns:
“First, AI may facilitate collusion between algorithms or make it more difficult for competition authorities to detect them… Second, the AI sector itself may raise competition concerns. Based on our experience in digital markets, anti-competitive strategies and a “winner-takes-all” outcome cannot be excluded. This is because AI systems rely on vast amounts of computing power and data. Companies that have access to cloud services’ facilities and vast amounts of data—or to unique data sets—may be incentivized to favour their own AI systems.”
He continued:
“The Commission has an important role to play in ensuring that AI remains innovation-intensive, and that consumers and businesses have a broad choice of AI systems”
and put down a marker about potential EC intervention:
“We stand ready to address competition concerns through: antitrust, if these competition concerns materialize; merger control, if companies engage in “killer acquisitions”; and the Digital Markets Act, where we have the possibility to add new core platform services like AI systems, if warranted.”
The pace of technological development in the AI space shows no signs of slowing. It remains to be seen what role regulation will play. The current array of somewhat fragmented policy, legislative and other initiatives around the world may be starting points, but agreement on appropriate next steps, nationally let alone internationally, still lies ahead. APCO will be keeping a close watch on these developments during 2024 to help clients achieve the outcomes that further their business objectives.