
Spotlight on AI Regulation—With a Focus on How Antitrust Agencies Say They Are Watching the Space

January 18, 2024

A flurry of activity

The last few weeks of 2023 saw a flurry of activity around the world in relation to the regulation of artificial intelligence (AI). This is occurring against a backdrop of differing political approaches to AI regulation within and between the United States, the EU and China. Meetings of various groups of countries and other stakeholders were convened; guiding principles, codes of conduct and declarations were issued; and some countries took unilateral action. In the United States, President Biden issued an executive order devoted to AI, and in the EU, the AI Act continued its legislative journey towards adoption. Antitrust and competition agency officials also articulated the issues they believe AI can raise and began to take action in the AI space. This activity is likely to continue in 2024.

In this blog, APCO shines a spotlight on some of these recent policy, legislative and enforcement developments. They are relevant not only to the businesses driving the development of AI but also to organizations more broadly, within and beyond the tech sector. AI already touches numerous aspects of our activities, and this is only likely to accelerate.

The G7 agrees on International Guiding Principles

On October 30, 2023, the leaders of the G7 countries—Canada, France, Germany, Italy, Japan, the UK and the United States—agreed on International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under Japan’s presidency of the G7. The G7 leaders stated:

“We believe that our joint efforts … will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximize the benefits of the technology while mitigating its risks, for the common good worldwide, including in developing and emerging economies with a view to closing digital divides and achieving digital inclusion.”

President Biden issues an Executive Order

Also on October 30, 2023, President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Prior Biden Administration actions in relation to AI include work that led to “voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI” and a Blueprint for an AI Bill of Rights. In the current Congressional session, hearings are being held and several draft bills, some bipartisan, have been introduced, but it is uncertain whether any will pass into law before the session ends.

The 63-page Executive Order sets out a “Federal Government-wide,” multi-agency framework for addressing the benefits and risks related to AI development. Many executive departments and agencies are directed to take actions within specified periods in the coming months (i.e., ahead of the Presidential election on November 5, 2024). The Order anticipates a wide-ranging set of guiding principles, priorities, reports and other actions for AI policy.

Setting the context, the Order states:

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Specifically in relation to competition, the Order states:

“The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation” (see also Spotlight on antitrust below).

Other aspects of the Order include:

  • Consumer protection and safeguards against fraud, unintended bias, discrimination, and infringements on privacy: “The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.”
  • A call for Congress to pass data privacy legislation and for Federal agencies to assess how they collect and use “commercially available information.”
  • United States to lead global initiatives: “The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change. This leadership is not measured solely by the technological advancements our country makes. Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly—and building and promoting those safeguards with the rest of the world.”

Twenty-eight countries issue the Bletchley Declaration

On November 1, 2023, the Bletchley Declaration was issued by the 28 countries from around the world (including the United States, China and the UK) and the EU that attended the AI Safety Summit at Bletchley Park in the UK (Bletchley Park is notable for having been the top-secret base of Alan Turing and the other World War Two codebreakers).

The Declaration affirms that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” It recognizes that “AI also poses significant risks, including in [the] domains of daily life” and “the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content.”

The Declaration notes that “many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation” and it “recognize[s] that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits and takes into account the risks associated with AI.”

Several countries announced the establishment of national AI safety institutes around the summit; no overall regulatory framework or organization resulted.

A virtual summit will be hosted by South Korea in six months’ time and a second full summit by France in a year’s time.

EU welcomes G7 initiatives and reaches provisional agreement on its AI Act

Welcoming the G7 statement (see above), European Commission President Ursula von der Leyen said:

“The potential benefits of Artificial Intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. Already a regulatory frontrunner with the AI act, the EU is also contributing to AI guardrails and governance at global level. I am pleased to welcome the G7 International Guiding Principles and the voluntary Code of Conduct, reflecting EU values to promote trustworthy AI.”

On December 9, 2023, the European Parliament and Council reached provisional agreement on the EU’s AI Act. President von der Leyen described the Act as a “global first”; European Commissioner Thierry Breton proclaimed it “historic”; and the EU Council announced:

“As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation in the world stage.”

With Spain holding the presidency of the EU Council, the Spanish secretary of state for digitalisation and artificial intelligence, Carme Artigas, said:

“…in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”

Despite the provisional agreement, the AI Act remains controversial. For example, European Commission Executive Vice President Margrethe Vestager has said the AI Act will create “legal certainty” and “not harm innovation and research, but actually enhance it.” French President Emmanuel Macron has expressed a contrary view: “we can decide to regulate much faster and stronger than our major competitors [the United States and China]. But we will regulate things that we will no longer produce or invent. This is never a good idea.”

Political agreement at EU and Member State levels still needs to be reached before the AI Act can be finalized.

The final text of the AI Act has not yet been published, but the main elements appear to include:

  • A risk-based approach classifying AI systems into four different risk categories depending on their use cases: unacceptable-risk; high-risk; limited-risk; and minimal/no-risk.
  • AI systems that create an unacceptable risk, contravening EU values and considered to be a clear threat to fundamental rights, will be banned in the EU. These include biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; social scoring based on social behavior or personal characteristics; AI systems that manipulate human behavior to circumvent their free will; AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation); and some applications of predictive policing.
  • Certain AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law” will be classified as high-risk. AI systems classified as high-risk will be subject to mandatory compliance obligations.
  • AI systems classified as limited-risk, including chatbots, certain emotion recognition and biometric categorization systems, and systems generating deep fakes, will be subject to transparency obligations.
  • Other AI systems not falling under one of the main risk categories are classified as minimal/no-risk. Voluntary codes of conduct are encouraged.
  • Regulation of general-purpose AI (GPAI) systems/models distinguishes between transparency obligations that apply to all GPAI models and more stringent obligations for GPAI models with systemic risk.

The AI Act’s enforcement will likely be overseen at EU level by a new body within the European Commission—the European AI Office—with administrative, standard-setting and enforcement responsibilities to ensure coordination across the EU. It will be supported by a panel of independent scientific experts, an AI Board and an AI Advisory Forum. In addition, Member States will designate national authorities to supervise the AI Act’s application and implementation, and market surveillance activities, at national level.

Once the final text is agreed, the AI Act will need to be approved by the Parliament and Council. The current legislative mandate ends in April 2024 ahead of the European Parliamentary elections in June 2024. The majority of the AI Act’s provisions would apply two years after the Act’s entry into force with the prohibitions on “unacceptable risk” AI systems and the requirements for “high risk” systems coming into force in stages before then.

Other jurisdictions are also taking action

Some jurisdictions have already adopted AI legislation (notably China), are planning to do so (e.g., Brazil and Canada) or are drafting non-binding guidelines or frameworks (e.g., Australia, India, Japan and South Korea). Other jurisdictions also have AI regulation on their agendas, including the UK, where the government published a white paper, “AI Regulation: a pro-innovation approach,” on March 29, 2023, and the Competition and Markets Authority published “proposed principles which aim to ensure consumer protection and healthy competition” on September 18, 2023, following its initial review of foundation models.

Indeed, the OECD’s database lists some 69 countries with over 1,000 AI policy initiatives.

Spotlight on antitrust

Several of the initiatives outlined above recognize that antitrust (or competition) law can be a tool for AI regulation, although other laws also have important roles to play in this respect. For example:

President Biden’s Executive Order expresses concern about the potential for (further) consolidation and reduced competition in AI-related markets and the potential to harm competition in other markets. This “requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors” and “harness[ing] the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs.”

The Order encourages the Department of Justice (DoJ) and the Federal Trade Commission (FTC) “to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI” and the FTC to exercise its rulemaking authority in this respect. This is consistent with the two agencies’ current approach to enforcement, as is the reference to “workers” as well as “consumers.”

DoJ and FTC

DoJ Assistant Attorney General for Antitrust Jonathan Kanter, FTC Chair Lina Khan and other DoJ and FTC officials have spoken about AI and antitrust in the past few weeks. For example:

  • Khan has stressed that AI is not exempt from the antitrust and consumer protection laws the FTC enforces: “As companies race to deploy and monetize AI, the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success—without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market.”
  • Key inputs held by “a handful of powerful businesses” are areas of “risk”: Khan has identified the “necessary raw materials” of “vast stores of data,” cloud services and computing power held by “a handful of powerful businesses” as areas of “risk” and has suggested that “dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance.” The FTC has also noted that “… even with responsible data collection practices in place, companies’ control over data may also create barriers to entry or expansion that prevent fair competition from fully flourishing” and the DoJ has highlighted that switching costs between cloud providers can raise concerns about “entry and lock in” that “loom large.”
  • Availability of server chips: DoJ and FTC have drawn attention to the “highly concentrated” nature of some markets for specialized chips against a background that “increasing demand for server chips may outpace supply in some instances” and the risk that “firms in highly concentrated markets are more prone to engage in unfair methods of competition or other antitrust law violations.”
  • Availability of engineering experience and professional talent: the FTC has noted that “[s]ince requisite talent is scarce, powerful companies may be incentivized to lock in workers and thereby stifle competition from actual or would-be rivals. To ensure a competitive and innovative marketplace, it is critical that talented individuals with innovative ideas be permitted to move freely, and, crucially, not be hindered by non-competes.”
  • Closing “open-source generative AI”: the FTC has said: “Experience has…shown how firms can use ‘open first, closed later’ tactics in ways that undermine long-term competition. Firms that initially use open-source to draw business, establish steady streams of data, and accrue scale advantages can later close off their ecosystem to lock-in customers and lock-out competition.”
  • Collusion and unlawful price discrimination: DoJ and FTC officials have spoken of the risk that AI facilitates collusive behavior, including information sharing, that inflates prices or results in unlawful targeted price discrimination. FTC Commissioner Rebecca Kelly Slaughter has also drawn attention to the impact that generative AI will increasingly have beyond the tech sector: “We hope that more companies will be using data-driven processes and systems and so more and more markets will become important from a digital perspective.”
  • Anti-competitive M&A: Both agencies have also focused on the risk of anti-competitive M&A activity in the AI space from the consolidation of “market power in the hands of a few players” or the acquisition of critical inputs or applications by “large firms” giving them the incentive or ability to cut off rivals’ access to such inputs or applications. Acquiring “complementary applications and bundl[ing] them together” or buying “nascent rivals instead of trying to out-compete them by offering better products or services” has also been spotlighted by the agencies.
  • Enhanced agency resources: both agencies have enhanced their AI expertise to support their investigations—the DoJ’s Antitrust Division with the appointment of its first Chief Technologist and the FTC with its Office of Technology.
  • International cooperation: Kanter has drawn attention to the role of international cooperation among antitrust agencies. “Consider artificial intelligence. While this technology holds boundless potential, it’s sure to have huge competitive impacts. These risks transcend borders. So we’re engaging with our international colleagues to exchange knowledge about this rapidly developing area and its effect on competition law and policy.”

Senior officials from the antitrust and competition agencies of the G7 countries held their annual meeting on November 8, 2023. They discussed antitrust concerns in the digital economy with a focus on generative AI and (for the first time) issued a joint statement highlighting their concerns about the new technology.

The statement warns that “anticompetitive mergers or exclusionary conduct” can tip rapidly developing areas such as AI; notes that algorithm-based digital cartels can involve unlawful collusion or price manipulation; calls for the use of horizon-scanning tools such as research, market surveys and stakeholder engagement; and encourages agencies to address competition concerns at an early stage because “[i]naction can be especially costly in these markets because consolidated power can stifle the rate and distort the path of innovation.”

Echoing the DoJ and FTC, the G7 statement notes that:

“massive amounts of data are necessary to train generative AI models… Significant computational resources such as cloud computing services and large-scale computing power also are critical. An inability to access these key inputs may inhibit competition to develop AI and AI applications, reducing innovation and harming consumers.”

It continues:

“Incumbent tech firms that control these key AI inputs or adjacent markets could harm rivals with anticompetitive conduct such as bundling, tying, exclusive dealing, or self-preferencing” and notes that “[i]ncumbents could also use acquisitions or partnerships to facilitate such conduct or to further entrench existing positions of market power or create new ones.”

It concludes by recognizing the importance of internal cooperation (i.e., relevant government departments, authorities and regulators within a jurisdiction considering the role of effective competition alongside other issues such as consumer protection, data privacy and cyber security and working closely with each other to address “systemic issues in consistent and effective ways”), as well as international cooperation among agencies in different jurisdictions.

European Commission

Director General of DG Competition Olivier Guersent spoke in similar terms about AI on November 8, 2023. He said:

“AI is likely to bring many benefits and create new opportunities. However, it may also raise challenges related to biases, fairness, privacy, security, accountability and transparency.”

He described the AI Act proposal (see above) as “a new package to boost investment in AI and regulate it.”

Like the DoJ and FTC, Guersent said AI may also raise competition concerns:

“First, AI may facilitate collusion between algorithms or make it more difficult for competition authorities to detect them… Second, the AI sector itself may raise competition concerns. Based on our experience in digital markets, anti-competitive strategies and a “winner-takes-all” outcome cannot be excluded. This is because AI systems rely on vast amounts of computing power and data. Companies that have access to cloud services’ facilities and vast amounts of data—or to unique data sets—may be incentivized to favour their own AI systems.”

He continued:

“The Commission has an important role to play in ensuring that AI remains innovation-intensive, and that consumers and businesses have a broad choice of AI systems”

and put down a marker about potential EC intervention:

“We stand ready to address competition concerns through: antitrust, if these competition concerns materialize; merger control, if companies engage in “killer acquisitions”; and the Digital Markets Act, where we have the possibility to add new core platform services like AI systems, if warranted.”

Keep a close watch on this space

The pace of technological development in the AI space shows no signs of slowing. It remains to be seen what role regulation will play. The current array of somewhat fragmented policy, legislative and other initiatives around the world may be starting points, but agreement on appropriate next steps, nationally, let alone internationally, still lies ahead. APCO will be keeping a close watch on these developments during 2024 to help clients achieve the outcomes that further their business objectives.
