G7 Competition Summit: AI Governance and Regulation
November 22, 2024
On October 3-4, 35 delegates from the G7 competition authorities (antitrust and government delegations from the European Commission and the Group's seven member countries – Canada, France, Germany, Japan, Italy, the United Kingdom and the United States) met in Rome to discuss artificial intelligence (AI) governance and regulation in the context of fair competition.
In this era of significant innovation, a balanced approach to competition regulation and enforcement is crucial to ensure fair market practices, protect consumer interests and foster innovation. The EU has taken significant steps with the Digital Markets Act (DMA) and Digital Services Act (DSA), targeting gatekeeper platforms and ensuring safer digital spaces. In the United States, antitrust enforcers have also closely scrutinized digital markets. Japan has implemented the Act on Improving Transparency and Fairness of Digital Platforms. Other non-EU G7 countries, like Canada and the UK, have also introduced measures to regulate digital markets, emphasizing the need for international cooperation in this domain. In this regard, similar rules shared worldwide would also greatly benefit companies that intend to launch their products globally or across different continents.
At this year's G7 summit, AI was a top priority due to the risks associated with the sector's rapid development: 'concentration of market power', 'fuelling algorithmic collusion' and the 'high barriers to entry for smaller firms and start-ups'. Regarding this third aspect, however, it is important to note that only certain stages of AI service development are affected by substantial resource demands. Smaller firms can mitigate entry barriers by purchasing access to computational resources ("compute" in AI vernacular) from hyperscalers. The rapid evolution of AI threatens to outpace traditional regulations, prompting the need for more adaptive, forward-thinking strategies. The goal is clear: prevent monopolistic dominance and market collusion while ensuring regulatory tools evolve swiftly to keep up with emerging technologies and hold major players accountable.
The Digital Competition Communiqué highlighted the increasing role of AI in society as well as the serious risks that the development of AI technologies could cause to competition. As a result, the G7 competition authorities tried to outline a pathway to be followed in the future to ensure fair competition amidst the current AI surge.
The first priority is the need to avoid monopolistic conditions, given the strong first-mover advantage in the sector. At the same time, concentrated control of crucial AI inputs has the potential to place a small number of firms in key market positions, creating high entry barriers through induced or natural bottlenecks.
To avoid these shortcomings and to establish a level playing field, the authorities support maintaining contestability in digital markets through data access and sharing. By encouraging data portability and interoperability, the G7 aims to foster a more dynamic and competitive digital ecosystem while avoiding algorithmic collusion.
The document also mentions factors that are crucial in the regulatory framework of AI, such as consumer protection, data and privacy protection and the central role of human innovation.
The Digital Competition Communiqué also identifies principles intended to effectively ensure fair competition in digital markets.
Businesses in the AI sector can expect heightened regulatory scrutiny, necessitating compliance with new and existing regulations to avoid penalties and maintain market position. For instance, practices that entrench the market position of established companies, such as access to specialized data or partnerships and agreements for the supply or co-design of chips and their programming models, could come under review by competition authorities, as could acquisitions of specialist firms in the AI stack, such as providers of pre-training datasets.
Despite the consensus on the need for international cooperation on AI policy development, the significant differences in regulatory approaches towards AI observed over the last two years are likely to persist or even widen, especially given the current geopolitical competition. The EU has prioritized AI ethics and data privacy, as exemplified by the EU's AI Act and the Ethics Guidelines for Trustworthy AI. In contrast, the United States has focused on fostering AI innovation and business development, as seen in President Biden's Executive Order. Similarly, China has pursued a strategy aimed at becoming the global leader in AI by 2030, with policies like the "New Generation Artificial Intelligence Development Plan." At the multilateral level, the Council of Europe has agreed the first global treaty addressing AI, with the participation of many non-European states; the Framework Convention on Artificial Intelligence was opened for signature on 5 September 2024. Further global regulatory developments at the multilateral level were agreed by the UN as part of the Global Digital Compact, and these initiatives will shape national policy and regulatory development.
Given these variations and the accelerating pace of national regulatory development, maintaining global business operations while staying compliant with policy changes that occur unevenly around the world is challenging. Companies must navigate a complex landscape of differing regulations, with new national approaches in the works in many parts of the world, which can lead to increased compliance costs and operational inefficiencies. However, the ability to identify and adapt to these regulations early provides a significant competitive advantage, allowing businesses to proactively adjust their strategies, ensuring compliance and minimizing disruptions.