
Artificial Intelligence and the Global Policy Landscape

Discussions around artificial intelligence and its potential impact are becoming increasingly common in corporate boardrooms around the world. Global companies in all sectors are watching advances in AI closely to see how the technology can be used to drive efficiency and improve business outcomes.

However, as is the case with every innovative technological advancement, companies need to be aware of the policy landscape and understand how governments around the world are dealing with AI. APCO’s team of global experts took a closer look at the current state of AI policy in key markets around the world, including Europe, the United States, China and Southeast Asia.


Europe in the Quest for the Gold Standard in Artificial Intelligence Regulation

By: Aurélien Maehl, Brussels

Ursula von der Leyen, the President-elect of the European Commission, surprised observers when she announced plans to introduce “legislation for a coordinated approach on the human and ethical implications of artificial intelligence (AI)” during her first 100 days in office. Observers were quick to wonder what exactly such legislation would regulate, given the diversity of AI applications and the existing privacy, liability and security framework.

Towards technological sovereignty

As a German “social” Christian-democrat who is a close ally of Chancellor Merkel and is supported by President Macron, Ursula von der Leyen may represent the comeback of a “continental” perspective in a post-Brexit Europe. She will surely seek to implement an agenda that has already been forming in front of our eyes: achieving Europe’s “technological sovereignty” by strengthening its industrial base, supporting global European players in key industries and positioning the EU as a standard setter.

Europe as a standard setter?

The General Data Protection Regulation (GDPR) is often regarded as the gold standard for how the EU can lead efforts in setting international norms. Europe’s activism in framing ethical principles for AI signals an ambition to replicate that effort, on the premise that the technology requires a reevaluation of existing views on principles of equality, human dignity, privacy and safety in the digital age. The risk-based approach of the “Policy and investment recommendations for trustworthy AI,” drafted by a Commission-mandated expert group, may serve as a blueprint for such efforts.

Von der Leyen’s legislation will likely focus on the most problematic, user-facing AI applications, such as facial recognition and surveillance, extending the impact assessment obligations under the GDPR. Additional rules for such AI applications may cover transparency and explainability, as well as human oversight. Exactly what the additional rules will cover remains to be seen, as the group of EU data protection regulators has already started to address the issue, proposing stricter oversight of facial recognition under the existing GDPR framework.

AI will also be analyzed from other angles. The Commission has already announced its intention to examine the impact of automated online content filtering systems on free speech and consumer rights, and may further assess the need to strengthen the transparency of online platforms’ algorithms.

Building leadership

The EU knows, however, that leadership can’t be decreed: it stems from a powerful industrial base. Besides investment plans, the EU also considers diverse cloud and software markets that are open to new European entrants to be a prerequisite for a competitive industry. We can therefore expect additional measures on data interoperability, data sharing and a revision of competition law to acknowledge the power of data.

As participants in our recent panel on Industrial AI in Brussels noted, manufacturing is where Europe has the best chance of sustaining a global competitive edge. Over the next few months, we’ll continue exploring how AI is already changing industries in Europe and online, as well as what regulation could look like.

Artificial Intelligence in the United States: The Rocky Quest to Remain on Top

By: APCO alumna Kelsey Harclerode, Washington

While it’s tempting to lump artificial intelligence in with the barrage of tech buzzwords, AI is not just here to stay: it’s here to totally transform our lives. In fact, we are now in the midst of a transition from nascent AI investments to transformative AI deployments in the United States and abroad.

Promoting Innovation. . .

To shepherd the transition from investment to deployment, the Trump Administration has taken several steps to bolster the United States’ AI strategy, including holding an AI summit with tech companies in 2018 and expanding an Obama-era national R&D plan via executive order in 2019. Underpinning this federal strategy is a strong desire to protect the United States’ status as a leader in AI through substantial financial commitments and removal of regulatory barriers to allow private and public use of AI to surge.

Although there is a lot up in the air for 2020, no matter the administration, we expect executive agencies to continue down this path.

. . . But Staying Accountable

While recent executive-level initiatives have primarily focused on promoting AI innovation, we’ve seen a more concerted effort from city, state and federal lawmakers to pass accountability measures and rein in the riskier aspects of this technology. Notable examples include San Francisco’s ban on the use of facial recognition software by police and other government agencies, Illinois’s restrictions on employers’ use of AI interview bots during hiring, and current debates underway in the United States Congress on whether to require organizations using AI systems to conduct audits for bias and discrimination.

Fueling this effort to rein in AI is a diverse set of experts examining the potential impact of this technology. Groups like Upturn and Harvard University’s Berkman Klein Center are exploring possible methods to ensure AI-based systems are not used as tools of inequality or bias. The debate around AI ethics will only become more expansive, and potentially more contentious, covering what principles should be considered, who should make those considerations and how those considerations should be applied.

What’s Next?

As companies and organizations look to harness the power of AI technology or otherwise engage in this space, they must stay up to date on this evolving landscape. We recommend:

  • surveying existing incentive programs for AI innovation;
  • monitoring policy developments at the local, state and federal levels and developing a government relations strategy for engagement;
  • engaging proactively and transparently with experts on AI ethics; and
  • applying AI ethics principles at the outset of your development, not retroactively.

China Joins International Efforts to Govern AI Ethics

By: Caroline Meinhardt, Beijing

Ethics has recently become a major priority area of artificial intelligence (AI) discussions in China. This may come as a surprise to many, since most international discussions of China’s AI development have focused on the country’s outsized public AI funding, its companies’ growing global successes and the rapid increase in its published AI research papers. Meanwhile, Europe has been the leading voice in advocating for norms and guidelines to guard against the risks related to AI development.

However, China’s national AI strategy, released in July 2017, already clearly indicated the government’s goal of creating ethical norms and regulations as early as 2020. Awareness of and concerns about AI ethics have grown among policymakers, think tanks and key industry players since then, and so has China’s ambition to lead the development of international AI ethics standards.

Creating a Chinese Framework for AI Ethics

Over the past six months, a variety of Chinese entities have issued a flurry of documents that aim to create the country’s first framework for the ethical applications of AI.

In March 2019, the CEOs of two of China’s largest technology companies, Baidu’s Robin Li and Tencent’s Pony Ma, made headlines for submitting proposals to China’s annual “Two Sessions” political gathering, in which they urged the government to create ethical guidelines for the development of emerging technologies, including AI. Two months later, several academic institutions and leading technology companies joined forces to publish the “Beijing AI Principles” and a “Joint Pledge on AI Industry Self-Discipline,” which laid out high-level guidelines for the research and development of AI, as well as companies’ responsibility to self-regulate.

June then saw the publication of “Governance Principles for a New Generation of AI,” the first ministry-level document on AI, developed by an expert committee under the Ministry of Science and Technology that had been established specifically to research policy recommendations for AI governance, including ethics. The document defines eight guiding principles for the development of “responsible AI” in China: harmony and friendliness; fairness and justice; inclusivity and sharing; respect for privacy; safety, security and controllability; shared responsibility; open collaboration; and agile governance.

Global Ambitions: Taking the Reins in Standards-Setting

Increasingly, the Chinese government is seeking to not only formulate standards to govern AI domestically, but also become a leading figure in international AI standards-setting. Its ambitions, laid out clearly in a white paper on AI standardization, are unsurprising given the rapid expansion of Chinese AI companies across the world.

China’s AI ethics principles thus far broadly align with European and other international documents that call for “trustworthy” and “responsible” AI. Chinese policymakers are also heavily emphasizing the protection of data privacy in the use of AI systems, which coincides with recent efforts to create a robust data protection regulatory scheme in China.

However, early signs indicate that China will define and govern AI ethics on its own terms and based on its own values. As international cooperation on AI inevitably expands, so will confrontations arising from differences in the value systems underpinning China’s and other countries’ AI ethics governance schemes.

Now is a good opportunity for multinational companies, in their quest for safe and responsible AI, to engage with Chinese institutions and policymakers to understand the issues and technical challenges unique to China.

Embracing the Power of AI: Is Southeast Asia AI-Ready?

By: Bee Shin, Singapore

As artificial intelligence (AI) continues to be widely adopted and researched globally, Southeast Asia is emerging as the next hotbed for AI acceleration. The optimistic outlook in the region stems from the fast-growing adoption of AI, which nearly doubled between 2017 and 2018, illustrating the region’s hunger for innovation-driven growth. Despite the commonly told narrative of AI’s potential disruption of labor markets, Southeast Asian governments have highlighted the importance of the technology for catching the next wave of scientific development, which, in turn, will fuel regional economic growth. But is Southeast Asia truly AI-ready?

The region’s aspirations have yet to materialize as Southeast Asian nations continue to grapple with challenges including a lack of infrastructure, trailing digital skills and the absence of widespread AI policies. Nonetheless, the region is steadily improving its AI readiness and is in the preliminary stages of drafting policies that can spur the development of AI technologies. For instance, several countries in the region, including Indonesia, Singapore, Thailand and Vietnam, have recently introduced cybersecurity and data privacy regulations, which can provide the policy foundation needed to accelerate AI adoption.

Recent developments also indicate that dedicated AI policy initiatives are top priorities for policymakers.

Southeast Asia is in the midst of strengthening its AI readiness. The initial strides by Singapore, Malaysia and Vietnam have ignited the search for best practices in AI adoption, which will have a spillover effect throughout the region. Through continuous discourse on the AI policy ecosystem, Southeast Asia is poised to embrace the power of human-centered AI and align digital disruption with national growth.

