How countries around the world are trying to regulate AI

There’s a lot of excitement around generative AI’s ability to create content, with use cases ranging from producing marketing materials to translating podcasters’ voices into different languages. But with great promise come great concerns, including, but certainly not limited to, the spread of misinformation at scale (especially via deepfakes); the use of creators’ work without attribution; and massive job losses due to automation.
With those potential downsides in mind, governments around the world are attempting to create AI regulations that promote safety and fair use while still encouraging innovation.
US 🇺🇸

The US government has so far let tech companies come up with their own safeguards around AI. But lawmakers say that AI regulation is necessary, and in 2023 they held multiple meetings with leading AI companies, from OpenAI to Nvidia. Lawmakers have discussed licensing and certification requirements for high-risk AI models.
Meanwhile, federal agencies like the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission, and the Federal Trade Commission have said that many AI applications are already subject to existing laws.
UK 🇬🇧

The United Kingdom, which is home to AI companies like Google’s AI lab DeepMind and AI video maker Synthesia, wants to avoid heavy-handed legislation that could stifle innovation. Regulation would be based on principles such as safety, transparency, fairness, and accountability, reported Reuters.
The UK government also wants to play a key role in regulating AI. It is expected to hold its first AI safety summit on Nov. 1 and Nov. 2. “The UK has long been home to the transformative technologies of the future,” Prime Minister Rishi Sunak said in a statement. “To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.”
The country plans to split responsibility for governing AI among existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology. That approach differs from the one floated by US lawmakers, who have discussed an independent body to oversee the licensing of high-risk AI models. Academics have criticized the independent-regulator approach, saying it would take time to build an entirely new agency.
EU 🇪🇺

Since 2021, the EU, which has a track record of implementing stricter rules on the tech industry compared to other regions, has been working towards passing the AI Act, which would be the first AI law in the West.
The act proposes classifying AI systems by risk. Higher-risk systems, including AI hiring tools and exam-scoring software, would face greater compliance standards, such as data validation and documentation, than lower-risk ones. The proposed rules also prohibit intrusive and discriminatory uses of AI, such as real-time remote biometric identification systems in publicly accessible spaces or predictive policing systems based on profiling.
Last month, European commissioner Thierry Breton said that the EU is developing an AI Pact to help companies prepare for the implementation of the AI Act, and that startups, not just Big Tech, need to be included in discussions on AI governance. Startups have previously said that the EU’s proposed regulation is too restrictive.
China 🇨🇳

In China, generative AI providers must undergo a security assessment, and AI tools must adhere to socialist values. But generative AI technology developed only for use outside of the country is exempt from the rules. Chinese tech companies like Baidu, Tencent, and Huawei have research and development centers in Silicon Valley.
To rival OpenAI’s ChatGPT, the country’s biggest internet companies, including Alibaba, Baidu, and JD, have announced their own AI chatbots. China has said it wants to be the world leader in AI by 2030 and has laid out plans to commercialize AI in a number of areas, from smart cities to military uses, as CNBC reported.
Japan 🇯🇵

Japan is leaning towards softer rules governing the use of AI. This week, Prime Minister Fumio Kishida pledged that the next economic package will include funds for AI development at small and midsize companies, which could help Japan regain its lost lead in technology.
Japan has also said that using copyrighted images to train AI models doesn’t violate copyright laws, which means that generative AI providers can use copyrighted work without securing permission from the owners of the images. “In Japan, works for information analysis can be used regardless of the method, whether for non-profit purposes, for profit, for acts other than reproduction, or for content obtained from illegal sites,” said a member of the House of Representatives for the Constitutional Democratic Party of Japan.
That said, the representative acknowledged that using the image against the will of the copyright holder is problematic from the viewpoint of rights protection, and suggested a need for “new regulations to protect copyright holders.”
Brazil 🇧🇷

Lawmakers in Brazil have started to draft regulations for AI that include requiring providers of AI systems to submit a risk assessment before rolling out the product to the public.
Regardless of an AI system’s risk classification, regulators say that people affected by it would have the right to an explanation of a decision or recommendation within 15 days of requesting one. Academic experts contend that such explanations are hard to provide, because it is often difficult to determine why an AI system produced a particular output.
Israel 🇮🇱

The country, which has been holding discussions with both Israeli and US-based tech companies, has proposed regulation focused on “responsible innovation” that protects the public while also advancing Israel’s high-tech industry.
The government has said, vaguely, that AI policy will include the use of “soft” regulatory tools and the adoption of ethical principles similar to those accepted around the world. Regulation is expected to arrive “in the coming years.”
Italy 🇮🇹

In March, Italy became the first Western country to ban ChatGPT, temporarily, over unlawful data collection. Since then, the government has said it allocated 30 million euros ($33 million) to boost the digital skills of unemployed people and of workers in positions at risk of automation.