AI Regulations: Can we keep control?
How does Europe tackle the challenge, and how do US and Chinese regulatory approaches differ?
In 2021, an AI experiment shocked the scientific community when Sean Ekins, founder of Collaborations Pharmaceuticals, and the Swiss Spiez Laboratory used AI to help discover the least toxic molecules capable of fighting diseases, and the AI instead generated over 40,000 lethal molecules within the span of just a few hours.
This is only one demonstration of the potential dangers of misusing AI. While the need for control and regulation is not contested, and some AI companies have even signed a petition calling for regulation, regulating without hindering progress is difficult, as shown by the fact that every nation adopts its own approach.
Europe
In Europe, Artificial Intelligence is regulated by two main systems:
The European Union AI Act entered into force on 1 August 2024. It categorises AI systems based on their risk: the riskier the system, the stricter the rules it has to follow.
Low-risk AI systems, such as e-mail spam filters or video game characters, are subject to almost no regulation, whereas high-risk systems, such as AI used in law enforcement, are far more tightly regulated, since without proper rules they can cause significant harm.
Some AI systems are even deemed to pose an unacceptable risk under the AI Act. Systems intended for cognitive manipulation, or those using large-scale biometric identification, are banned outright in the EU, with only a few exceptions made for law enforcement in high-risk scenarios.
On top of the AI Act, the Council of Europe’s Committee on Artificial Intelligence (CAI) recently approved a draft of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which offers a legal basis for ensuring that all AI systems are consistent with human rights. If a country decides to implement this framework, its AI companies must follow an extra set of rules designed to protect users.
Many are concerned that Europe’s strict AI regulation may backfire. Companies that do not want to be slowed down by these frameworks could shift their focus to more lenient markets such as the USA. Sam Altman, CEO of OpenAI, has already stated that OpenAI might leave the EU if stricter AI regulations were implemented. An exodus of AI companies from the EU could deal a substantial economic blow to the Union.
But many experts think otherwise. Dr. Peter van der Putten, assistant professor at Leiden University, explains that users of AI systems want themselves and their data to be safe and may have ethical concerns about unregulated systems. The sector would therefore end up regulating itself: users would turn towards companies that regulate their AI systems, forcing the others either to take an economic hit or to follow the path of regulation anyway. The end result would be the same, but reached at the cost of serious financial losses and the development of unethical systems along the way. “It would be a long, painful bloodbath,” says Van der Putten.
Max Gindt, AI policy expert, adds that companies look for regions with an abundance of R&D expertise and knowledge. Europe has many such hubs, including universities and other centers of excellence, which are crucially important to these companies and help keep them in the EU.
United States of America
The United States’ approach is more market-oriented and gives enterprises more freedom in order to boost the economy. The main regulatory measure taken so far is President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In addition to the executive order, individual states can pass their own laws.
The market-oriented approach allows US companies to bring their products to market quickly: in 2023, San Francisco was already allowing self-driving taxis. The inherent risk of this ‘shoot first, ask questions later’ approach is that companies invest huge sums in launching prototypes that are later deemed unsafe and taken off the market, sending developers back to the drawing board. According to Max Gindt, the slower but safer European approach can, in the long run, be beneficial for entrepreneurs.
According to Dr. Tal Mimran, associate professor at the Zefat Academic College, too much freedom is dangerous given the potential impact of AI on human rights and dignity.
Peter van der Putten, however, believes that the US will not go as far as the EU and that large-scale regulation like the EU AI Act is unlikely there in the coming years, though he sees the current rules as a step in the right direction.
Given the programs and measures announced by president-elect Donald Trump, one can reasonably expect the USA to relax regulation further and partly roll back Biden’s executive order in order to foster more business.
China
The primary purpose of China’s AI regulation is to benefit the state. Companies are partly state-owned, and the regulations ensure that the state can get the most out of any available data. Certain technologies that exist in China would be ethically unthinkable in Europe: systems such as large-scale facial recognition used for social scoring fall into the unacceptable-risk category of the EU AI Act and are therefore illegal in Europe.
This system fuels the misconception that China has little to no AI regulation. This is, however, incorrect. According to Tal Mimran, “China is light years ahead of the EU and the US”. Indeed, China has very specific laws in place for every AI use case; the issue is not a lack of regulation but the discrepancy in moral values between cultures.
This incompatibility is a cause for concern for many, since China is well on track to overtake the USA and become the next global technological leader. With AI trained on the massive amounts of data generated by its sheer population size, and with regulation used in the state’s interest, China is gaining an indisputable edge in this technology race. Tal Mimran believes it is not a matter of “if” but of “when” China will overtake the USA.
According to Max Gindt, however, Europe has nothing to be ashamed of. European countries are using their artificial intelligence technology in different ways, and the regulations ensure it progresses ethically. The systems prohibited by the EU are only the tip of the risk pyramid, and the mere ability to use them will not give China enough of a competitive advantage, say both Peter van der Putten and Max Gindt.
Military
Many people are concerned that too much regulation could result in weaker AI and therefore a considerable military disadvantage, but the regulations presented in this article do not affect military development.
For example, the CAI Framework Convention specifically includes exemptions for defensive use. Individual regions as well as the UN have specialised committees looking into regulatory solutions for AI in warfare.
Given AI’s vast capabilities, its use in the military seems inevitable. Most regulations are aimed at civil use and will not hinder defensive advancements, while specialised groups in different regions are working out military-specific rules.
Conclusion
Artificial Intelligence is a powerful tool that can easily be misused, and regulation is necessary, but it is essential not to hinder research and the progress of such technologies. Time will tell which approach will be more effective in regulating AI: the strict, risk-based European approach, the more liberal, market-oriented US philosophy, or the Chinese model, in which AI is strongly regulated but harnessed to strengthen the state’s interests.
Sources
Interviews
Dr Peter van der Putten, Director of AI Lab at Pegasystems and Assistant Professor at Leiden University, Netherlands
Dr Tal Mimran, Associate Professor at the Zefat Academic College, Adjunct Lecturer at the Hebrew University of Jerusalem, Head of Program at Tachlith Institute, Israel
Mr. Max Gindt, Attaché at the Ministry of State – Department of Media, Connectivity and Digital Policy, Luxembourg
EU AI Act: first regulation on artificial intelligence | Topics | European Parliament
The Framework Convention on Artificial Intelligence - Artificial Intelligence
Here is what's illegal under California's 18 (and counting) new AI laws | TechCrunch
Explained: China Social Credit System, Punishments, Rewards - Business Insider