The European Union on Wednesday unveiled strict regulations to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.
The draft rules would set limits on the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank loans, school enrollment selections and exam scoring. They would also cover the use of artificial intelligence by law enforcement and court systems – areas considered “high risk” because they could threaten people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.
The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies, including Amazon, Google, Facebook and Microsoft, which have poured resources into developing artificial intelligence, as well as dozens of other companies that use the software to develop drugs, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and the allocation of public services, such as income support.
Companies that violate the new regulations, which could take several years to move through the European Union’s policymaking process, could face fines of up to 6 percent of global sales.
“On artificial intelligence, trust is a must, not a nice-to-have,” said Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”
The European Union regulations would require companies providing artificial intelligence in high-risk areas to supply regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies must also guarantee human oversight in how the systems are created and used.
Some applications, such as chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images such as “deepfakes,” would have to make clear to users that what they are seeing is computer-generated.
For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, with its policies often used as blueprints by other nations. The bloc has already adopted the world’s most comprehensive data privacy regulations and is debating additional antitrust and content moderation laws.
But Europe is no longer alone in pushing for tougher oversight. The largest technology companies now face a broader reckoning from governments around the world, each with its own political and policy motivations, to curb the industry’s power.
In the United States, President Biden has stacked his administration with critics of the industry. The UK is creating a technology regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic technology giants such as Alibaba and Tencent.
The outcomes in the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services or online freedoms depending on where they are.
Artificial intelligence – in which machines are trained to perform tasks and make decisions on their own by studying huge volumes of data – is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising major gains in productivity.
But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy or result in the automation of more jobs.
The release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.
“There has been a lot of discussion in recent years about what it would mean to regulate AI, and the fallback option up to now has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”
Ms. Kind said many were concerned that the policy was overly broad and left too much discretion to companies and technology developers to regulate themselves.
“If it doesn’t set strict red lines and guidelines and very firm limits on what’s acceptable, it opens up a lot for interpretation,” she said.
How to develop artificial intelligence that is fair and ethical has become one of the most contentious issues in Silicon Valley. In December, a co-leader of a Google team studying the ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.
In the United States, government authorities are also weighing the risks of artificial intelligence.
This week, the Federal Trade Commission warned against selling artificial intelligence systems that use racially biased algorithms or that could “deny people employment, housing, credit, insurance or other benefits.”
Elsewhere, in Massachusetts and in cities like Oakland, Calif.; Portland, Ore.; and San Francisco, governments have taken steps to restrict the use of facial recognition by the police.