The European Union wants to make corporations liable for AI that causes harm

EU lawmakers are working on new regulations that will make it simpler to file damage claims against AI developers. As part of Europe's effort to stop the release of harmful AI systems by developers, a bill was introduced this week that is expected to become law within the next couple of years. Consumer activists say it doesn't go far enough, while tech companies say it could stifle innovation.

There is ample evidence of the negative effects of advanced AI technologies on individuals, communities, and even entire societies. Predictive artificial intelligence systems used to approve or reject loans can be less accurate for minorities, and social media algorithms promote misinformation.

The new bill, called the AI Liability Directive, will add teeth to the EU's AI Act, which is expected to become EU law around the same time. Systems used in law enforcement, hiring, and health care are all examples of "high risk" AI applications that could face additional scrutiny under the proposed AI Act.

With the new liability bill in place, victims of harm caused by AI systems would be able to file lawsuits for compensation. The point is to get everyone involved in creating and using AI to take responsibility for what they've done and provide details about the design and training of their systems. Those tech companies that don't play by the rules could face EU-wide class actions.

If a job applicant claims they were unfairly rejected by an AI system used to screen resumes, they could petition the court to compel the AI company to grant them access to information about the system in order to determine what went wrong and who was at fault. They could then file a lawsuit with that evidence in hand.

It will take at least a few years for the proposal to make its way through the EU's legislative process. Members of the European Parliament and EU governments will propose amendments to it, and they can expect to face heavy lobbying from tech companies that argue the proposed rules will have a "chilling" effect on innovation.

According to Mathilde Adjutor, Europe's policy manager for the tech lobbying group CCIA, which represents companies like Google, Amazon, and Uber, the bill could have a negative impact on software development.

Under the new regulations, developers "risk becoming liable for software bugs, but also for the potential impact on the mental health of users," she says.

Given AI's potential for discrimination, the bill's shift of power from corporations to consumers is welcomed by Imogen Parker, associate director of policy at the AI research institute the Ada Lovelace Institute. Thomas Boué, head of European policy at tech lobby BSA, whose members include Microsoft and IBM, says the bill will ensure there is a uniform way to seek compensation across the EU when an AI system causes harm.

However, some consumer advocates believe the proposals don't go far enough and will make it too difficult for consumers to file claims.

European Consumer Organization deputy director general Ursula Pachl called the proposal a "real letdown" because it would place the burden of proof on individual consumers to show that an AI system caused them harm or that the developer was negligent.

Pachl warns that consumers won't be able to make use of the new rules in a world full of "black box" AI systems that are difficult to understand. As an example, she cites the difficulty of proving that a person was discriminated against on the basis of race by the design of a credit scoring system.

According to Claudia Prettner, EU representative at the Future of Life Institute, a non-profit that focuses on existential AI risk, the bill also fails to account for indirect harms caused by AI systems. A better version, Prettner argues, would hold companies liable for the harm their systems cause whether or not the company was at fault.

Artificial intelligence systems are often developed for a specific goal but end up causing harm in a different domain, she notes. The algorithms used by social media platforms, for instance, were designed to increase user engagement but ended up promoting divisive content.

EU officials want their AI Act to become the global industry standard for AI governance. Other countries, including the United States, where attempts to regulate the technology are already under way, are watching closely. The Federal Trade Commission is considering rules on how businesses handle data and develop algorithms, and it has ordered companies that illegally collected data to delete their algorithms; Weight Watchers received such an order earlier this year for illegally collecting data on minors.

The success or failure of the new EU legislation will have far-reaching consequences for the global regulation of artificial intelligence. It is crucial that the EU gets liability for AI right, as this will benefit citizens, businesses, and regulators alike. "Without it, we will never be able to make AI useful to people and society," argues Parker.