The EU AI Act is a far-reaching set of rules for AI systems, seemingly modeled after the EU’s influential privacy legislation, the GDPR. With work on the sweeping Act now substantially complete, the EU appears intent on taking the regulatory lead once again, this time in the realm of AI.
While the AI Act is not yet law, the EU has been working on its provisions since 2017, when the European Council called for a “sense of urgency to address emerging trends including artificial intelligence …, while at the same time ensuring a high level of data protection, digital rights and ethical standards.” The AI Act had been progressing toward its final stages of review, and according to reports the EU originally intended to pass it before 2024 to avoid any entanglement with the 2024 EU elections.
Now, however, passage of the Act seems likely to be delayed. Three of the EU’s largest member states – France, Italy and Germany – have raised concerns with the AI Act and reportedly seek to alter aspects of it before assenting to its passage. Indeed, several competing policy preferences have now arisen among the original European Commission draft, proposals from the European Parliament, and proposals from these three key member states.
First, certain MEPs want to impose stricter requirements on facial recognition technology than the original 2023 draft AI Act already provides. Notably, the draft permitted some exceptions for law enforcement in certain instances and, more broadly, for national security, whereas these MEPs appear to want something closer to a total ban on the use of facial recognition technology in public spaces.
Second, other MEPs seek to strengthen aspects of governance, in particular provisions relating to potential fines. The original 2023 draft of the AI Act could impose fines of up to 6% of “worldwide annual turnover” (essentially revenue) for the most serious violations – namely, the use of prohibited AI systems. Other, less severe violations originally carried fines of up to 4% of worldwide turnover, while supplying misleading or incorrect information could result in fines of up to 2%. Reports do not make clear what the new, more stringent proposals will be.
The third and final roadblock largely centers on how so-called “Foundational AI” and “High-Risk” AI systems are treated. The original AI Act draft did not specifically address Foundational AI systems such as ChatGPT and Bard. Instead, it focused on more task-specific AI systems – for example, an AI system that might run a power plant, pilot a vehicle, or regulate a pacemaker’s rhythm.
This is where the Franco-German-Italian bloc comes into play. The original attempt to address so-called “Foundational” AI systems was to establish a nominally voluntary, yet binding, set of rules of conduct. Further, the European Parliament wanted to exempt smaller AI providers from certain rules and regulations to which the larger providers would be subject. This represented, at least in part, an attempt to favor smaller EU AI firms and disfavor the larger US-based firms. The French, Germans and Italians, however, apparently believe that their own firms are competitive. Moreover, they believe that a multi-tiered set of rules might backfire on the EU, as customers for these AI tools might consider the more heavily regulated systems to be the more trustworthy ones.
While passage of the EU AI Act is for now only delayed, with action expected in February 2024, each delay risks running up against the 2024 EU elections – and with a very different set of MEPs, a very different set of priorities for AI may come to the forefront.