
As artificial intelligence becomes more embedded in daily life, Europe now faces a quiet legal standoff.

On one side stands the AI Act (AIA), the EU’s first attempt at a comprehensive AI framework, built to ensure systems are trustworthy and fair. On the other is the General Data Protection Regulation (GDPR), the strictest data protection law in the world. Each law has its own logic and its own safeguards, but their interaction is anything but harmonious. The unpredictable nature of AI, combined with the GDPR’s strict limitations and how easily they can be breached, creates a minefield of legal uncertainty, one that companies across Europe, and beyond, may already be walking into without realizing it.
A recent analysis by the European Parliamentary Research Service highlights this problem, pointing to a potential conflict, especially around algorithmic discrimination and the processing of sensitive personal data. Because both laws reach beyond the EU’s borders, including into the Western Balkans, companies offering AI solutions and handling EU citizens’ data should closely monitor these developments to minimize risk.

GDPR and Sensitive Data

The GDPR places significant restrictions on the use of sensitive personal data—such as information about an individual’s race, health, or political views—due to the heightened risk it poses to fundamental rights and freedoms. In principle, organizations are prohibited from processing this type of data, unless very specific conditions are met. The most common of these include obtaining the individual’s explicit consent or having a clearly defined legal basis in public interest, provided that adequate safeguards are in place.

This strict approach is intentional. The GDPR is designed to prevent misuse of personal data, particularly in contexts where discrimination or harm could result. However, this also creates a practical challenge: in order to identify or correct bias in AI systems, developers often need access to the very types of data the GDPR tightly controls. As a result, companies may be legally unable to gather the information needed to test their AI systems for fairness—even when their intent is to prevent discriminatory outcomes.

Bias Detection under the AIA

The AI Act targets high-risk AI systems—such as those used in hiring, credit scoring, or healthcare—with the aim of reducing discrimination and unfair outcomes. To support this, it permits the use of sensitive personal data, like racial or health information, but only when it is strictly necessary for detecting and correcting bias, and subject to appropriate safeguards.

In practice, AI systems have already produced troubling results. Hiring algorithms have favored certain genders or penalized certain health conditions; credit scoring tools have disadvantaged individuals based on postcode or ethnicity; autonomous vehicles have shown reduced accuracy in detecting dark-skinned pedestrians; and generative AI has reproduced offensive or discriminatory content drawn from biased training data.

All of these cases point to the same underlying issue: bias cannot be corrected if it cannot be measured. And measuring it often requires access to exactly the type of data the GDPR restricts.
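
To make the measurement problem concrete, here is a minimal Python sketch of one of the simplest fairness checks, a demographic parity gap: the difference in positive-decision rates between groups. The data, group labels, and function names below are illustrative assumptions for a hypothetical hiring audit, not anything prescribed by the AI Act or the GDPR. The point is structural: the metric cannot be computed at all unless each decision is paired with the sensitive attribute.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions (1 = hired) within each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest spread in selection rates between any two groups.

    A gap near 0 suggests similar treatment across groups; a large gap
    is a signal that the system may be biased and needs investigation.
    """
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: each hiring decision is paired with a
# sensitive attribute (e.g., ethnicity), i.e., exactly the category of
# data the GDPR restricts. Without the groups list, neither metric
# above can be computed.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions, groups))  # 0.5
```

Real audits rely on richer metrics (equalized odds, calibration across groups), but every one of them takes the group label as an input; strip the sensitive attribute from the dataset and the gap above becomes unknowable rather than zero.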

This conflict is not only legal, but also philosophical. The GDPR reflects a traditionally European view—especially pronounced in France—that true equality is best preserved by avoiding the formal recognition and processing of sensitive data. The AI Act, by contrast, permits the use of such data for bias detection, following a logic more in line with U.S. legal thinking, where classification is treated as a necessary tool for addressing inequality. The result is a clash between two instincts that are difficult to reconcile—one approach seeks to prevent division, and the other begins by reinforcing it.

Possible Solution

While the AI Act explicitly states that it must not affect the application of the GDPR—and one might assume that GDPR compliance takes precedence—the actual legal relationship between the two remains unclear. Even the European Parliamentary Research Service acknowledges that it is uncertain how these laws will be interpreted when they come into conflict.

One possible solution, the authors suggest, is to treat the processing of sensitive data for bias detection as being in the substantial public interest, which could justify it under the GDPR. However, this approach is still legally untested and would require further clarification or reform.

Key Takeaways

The GDPR strictly limits the use of sensitive personal data, while the AI Act encourages its use for detecting bias in high-risk AI systems. This creates a situation where complying with one framework may risk violating the other. Both laws carry hefty fines for non-compliance, leaving companies caught between a rock and a hard place. Although the analysis comes from the European Parliamentary Research Service and is not legally binding, it highlights a real conflict that regulators will likely need to address. Amendments to the GDPR, the AI Act, or both are probable. In the meantime, companies targeting the EU should proceed with caution and seek legal advice before using sensitive data for bias detection.
