
As part of the European Commission’s 2024 consultation on prohibitions and definitions under the AI Act, Politiscope submitted a detailed response highlighting critical gaps in the current regulatory approach to high-risk AI systems. Our contribution focused on ensuring that fundamental rights are protected, especially in the context of biometric surveillance and inference of sensitive characteristics.


We participated in this survey to advocate for a future where AI systems are designed and deployed in ways that respect human dignity, privacy, and democratic values. Below are some key positions from our submission:


Clarity on Remote Biometric Identification
We stressed that the use of AI for remote biometric identification (RBI) must be explicitly included within the scope of high-risk systems. Vague or overly technical language creates enforcement gaps and allows intrusive surveillance technologies to operate without accountability.
“We caution against the misuse of the term ‘biometric identification’, particularly where it may mask broader surveillance functions.”
This kind of terminological drift risks watering down safeguards and creating loopholes that would enable widespread biometric tracking in public spaces.


No Room for Discriminatory Inference
We called on the Commission to clearly prohibit AI systems designed to infer or deduce sensitive traits—such as race, political beliefs, or sexual orientation. These inferences are often speculative, inaccurate, and deeply invasive, especially when deployed in law enforcement, education, or employment contexts.


Apply Protections Equally Across All Actors
We urged regulators to clarify that prohibitions and safeguards must apply to all actors, including public authorities and national security bodies. There is a worrying trend of exempting state surveillance under broad “public security” justifications, and we believe this undermines the spirit of the AI Act.


Strengthen the AI Act’s Acknowledgment of Biometric Risk
We supported the AI Act’s recognition that real-time biometric surveillance in public spaces carries significant risks, but pushed for stronger language and stricter enforcement mechanisms. Exceptions must remain truly exceptional, not a backdoor to the normalization of biometric surveillance.


Reject ‘Legitimate Interest’ as a Loophole
Our response also flagged the misuse of ‘legitimate interest’ as a legal basis for deploying intrusive biometric or inference-based AI systems. We argued that this basis is unsuitable for high-risk processing and called on the Commission to clearly discourage its use in this context.

At Politiscope, we believe that AI governance must be rooted in democratic accountability, legal clarity, and an unwavering commitment to human rights. Our work on this survey continues our broader mission: to ensure that AI technologies in Europe serve people—not power.