
Beneficiary: Politiscope
Support: European Digital Rights (EDRi)
Amount: €7,500
With the adoption of the EU Artificial Intelligence Act, countries face a key question: how can they establish an institutional framework and implement the new rules in a way that genuinely protects people, rather than merely ticking formal boxes? The project NERA AI (National Enforcement and a Rights-based Approach to Artificial Intelligence) was launched to open up this space for public debate in Croatia, both for civil society and for members of the media.
The educational seminar “AI – The Other Side of the Coin” was dedicated to the practical consequences of the AI Act and the risks the technology already poses to rights and freedoms. The discussion highlighted the importance of shifting attention away from abstract “Skynet scenarios” and towards very concrete, already present threats: discrimination, surveillance, and unfair algorithmic decisions. Speakers included Vanja Skorić (ECNL), Ella Jakubowska (EDRi), Jelle Klaas (PILP), Filip Milošević (SHARE Foundation), Nađa Marković (A11 Initiative), Marija Renić, and Tamara Zavišić (ETIK.AI).
After hearing from experts who work on these issues in practice, the program continued with a short presentation of the EU Artificial Intelligence Act by Duje Prkut (Politiscope), which served as an introduction to the panel discussion on the upcoming Croatian enforcement law. Panel participants included Anamarija Mladinić (AZOP), Danilo Krivokapić (SHARE), Maja Cimerman (Danes je nov dan), and Duje Kozomara (Politiscope).
In the autumn, an online AI Act Navigator will be published in Croatian: a tool that will give domestic audiences a clearer overview of this complex legal text.
The project also supports advocacy and lobbying activities aimed at opening up policy and legislative processes on artificial intelligence, from which civil society organizations and the fundamental-rights perspective are currently absent. During the public consultation process, Politiscope will submit its analyses and recommendations for establishing an effective legal and institutional framework that protects citizens from the harm that can result from irresponsible and non-transparent use of artificial intelligence.