The Brussels Privacy Hub, together with more than 150 academics from across Europe and beyond, is urging the inclusion of a fundamental rights impact assessment (FRIA) in the forthcoming EU Artificial Intelligence Act (AI Act). While the European Parliament's proposal includes such a requirement, there is concern that it may be weakened during the trilogue negotiations.
The signatories, recognized experts in AI, data protection, and fundamental rights, advocate retaining the Parliament's version. In particular, they call for clear assessment parameters, transparent disclosure of assessment results, the involvement of end-users (especially vulnerable groups), and the engagement of independent public authorities in the assessment process and in auditing mechanisms.
Among those endorsing the appeal is Niels van Dijk, director of the d.pia.Lab (The Brussels Laboratory for Data Protection & Privacy Impact Assessments). His extensive expertise in data protection and privacy impact assessments lends further weight to the call for robust fundamental rights assessments within the AI Act. Such assessments are vital to ensuring that the development and deployment of artificial intelligence technologies respect and safeguard the rights and privacy of individuals across Europe.