The European Union Agency for Fundamental Rights has published a report titled ‘Assessing high‑risk artificial intelligence’, analysing how AI systems used in asylum, education, employment, law enforcement and public benefits can affect fundamental rights. The study, based on the EU AI Act’s definition of high‑risk AI, highlights the need for clear safeguards, independent oversight and transparent risk assessments by developers. It urges the EU and member states to adopt a broad definition of AI to capture all potential risks, and to ensure that developers assess how their systems affect people’s rights. The report stresses that protecting fundamental rights is essential for public trust and sustainable innovation, and the agency calls for further work on implementing the AI Act to make AI safer and more trustworthy.
© European Union Agency for Fundamental Rights.
Summary adapted from material on the FRA website (https://fra.europa.eu).
https://fra.europa.eu/en/video/2026/fundamental-rights-assessment-high-risk-ai
This summary was generated by AI. If you spot anything of concern, write to us at contact@cybach.com and we will promptly correct any errors.