The AI Act: Challenges for Justice and Democracy in the Deployment of AI-Based Systems

Author

Abstract

The entry into force of the AI Act will have a significant impact on the practices of both private and public actors. The Act identifies areas in which the risk of violations of fundamental rights is particularly high, including the administration of justice and democratic processes. This article analyses the provisions of the AI Act governing the use of AI systems in these areas, outlines the framework for their use and identifies the main risks to human rights. It considers the most important challenges that the AI Act raises in relation to justice and democratic processes, as well as the difficulties in interpreting the Act in this regard. It also proposes an approach that EU Member States can follow to adjust their national legal systems to these new challenges. It argues that meeting these challenges requires a coherent process for the digitisation of justice, one that distinguishes between systems subject to the high-risk AI regime and those that can be implemented without such burdens. As regards democratic processes, Member States must adopt regulations that, first, promote transparency in the use of AI tools and, second, encourage cooperation with online platforms in monitoring them.

Bibliography

Abiodun, O., & Lekan, A. (2020). Exploring the potentials of artificial intelligence in the judiciary. International Journal of Engineering Applied Sciences and Technology, 5(8), 23–27.

Aini, G. (2020). A summary of the research on the judicial application of artificial intelligence. Chinese Studies, 9, 14–28.

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. arXiv. https://doi.org/10.48550/arXiv.1904.02095

Binkowski, K. (2023). Sztuczna inteligencja a wykładnia prawa – propozycja zastosowania systemów AI do ustalania założenia o racjonalnym prawodawcy. Zeszyt Prawniczy UAM, 13, 7–17.

Bookman, P. K. (2021). Arbitral courts. Virginia Journal of International Law, 61, 179–184, 201–213.

Booth, R. (2026, 22 January). Experts warn of threat to democracy from ‘AI bot swarms’ infesting social media. The Guardian. https://www.theguardian.com/technology/2026/jan/22/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media

Brożek, B., Furman, M., Jakubiec, M., & Kucharzyk, B. (2024). The black box problem revisited: Real and imaginary challenges for automated legal decision making. Artificial Intelligence and Law, 32, 427–440.

Cyras, V., & Lachmayer, F. (2023). Essays on the visualisation of legal informatics. Springer International Publishing.

Day, P. (2020). Cambridge Analytica and voter privacy. Georgetown Law Technology Review, 4(2), 583–608.

Donohue, M. (2019). A replacement for Justitia’s scales? Machine learning’s role in sentencing. Harvard Journal of Law and Technology, 32(2), 657–678.

European Commission. (2024, 26 April). Communication from the Commission – Commission Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes Pursuant to Article 35(3) of Regulation (EU) 2022/2065, C/2024/2537 (O. J. C C/2024/3014, 26.04.2024).

European Commission. (2024a). Commission, online platforms and civil society increase monitoring during Romanian elections. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_6243

European Commission. (2024b). Commission opens formal proceedings against TikTok on election risks under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_6487

European Parliament and the Council. (2022, 19 October). Regulation on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance) (2022/2065) (O. J. L 277, 27.10.2022, pp. 1–102).

European Parliament and the Council. (2024, 12 July). Regulation Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance) (2024/1689) (O. J. L, 2024/1689, 12.07.2024).

European Partnership for Democracy. (2024). The EU’s Artificial Intelligence Act and its impact on electoral processes: A human rights-based approach. https://epd.eu/content/uploads/2024/09/AIand-elections.pdf

Farantouris, N., & Pipis, T. (2025, October). AI in the democratic sphere and the electoral process. https://farantouris.eu/wp-content/uploads/2025/10/Research-Paper-AI-Disinformation-.pdf

FRA. (2020). Getting the future right: Artificial intelligence and fundamental rights. Publications Office of the European Union.

FRA. (2025). Assessing high-risk AI: Fundamental rights risks. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2025-assessing-high-risk-ai-fundamental-rights-risks_en.pdf

Fülöp, T., & Poindl, P. (2025). Article 27. In C. N. Pehlivan, N. Forgó, & P. Valcke (Eds.), The EU Artificial Intelligence (AI) Act: A commentary (pp. 553–573). Wolters Kluwer.

Graux, H., Garstka, K., Murali, N., Cave, J., & Botterman, M. (2025). Interplay between the AI Act and the EU digital legislative framework. European Parliament. https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/778577/ECTI_ATA(2025)778577_EN.pdf

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., & Hussain, A. (2024). Interpreting black-box models: A review on explainable artificial intelligence. Cognitive Computation, 16, 45–74.

Hohmann, B., & Kollár, G. (2025). Reflections on the data protection compliance of AI systems under the EU AI Act. Cogent Social Sciences, 11(1). https://doi.org/10.1080/23311886.2025.2560654

Hussain, R., Raza, A., Siddiqi, I., Khurshid, K., & Djeddi, C. (2015). A comprehensive survey of handwritten document benchmarks: Structure, usage and evaluation. EURASIP Journal on Image and Video Processing, 2015(1), Article 46. https://doi.org/10.1186/s13640-015-0102-5

International Foundation for Electoral Systems. (2024). The Romanian 2024 election annulment: Addressing emerging threats to electoral integrity. https://www.ifes.org/publications/romanian-2024-election-annulment-addressing-emerging-threats-electoral-integrity

Itumeleng, M. M., & Esiefarienrhe, B. M. (2024). The impact of artificial intelligence, ethical implications and technologies on the electoral process. E-Journal of Humanities, Arts and Social Sciences, 5(16), 3211–3219. https://doi.org/10.38159/ehass.202451641

Judgment of the CJEU of 27 May 2019 in the case of Minister for Justice and Equality v. OG and PI, C-508/18.

Judgment of the CJEU of 27 May 2019 in the case of PF, C-509/18.

Juneja, P. (2024). Artificial intelligence for electoral management. International Institute for Democracy and Electoral Assistance. https://doi.org/10.31752/idea.2024.31

Kiejnich-Kruk, K. (2024). Lost in translation: Implementation of the right to a translator through the use of machine translators in the light of EU and Polish Law. Ruch Prawniczy, Ekonomiczny i Społeczny, 84(1), 61–81.

Kiejnich-Kruk, K. (2025). Building blocks – strategia cyfryzacji wymiaru sprawiedliwości. Perspektywa estońska. Przegląd Sądowy, 3, 86–100.

Kouroutakis, A. (2024). Rule of law in the AI era: Addressing accountability, and the digital divide. Discover Artificial Intelligence, 4, 115. https://doi.org/10.1007/s44163-024-00191-8

Krimmer, R., Rabitsch, A., Kužel, R., Achler, M., & Licht, N. (2022). Elections in digital times: A guide for electoral practitioners. The United Nations Educational, Scientific and Cultural Organization.

Levitina, A. (2025). Humans in automated decision-making under the GDPR and AI Act. Revista CIDOB d’Afers Internacionals, 138, 121–144.

Lupo, G. (2019). Regulating (artificial) intelligence in justice: How normative frameworks protect citizens from the risks related to AI use in the judiciary. European Quarterly of Political Attitudes and Mentalities, 8(2), 75–96.

Mantelero, A. (2024). The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template. Computer Law & Security Review, 54, Article 106020.

Michałkiewicz-Kądziela, E. (2024). The impact of deepfakes on elections and methods of combating disinformation in the virtual world. Teka Komisji Prawniczej PAN Oddział w Lublinie, 17(1), 152–153.

Nikolich, A. (2025). A unified code of ethics and conduct for AI and trustworthy elections. In B. Srivastava, A. Nikolich, A. Hickerson, & T. Koppel (Eds.), Promise: Promoting AI’s safe usage for elections (pp. 279–290). Springer. https://link.springer.com/chapter/10.1007/978-3-031-89853-2_17

Padmanabhan, D., Simoes, S., & MacCarthaigh, M. (2023). AI and core electoral processes: Mapping the horizons. AI Magazine, 44(3), 218–239. https://doi.org/10.1002/aaai.12105

Perrodet, A. (2002). The public prosecutor. In M. Delmas-Marty & J. R. Spencer (Eds.), European Criminal Procedure (pp. 415–455). Cambridge University Press.

Perry, M. (2017). iDecide: Administrative decision-making in the digital world. Australian Law Journal, 91, 29–41.

Pinto, R., Mettler, T., & Taisch, M. (2013). Managing supplier delivery reliability risk under limited information: Foundations for a human-in-the-loop DSS. Decision Support Systems, 54(2), 1076–1084.

Rawte, V., Sheth, A., & Das, A. (2023). A survey of hallucination in large foundation models. arXiv. https://doi.org/10.48550/arXiv.2309.05922

Renaissance Numérique. (2025). Interactions and overlaps between the GDPR and AI Act, with Etienne Drouard. https://www.renaissancenumerique.org/en/publications/interactions-and-overlaps-between-the-gdpr-and-ai-act-with-etienne-drouard/

Risso, L. (2018). Harvesting your soul? Cambridge Analytica and Brexit. In C. Jansohn (Ed.), Brexit means Brexit? (pp. 75–85). Akademie der Wissenschaften und der Literatur.

Sarra, C. (2025). Artificial intelligence in decision-making: A test of consistency between the EU AI Act and the GDPR. Athens Journal of Law, 11(1), 45–62.

Sartor, G., & Lagioia, F. (2020). The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. Publications Office of the European Union.

TIAL. (2025). White paper #001: Safeguarding elections in the age of AI and synthetic content. https://tial.org/publications/white-paper-001-safeguarding-elections-in-the-age-of-ai-and-synthetic-content/

Ufert, F. (2020). AI regulation through the lens of fundamental rights: How well does the GDPR address the challenges posed by AI? European Papers, 5(2), 1087–1097.

Ultima Ratio. (n.d.). Sztuczna inteligencja w Ultima Ratio. Czy roboty zastąpią arbitrów? Retrieved 5 February 2025, from https://ultimaratio.pl/blog/sztuczna-inteligencja-w-ultima-ratio-czy-roboty-zastapia-arbitrow

Vucheva, M., Rocha, M., Renard, R., & Stasinopolous, D. (2020). Study on the use of innovative technologies in the justice field – Final report. https://op.europa.eu/en/publication-detail/-/publication/4fb8e194-f634-11ea-991b-01aa75ed71a1/language-en

Weitkunat, R., & Bestle, M. (1990). Computerized Mackworth vigilance clock test. Computer Methods and Programs in Biomedicine, 32(2), 147–149.

Zienowicz, T. A. (2019). Artificial intelligence i singularity w procesie stosowania prawa. Prawo Mediów Elektronicznych, 2, 31–33.

Published

2026-03-31