Bias in Artificial Intelligence Systems
Keywords:
AI discrimination, AI fairness, algorithmic bias, artificial intelligence

Abstract
Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve making decisions about people or predicting their future behaviour. These decisions are commonly regarded as fairer and more objective than those made by humans, since AI systems are thought to be immune to influences such as emotions or subjective beliefs. In reality, using such a system guarantees neither objectivity nor fairness. This article describes the phenomenon of bias in AI systems and the role humans play in creating it. The analysis shows that AI systems, even when operating correctly from a technical standpoint, are not guaranteed to make decisions that are more objective than a human's; nevertheless, such systems can still be used to reduce social inequalities.

Bibliography
Angwin J., Larson J., Mattu S. and Kirchner L., Machine Bias, ProPublica 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Barfield W. and Pagallo U., Advanced Introduction to Law and Artificial Intelligence, Cheltenham/Northampton 2020. DOI: 10.4337/9781789905137.
Barocas S. and Selbst A.D., Big Data’s disparate impact, “California Law Review” 2016, vol. 104, no. 2. DOI: 10.2139/ssrn.2477899.
Berendt B. and Preibusch S., Toward accountable discrimination-aware data mining: The importance of keeping the human in the loop – and under the looking-glass, “Big Data” 2017, vol. 5, no. 2. DOI: 10.1089/big.2016.0055.
Boden M.A., Sztuczna inteligencja. Jej natura i przyszłość, trans. T. Sieczkowski, Łódź 2020.
Borysiak W. and Bosek L., Komentarz do art. 32, (in:) M. Safjan and L. Bosek (eds.), Konstytucja RP. Tom I. Komentarz do art. 1–86, Warsaw 2016.
Brennan T., Dieterich W. and Ehret B., Evaluating the predictive validity of the COMPAS risk and needs assessment system, “Criminal Justice and Behavior” 2009, vol. 36, no. 1. DOI: 10.1177/0093854808326545.
Cataleta M.S. and Cataleta A., Artificial Intelligence and Human Rights, an Unequal Struggle, “CIFILE Journal of International Law” 2020, vol. 1, no. 2.
Coeckelbergh M., AI Ethics, Cambridge/London 2020. DOI: 10.7551/mitpress/12549.001.0001.
Cummings M.L., Automation and Accountability in Decision Support System Interface Design, “The Journal of Technology Studies” 2006, vol. 32, no. 1. DOI: 10.21061/jots.v32i1.a.4.
Danks D. and London A.J., Algorithmic Bias in Autonomous Systems, “Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017)”, https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf. DOI: 10.24963/ijcai.2017/654.
Davenport T. and Kalakota R., The potential for artificial intelligence in healthcare, “Future Healthcare Journal” 2019, vol. 6, no. 2. DOI: 10.7861/futurehosp.6-2-94.
Dymitruk M., Sztuczna inteligencja w wymiarze sprawiedliwości? (in:) L. Lai and M. Świerczyński (eds.), Prawo sztucznej inteligencji, Warsaw 2020.
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)).
Fjeld J., Achten N., Hilligoss H., Nagy A. and Srikumar M., Principled Artificial Intelligence. Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, Cambridge 2020. DOI: 10.2139/ssrn.3518482.
Flasiński M., Wstęp do sztucznej inteligencji, Warsaw 2020.
Fry H., Hello world. Jak być człowiekiem w epoce maszyn, trans. S. Musielak, Krakow 2019.
Geman S., Bienstock E. and Doursat R., Neural networks and the bias/variance dilemma, “Neural Computation” 1992, vol. 4, no. 1. DOI: 10.1162/neco.1992.4.1.1.
Hacker P., Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law, “Common Market Law Review” 2018, vol. 55. DOI: 10.54648/COLA2018095.
High-Level Expert Group on Artificial Intelligence (appointed by the European Commission in June 2018), A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines, Brussels 2019.
High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, Brussels 2019.
Jernigan C. and Mistree B.F., Gaydar: Facebook friendships expose sexual orientation, “First Monday” 2009, vol. 14, no. 10. DOI: 10.5210/fm.v14i10.2611.
Kasperska A., Problemy zastosowania sztucznych sieci neuronalnych w praktyce prawniczej, “Przegląd Prawa Publicznego” 2017, no. 11.
Lattimore F., O’Callaghan S., Paleologos Z., Reid A., Santow E., Sargeant H. and Thomsen A., Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias. Technical Paper, Australian Human Rights Commission, Sydney 2020.
Massey G. and Ehrensberger-Dow M., Machine learning: Implications for translator education, “Lebende Sprachen” 2017, vol. 62, no. 2. DOI: 10.1515/les-2017-0021.
Michie D., Methodologies from Machine Learning in Data Analysis and Software, “The Computer Journal” 1991, vol. 34, no. 6. DOI: 10.1093/comjnl/34.6.559.
Neff G. and Nagy P., Talking to Bots: Symbiotic Agency and the Case of Tay, “International Journal of Communication” 2016, no. 10.
Ntoutsi E., Fafalios P., Gadiraju U., Iosifidis V., Nejdl W., Vidal M.-E., Ruggieri S., Turini F., Papadopoulos S., Krasanakis E., Kompatsiaris I., Kinder-Kurlanda K., Wagner C., Karimi F., Fernandez M., Alani H., Berendt B., Kruegel T., Heinze Ch., Broelemann K., Kasneci G., Tiropanis T. and Staab S., Bias in data-driven artificial intelligence systems – An introductory survey, “WIREs Data Mining and Knowledge Discovery” 2020, vol. 10, no. 3. DOI: 10.1002/widm.1356.
O’Neil C., Broń matematycznej zagłady. Jak algorytmy zwiększają nierówności i zagrażają demokracji, trans. M. Z. Zieliński, Warsaw 2017.
Raji I.D. and Buolamwini J., Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, “Conference on Artificial Intelligence, Ethics, and Society” 2019, https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/. DOI: 10.1145/3306618.3314244.
Ribeiro M.T., Singh S. and Guestrin C., “Why Should I Trust You?” Explaining the Predictions of Any Classifier, “22nd ACM SIGKDD International Conference 2016, San Francisco”, https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf. DOI: 10.1145/2939672.2939778.
Rodrigues R., Legal and human rights issues of AI: Gaps, challenges and vulnerabilities, “Journal of Responsible Technology” 2020, vol. 4. DOI: 10.1016/j.jrt.2020.100005.
Roselli D., Matthews J. and Talagala N., Managing Bias in AI, “Companion Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA”, May 2019. DOI: 10.1145/3308560.3317590.
Rutkowski L., Metody i techniki sztucznej inteligencji, Warsaw 2012.
White Paper On Artificial Intelligence. A European approach to excellence and trust, COM(2020) 65 final, European Commission, Brussels 2020.
Yapo A. and Weiss J., Ethical Implications of Bias in Machine Learning, “Proceedings of the Annual Hawaii International Conference on System Sciences” 2018. DOI: 10.24251/HICSS.2018.668.
Zuiderveen Borgesius F., Discrimination, artificial intelligence and algorithmic decision-making, Council of Europe, Strasbourg 2018.