

Artificial Intelligence and Cognitive Biases: A Viewpoint

Pages 223 to 231


1 Not only academic journals, but also newspapers, TV shows, and social media are full of discussions about Artificial Intelligence (AI) and its impact on individuals, companies, societies, and humanity at large. A common question is whether such systems exhibit biases, and to date there is broad agreement that generative artificial intelligence applications do have such biases: not necessarily intended, but they are there.

2 Hence, we thought it could be worth a try to involve generative AI here. Our starting point was a simple question to ChatGPT: “What are the cognitive biases of AI?”

3 This was the answer from ChatGPT: “Cognitive biases in AI can stem from various factors, including how data is collected, labeled, and used to train AI models, as well as decisions made during the design and implementation of these systems. Here are some examples of common cognitive biases in AI:

  1. Data selection bias: The data used to train AI models can be biased due to how it is collected, leading to inaccurate or unfair predictions. For example, if the training data does not represent different populations or categories fairly, the AI may reproduce and amplify these imbalances.
  2. Representation bias: AI models can also be biased due to how data is represented. For instance, if certain features are underrepresented or misrepresented in the training data, the AI may struggle to generalize accurately.
  3. Algorithmic bias: The algorithms used in AI systems can also introduce biases, either through the machine learning techniques used or the design decisions made by developers. For example, some algorithms may be inherently more prone to producing biased results due to their internal workings.
  4. Confirmation bias: AI models can also be subject to confirmation bias, meaning they tend to favor and reinforce information that confirms existing hypotheses or biases present in the training data.
  5. Human interaction bias: Interactions between human users and AI systems can also introduce biases, whether through users’ interactions with the systems or how the systems’ decisions are interpreted and implemented by humans.

5 It’s important to recognize and monitor these biases to develop and use AI systems responsibly and ethically, ensuring they do not perpetuate or amplify biases and injustices present in society. Moreover, efforts to mitigate these biases include diverse and representative data collection, transparent AI development processes, and ongoing evaluation and auditing of AI systems for fairness and equity”.

6 This answer came as a surprise to us for several reasons. First of all, the answer is rather short. It does not define cognitive biases, which is surprising, since various definitions are available; in a regular class, such a question could be answered with several pages of discussion. The formulations “can stem” and “Here are some examples” indicate that ChatGPT does not even try to give a comprehensive answer: it gives examples and general statements. Moreover, no information is given about where further details on such biases could be found. Since a lot of research on different kinds of biases is available on public platforms such as Google Scholar and beyond, a hint in this direction could have helped. Not surprisingly, it does not show references that would let the reader understand the background of the answers given. Furthermore, our question was very general, so another approach could have been to ask a question back, for example, in which context we would like to know about these biases. Regional or cultural aspects could also have been worth mentioning. Finally, our general question does not indicate which kind of AI we are referring to: this could be Generative AI, but also other variants such as Traditional AI.

7 There are many more questions we could ask with further prompting. But instead of doing so, we will reflect upon these biases in the following and indicate ways to overcome them.

A Reflection on Cognitive Biases in AI

8 The decisions and outputs of artificial intelligence systems may be influenced by inclinations or systematic errors in thinking that are referred to as human cognitive biases. Cognitive biases in humans result from the use of heuristics, or shortcuts, while thinking (Bazerman, Moore, 2013). As described in the groundbreaking work of Tversky and Kahneman (1974), cognitive biases are systematic patterns of deviation from rationality in judgment or decision-making, in which people construct their own reality based on cognitive limitations, motivational factors, and/or adaptation to natural environments.

9 Stated differently, individuals who are faced with both basic and intricate cognitive tasks want to optimize the benefits they derive from their interactions with their surroundings. Throughout this process, all pertinent information is considered for issue resolution, while extraneous information is left out. Artificial intelligence attempts to mimic this rationality presumption. However, rather than using logical calculations when making decisions that require processing and interpreting information, people frequently depend on preexisting mental models and shortcuts based on assumptions and prejudices. In this manner, they establish their own “subjective social reality” (Rastogi et al., 2022). Such a paradigm implies that the large-scale human-generated datasets used to train AI systems make them susceptible to reflecting human cognitive biases. One well-known cognitive bias relevant to AI decision-making is the availability heuristic, the tendency to rely on information that comes most readily to mind; closely related is the tendency to favor data that confirms existing ideas, known as confirmation bias (see, for instance, Çalikli, Bener, 2013; Salman et al., 2019; Salawu et al., 2019). When confronted with contradictory or unclear evidence, we frequently gravitate towards the interpretation of the facts at hand that seems most enlightening or reasonable to us. This may be a reasonable tactic in certain circumstances, but in many other situations it can lead to an unending and unavoidable cycle of failure. One well-known instance occurs when models depend too much on heuristics to make decisions and wind up using outdated or irrelevant data. This insight casts doubt on the growing theory that AI is “smarter” than people or that its performance or intellect may outpace that of humans. This is not to argue that artificial intelligence models may never become superhuman over time. Instead, it demonstrates how drastically the actual course of action differs from such aspirations and pledges. Models frequently rely on mathematical-statistical patterns and surface-level linkages rather than deeper comprehension and reasoning. They also struggle with tasks that call for imagination, uniqueness, or originality, and they may find it difficult to adjust to novel or shifting circumstances or to absorb criticism or lessons from their blunders. Generally, decision-making processes are biased toward an initially presented value, a tendency referred to as anchoring-and-adjustment or anchoring bias (Tversky, Kahneman, 1974; Buijsrogge, 2014).

10 Learning from human data, AI systems are prone to ingesting not just our expertise but also our preconceptions and biases. These models are mirrors of the conversation we have as a society, not truth-tellers; they are not able to independently determine what is morally or factually accurate. Cognitive bias in AI systems has drawn significant criticism due to its potential to perpetuate and exacerbate existing societal inequalities and reinforce harmful stereotypes (Ashmore et al., 2019; Baeza-Yates, 2018; O’Neil, 2016). AI systems trained on biased data will show those prejudices in the final model, which may result in discriminatory effects such as biased employment procedures or unequal treatment in the criminal justice system (Soleimani et al., 2022; Caliskan et al., 2017; Angwin et al., 2022). AI systems, especially those that rely on machine learning, may inadvertently amplify biases held within the training data. An AI recruiting system trained on previous hiring data, for instance, may reinforce gender prejudice by giving preference to male candidates. Because AI systems can make judgments on their own or with little human supervision, it can be challenging to hold people responsible for skewed results (Kliegr et al., 2021), and this lack of accountability may worsen the effect of biased judgments on affected individuals or groups (Omrani et al., 2022). Many AI models, particularly complex deep learning models, function as “black boxes”, with little information available about how they make decisions (Wang, Siau, 2019). Due to this opacity, it is difficult to recognize, rectify, and justify biased judgments to stakeholders (Omrani et al., 2022). By making judgments based on traits like gender, ethnicity, or socioeconomic position, biased AI systems have the potential to perpetuate negative stereotypes, which can impede initiatives to promote diversity and inclusion and maintain systemic inequality.

11 The use of biased AI systems gives rise to moral questions of justice, fairness, and the effects on both society and individuals. Biased AI carries the potential to erode public confidence in technology and maintain societal divides. Handling bias in AI systems also presents several legal and regulatory issues: the absence of well-defined norms and protocols for detecting and reducing bias leaves institutions and decision-makers uncertain about how to approach these problems (Omrani et al., 2022).

12 Cognitive biases in AI pose significant challenges that need to be addressed to ensure the fairness, transparency, and reliability of AI systems. Recognizing and understanding these biases is a crucial step toward mitigating their impact and improving the overall performance of AI technologies. The ability to work with AI and machine learning algorithms will improve with increased knowledge of cognitive bias (Schwartz et al., 2022). It will be interesting to see future research discoveries in this field, which is only just emerging. An interesting avenue is empirical studies on algorithmic biases, which reveal dynamics such as fairness perceptions (Kordzadeh, Ghasemaghaei, 2022).

Ways to Overcome Cognitive Biases in AI

13 Algorithms and techniques that help AI systems identify and reduce bias must be implemented, and standardized techniques to check AI systems for bias before deployment should be provided. This might entail developing assessment measures and benchmark datasets to gauge the accuracy and fairness of AI algorithms across various demographic categories, as illustrated in the sketch below. Finding and fixing biases at different phases of development and implementation entails carefully reviewing datasets, algorithms, and decision-making procedures (Harris, 2020). It also means establishing procedures for ongoing monitoring and feedback to identify and rectify biases in AI systems once they are put into use. This might entail gathering user input, tracking system performance over time, and adjusting algorithms as new biases are found.
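
To make such assessment measures concrete, the following is a minimal, illustrative sketch in Python, our own example rather than a method from the cited works, of how a bias audit could look: it computes two common group-fairness gaps of a classifier's decisions across demographic groups. All names and data are hypothetical.

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Compute two common group-fairness metrics.

    y_true: ground-truth labels (0/1); y_pred: model decisions (0/1);
    group:  demographic group membership per individual (hypothetical).
    """
    rates, tprs = [], []
    for g in np.unique(group):
        mask = group == g
        # Selection rate: share of group g receiving a positive decision.
        rates.append(y_pred[mask].mean())
        # True positive rate: positive decisions among truly qualified members.
        tprs.append(y_pred[mask & (y_true == 1)].mean())
    return {
        # Demographic parity gap: spread of selection rates across groups.
        "demographic_parity_gap": max(rates) - min(rates),
        # Equal opportunity gap: spread of true positive rates across groups.
        "equal_opportunity_gap": max(tprs) - min(tprs),
    }

# Hypothetical audit of a hiring model whose decisions favor group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)
print(fairness_gaps(y_true, y_pred, group))
```

In practice, metrics of this kind would be computed on held-out audit data for every relevant protected attribute before deployment, and re-computed as part of the ongoing monitoring described above.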

14 Though cognitive biases in AI are concerning, it is crucial to recognize that these issues reflect the techniques and data used to train and create AI systems rather than fundamental defects in AI itself. The lack of diversity and representativeness in training data, the creation and application of algorithms, and the decision-making procedures guiding the creation and use of AI are all worthy of criticism. Mitigation means ensuring that training datasets are comprehensive, varied, and accurately reflect the populations they are meant to assist (Harris, 2020), which lessens the possibility that biases in the data persist; one simple technical lever of this kind is sketched below. It also means encouraging inclusivity and diversity in AI development teams to counteract groupthink and bring in a range of viewpoints (Zhang et al., 2019), and engaging stakeholders from various backgrounds throughout the development process to get their input and spot potential biases early on. This is in line with Bazin (2024), who calls for making AI more sustainable. The established approaches of Responsible Innovation (Barlatier et al., 2024) or Responsible Research and Innovation could be appropriate frameworks to manage the networked responsibilities (Timmermans et al., 2017) that we also see in the context of AI.
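
As a hedged illustration of one way under-representation in training data can be counteracted, the following sketch, our own hypothetical example and not a method from the cited references, assigns inverse-frequency sample weights so that each demographic group carries equal total weight during training.

```python
import numpy as np

def balanced_sample_weights(group):
    """Weight each example inversely to its group's frequency so that
    every demographic group carries equal total weight in training."""
    groups, counts = np.unique(group, return_counts=True)
    per_group = {g: len(group) / (len(groups) * c)
                 for g, c in zip(groups, counts)}
    return np.array([per_group[g] for g in group])

# Hypothetical dataset in which group 1 is heavily under-represented.
group = np.array([0] * 900 + [1] * 100)
weights = balanced_sample_weights(group)
print(weights[0], weights[-1])  # ~0.56 for the majority, 5.0 for the minority
# Such weights can be passed to many training APIs, for example via the
# sample_weight argument of a scikit-learn estimator's fit method.
```

Reweighting is only one option; collecting more representative data remains the more fundamental remedy.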

15 Another key aspect consists of making AI systems transparent and explainable so that users can spot potential biases and understand how decisions are made. With strategies such as model interpretability and transparency tools, AI systems can be made more trustworthy and accountable (Balasubramaniam et al., 2023); one such generic technique is sketched below.
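
As an illustrative example of such an interpretability strategy, here is a sketch of permutation feature importance, a generic technique not taken from the cited reference: it estimates how much a model's accuracy drops when each input feature, such as a proxy for a sensitive attribute, is shuffled. The model interface (a scikit-learn-style predict method) is an assumption.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance as the mean drop in accuracy
    when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's link to the labels
            drops.append(baseline - (model.predict(X_perm) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

# A large accuracy drop for a feature that proxies gender or ethnicity
# would be a red flag that the model leans on that attribute.
```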

16 Moreover, legislators, ethicists, and society at large should share accountability for resolving cognitive biases in AI, rather than just AI developers (Omrani et al., 2022). It is desirable to increase awareness and understanding of cognitive biases in AI among developers, users, and policymakers. Educational programs and materials can support ethical AI development practices and foster discussion on the potential social effects of AI.

17 To guarantee that AI technologies are created and utilized responsibly, considering their possible biases and social repercussions, interdisciplinary collaboration and strong regulatory frameworks are essential. Solving cognitive biases in AI is a multifaceted challenge that requires a combination of technical, ethical, and regulatory approaches. Clearly defined ethical guidelines and regulations should govern the development and deployment of AI systems; these guidelines ought to cover matters such as fairness, accountability, transparency, and privacy to ensure that AI technologies benefit the community at large.

18 These biases underscore the importance of careful design, evaluation, and ongoing monitoring of AI systems to mitigate their impact and ensure fairness, transparency, and reliability in decision-making processes (Russo-Spena et al., 2019).

19 Furthermore, while reducing cognitive biases in AI is important, eliminating them entirely might be neither possible nor desirable. Attempts to remove certain biases may jeopardize the efficiency or efficacy of AI systems, since some biases may be deeply ingrained in decision-making processes. As a result, a nuanced strategy that strikes a balance between practical concerns and bias reduction is required.

20 In conclusion, even if cognitive biases in AI are a real problem, solving them calls for an all-encompassing strategy that involves experts from different fields and emphasizes ethical considerations in addition to technological fixes. By doing this, we can encourage the creation of AI technologies that are not just advanced but also ethical, equitable, and beneficial to society as a whole.

21 Based on a comprehensive, multidisciplinary, and multi-stakeholder approach encompassing legislators, AI developers, and users, a trusted ecosystem might be established. This strategy could ensure that issues are identified, deliberated, and settled collaboratively. A collaborative and interdisciplinary strategy of this kind is expected to provide the most favorable outcomes and create a comprehensive ecosystem for reliable artificial intelligence (Rossi, 2018).

22 However, one also has to acknowledge that cognitive biases exist in many other domains; they are nothing unique to AI. For instance, earlier studies show such a bias toward innovations from locations that are geographically close (Brem, Nylund, 2021). Biases are everywhere, and they limit the advancement of knowledge in general (Thompson, Griffiths, 2021). A machine has no knowledge per se; it comprises only the data it has been trained with. So our domain knowledge will still be needed to determine whether information is correct, whether it contains biases, and so on (Brem, 2023).

23 We will need all societal actors to be involved in the future development of AI. It is highly risky to leave this only to a small number of companies with “black box” developments, where nobody really knows whether even the developers themselves understand their systems.

References

  • ANGWIN, J., LARSON, J., MATTU, S., KIRCHNER, L. (2022), Machine Bias, in Ethics of Data and Analytics, Auerbach Publications, 254-264.
  • ASHMORE, R., CALINESCU, R., PATERSON, C. (2019), Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges, ArXiv.
  • BAEZA-YATES, R. (2018), Bias on the Web, Communications of the ACM, 61(6), 54-61.
  • BARLATIER, P. J., GEORGET, V., PENIN, J., RAYANA, T. (2024), The Origin, Robustness, and Future of Responsible Innovation, Journal of Innovation Economics & Management, 43(1), 1-38.
  • BALASUBRAMANIAM, N., KAUPPINEN, M., RANNISTO, A., HIEKKANEN, K., KUJALA, S. (2023), Transparency and Explainability of AI Systems: From Ethical Guidelines to Requirements, Information and Software Technology, 159, 107197.
  • BAZERMAN, M. H., MOORE, D. A. (2013), Judgement in Managerial Decision Making, London, Wiley.
  • BAZIN, Y. (2023), Making Artificial Intelligence More Sustainable: Three Points of Entry into an Ethical Black Box, Journal of Innovation Economics & Management, (0), I-XVIII.
  • BREM, A. (2023), Artificial Intelligence in Engineering Management: An Editor’s Perspective, IEEE Engineering Management Review, 51(2), 6-8.
  • BREM, A., NYLUND, P. A. (2021), Home Bias in International Innovation Systems: The Emergence of Dominant Designs in the Electric Vehicle Industry, Journal of Cleaner Production, 321, 128964.
  • BUIJSROGGE, A. (2014), Bias in Interview Judgments of Stigmatized Applicants: A Dual Process Approach, PhD Dissertation, Ghent University.
  • CALIKLI, G., BENER, A. B. (2013), Influence of Confirmation Biases of Developers on Software Quality: An Empirical Study, Software Quality Journal, 21(2), 377-416.
  • CALISKAN, A., BRYSON, J. J., NARAYANAN, A. (2017), Semantics Derived Automatically from Language Corpora Contain Human-Like Biases, Science, 356(6334), 183-186.
  • HARRIS, G. C. (2020), Mitigating Cognitive Biases in Machine Learning Algorithms for Decision Making, in Companion Proceedings of the Web Conference 2020, April 2020, 775-781.
  • KLIEGR, T., BAHNÍK, Š., FÜRNKRANZ, J. (2021), A Review of Possible Effects of Cognitive Biases on Interpretation of Rule-Based Machine Learning Models, Artificial Intelligence, 295, 103458.
  • KORDZADEH, N., GHASEMAGHAEI, M. (2022), Algorithmic Bias: Review, Synthesis, and Future Research Directions, European Journal of Information Systems, 31(3), 388-409.
  • OMRANI, N., RIVIECCIO, G., FIORE, U., SCHIAVONE, F., AGREDA, S. G. (2022), To Trust or Not To Trust? An Assessment of Trust in AI-Based Systems: Concerns, Ethics and Contexts, Technological Forecasting and Social Change, 181, 121763.
  • O’NEIL, C. (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group.
  • RASTOGI, C., ZHANG, Y., WEI, D., VARSHNEY, K. R., DHURANDHAR, A., TOMSETT, R. (2022), Deciding Fast and Slow: The Role of Cognitive Biases in AI-Assisted Decision-Making, Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1-22.
  • ROSSI, F. (2018), Building Trust in Artificial Intelligence, Journal of International Affairs, 72(1), 127-134.
  • RUSSO-SPENA, T., MELE, C., MARZULLO, M. (2019), Practising Value Innovation through Artificial Intelligence: The IBM Watson Case, Journal of Creating Value, 5(1), 11-24.
  • SALAWU, K., HAMMEDI, W., CASTIAUX, A. (2019), What about Passive Innovation Resistance? Exploring User’s Resistance to Technology in the Healthcare Sector, Journal of Innovation Economics & Management, 30, 17-37.
  • SALMAN, I., TURHAN, B., VEGAS, S. (2019), A Controlled Experiment on Time Pressure and Confirmation Bias in Functional Software Testing, Empirical Software Engineering, 24(4), 1727-1761.
  • SCHWARTZ, R., VASSILEV, A., GREENE, K., PERINE, L., BURT, A., HALL, P. (2022), Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST Special Publication 1270.
  • SOLEIMANI, M., INTEZARI, A., PAULEEN, D. J. (2022), Mitigating Cognitive Biases in Developing AI-Assisted Recruitment Systems: A Knowledge-Sharing Approach, International Journal of Knowledge Management (IJKM), 18(1), 1-18.
  • THOMPSON, B., GRIFFITHS, T. L. (2021), Human Biases Limit Cumulative Innovation, Proceedings of the Royal Society B, 288(1946), 20202752.
  • TIMMERMANS, J., YAGHMAEI, E., STAHL, B. C., BREM, A. (2017), Research and Innovation Processes Revisited: Networked Responsibility in Industry, Sustainability Accounting, Management and Policy Journal, 8(3), 307-334.
  • TVERSKY, A., KAHNEMAN, D. (1974), Judgment under Uncertainty: Heuristics and Biases, Science, 185(4157), 1124-1131.
  • WANG, W., SIAU, K. (2019), Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda, Journal of Database Management, 30, 61-79.
  • ZHANG, H., FEINZIG, S., RAISBECK, L., MCCOMBE, I. (2019), The Role of AI in Mitigating Bias to Enhance Diversity and Inclusion, IBM Smarter Workforce Institute Report, 15.

Published online: 17/05/2024.

https://doi.org/10.3917/jie.044.0223
