13 March 2025
Bilbao Campus
Lucía Vicente Holgado has been distinguished with the Ignacio Ellacuría Extraordinary Award for the best PhD thesis 2024 for “Inheritance of bias: Influence of artificial intelligence biases on human decisions”, a work directed by Helena Matute, within the doctoral program in Psychology.
Artificial intelligence learns from humans. However, until now no research had empirically examined whether humans can, in turn, learn from their interaction with artificial intelligence. Given the increasing presence of this technology in professional and domestic settings, the opportunities for the transmission of knowledge, beliefs and behaviors in human-machine interactions are multiplying.
This thesis is among the first studies to examine whether people can learn from a period of interaction with an AI. In particular, it focuses on a potential risk hidden in this phenomenon: "people can acquire biases in their interaction with artificial intelligence".
Artificial intelligence tools demonstrate skill and accuracy that surpass those of humans in many tasks. For this reason, the implementation of this technology in professional settings is increasing, under the premise that collaboration between a human and an artificial intelligence will yield better results than either agent could achieve working separately. Following this synergistic approach of mutual empowerment and improvement, AI-assisted individuals are expected to run a lower risk of error in their decision-making.
In practice, this means that more and more critical decisions, in areas such as medicine, justice or human resources, are made by teams consisting of a human and an AI. However, artificial intelligence is not infallible; it can also be biased and make mistakes. This research defines bias as a systematic error: one that always occurs in the same direction and is therefore predictable.
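The distinction between random error and systematic bias can be illustrated with a minimal simulation. This sketch is not from the thesis; the numbers (a true value of 10 and a fixed offset of +2 for the biased estimator) are arbitrary assumptions chosen for illustration:

```python
import random

random.seed(0)

true_value = 10.0
n = 10_000

# Unbiased estimator: errors fluctuate around zero in both directions.
unbiased = [true_value + random.gauss(0, 1) for _ in range(n)]

# Biased estimator: a systematic error that always pushes in the same
# direction (here, +2), and is therefore predictable.
biased = [true_value + 2.0 + random.gauss(0, 1) for _ in range(n)]

mean_unbiased_error = sum(x - true_value for x in unbiased) / n
mean_biased_error = sum(x - true_value for x in biased) / n

print(f"mean error, unbiased: {mean_unbiased_error:+.2f}")
print(f"mean error, biased:   {mean_biased_error:+.2f}")
```

Averaged over many estimates, the unbiased errors cancel out, while the biased errors do not: they accumulate in one direction, which is what makes a bias both harmful and, in principle, correctable.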
Artificial intelligence learns from data that record past human decisions. AI therefore learns from humans and assimilates their biases, which means that, like them, it can err in its recommendations. Previous research has demonstrated the ability of this technology to influence people's decisions. In fact, several studies have observed that people tend to place excessive trust in AI, uncritically accepting its advice. This evidence calls into question people's ability to monitor AI effectively and counteract its erroneous or biased decisions.
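How a model trained on a historical record can inherit a human bias can be sketched with a deliberately tiny example. The data and the "model" here are hypothetical, invented for illustration and not taken from the thesis: past decisions favored group A over group B, and a trivial majority-rule predictor trained on that record simply reproduces the pattern.

```python
from collections import Counter

# Hypothetical historical record of human decisions: (group, outcome) pairs.
# Suppose past decision-makers systematically favored group A over group B.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(records):
    """A trivial 'AI': predict the majority outcome seen for each group."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)
```

The model ends up recommending outcome 1 for group A and outcome 0 for group B: the historical bias has been assimilated and will now be applied to every future case, systematically and predictably.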
A biased AI could therefore, in turn, bias the decisions of humans who take advice from the machine. A serious consequence may arise from this phenomenon: "after interacting for a long time with a biased AI, people could end up reproducing this bias in their future decisions, even long after the end of their collaboration with the system". If such a risk of bias transmission from AI to humans exists, it would be necessary to find out which strategies could mitigate it. The awarded PhD thesis tries to answer these two fundamental questions.