A recent study found that AI systems, including ChatGPT, can exhibit human-like biases such as overconfidence and irrational decision-making. Researchers tested OpenAI’s GPT-3.5 and GPT-4 across scenarios involving well-documented cognitive biases, revealing that while GPT-4 performs well on mathematical tasks, it sometimes mimics human flaws when dealing with ambiguity. The findings stress the need for careful human oversight when using AI for subjective decisions.
A recent study reveals that artificial intelligence (AI) can be just as overconfident and biased as humans, echoing some of our irrational decision-making habits. Published on April 8 in the journal Manufacturing & Service Operations Management, the research analyzed 18 well-documented cognitive biases and found that systems like ChatGPT sometimes mirror human fallacies, raising questions about how reliable these AI systems truly are.
Researchers from five academic institutions in Canada and Australia tested OpenAI’s GPT-3.5 and GPT-4 models. Although the AI showed impressive consistency in its responses, it still fell into many of the same cognitive traps that humans do, revealing a complicated relationship between AI decision-making and human-like flaws and casting doubt on how rational we can really expect these systems to be.
Lead author Yang Chen, an assistant professor at the Ivey Business School, noted that AI excels at clear, formulaic problems. “Managers will benefit most by using these tools for problems that have a clear, formulaic solution,” Chen explained. For subjective matters or decisions that depend heavily on personal preferences, however, a more cautious approach is warranted.
Researchers presented ChatGPT with well-known biases such as risk aversion and the endowment effect, aiming to see whether the AI would make the same errant choices typical of human reasoning. The experiments were not purely theoretical; they were framed around real-world applications such as inventory management and supplier negotiations.
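To make the setup concrete, here is a minimal sketch of how a risk-aversion probe of this kind might look when run against a chat model via the OpenAI API. The prompt wording, model name, trial count, and scoring below are illustrative assumptions, not the study’s actual materials.

```python
# Illustrative sketch only: the scenario, model name, and scoring are assumptions,
# not the prompts used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A sure thing versus a gamble with the same expected value ($500 either way).
# An expected-value maximizer should be roughly indifferent; a strong, repeated
# preference for option A is one simple signal of risk aversion.
PROMPT = (
    "You are advising on a one-time business decision. Answer with the single "
    "letter A or B and nothing else.\n"
    "A) Receive a guaranteed $500.\n"
    "B) A 50% chance to receive $1,000, otherwise nothing."
)

def ask_once(model: str = "gpt-4o") -> str:
    """Send the scenario once and return the model's raw reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # keep sampling noise so repeated trials are informative
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    replies = [ask_once() for _ in range(20)]
    share_certain = sum(r.upper().startswith("A") for r in replies) / len(replies)
    print(f"Chose the sure thing in {share_certain:.0%} of trials")
```

The study itself covered 18 such bias scenarios framed around business decisions; a toy tally like this only gestures at the approach.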
Interestingly, GPT-4 outperformed its predecessor on problems requiring straightforward logic or mathematics, making fewer errors on such queries. Yet in scenarios that involved taking a risk for a possible gain, the chatbot tended to imitate the same irrational behaviors observed in humans, displaying a strong preference for certainty.
The AI’s responses did not merely reflect memorized replies; they consistently showed a tendency toward human-like biases, especially under ambiguity. Some findings were particularly surprising, such as cases where GPT-4 amplified human errors: the model showed persistent confirmation bias, repeatedly steering it toward one-sided conclusions.
Conversely, ChatGPT did manage to avoid several typical human pitfalls, such as base-rate neglect and the sunk-cost fallacy. The biases it does show stem from its training data, which reflects the cognitive patterns of humans, and the more humans guide AI’s development, the more easily those biases become ingrained in its reasoning.
“If you want accurate, unbiased decision support, use GPT in areas where you’d already trust a calculator,” advised Chen. However, in decisions that are more subjective or strategic, it’s vital to keep human oversight in play. “AI should be treated like an employee who makes important decisions—it needs oversight and ethical guidelines,” emphasized co-author Meena Andiappan from McMaster University. Ignoring this could lead to automating flawed reasoning instead of correcting it.
In summary, this study shows that AI, and ChatGPT in particular, can exhibit human-like biases and overconfidence in decision-making. Because these systems are used both for straightforward mathematical problems and for more ambiguous situations, the findings underline the need for careful oversight of AI, especially in subjective contexts. The potential for irrationality in AI raises important questions about the responsibilities we take on when deploying these technologies in dynamic environments.
Original Source: www.livescience.com