
AI Engineer Expresses Distrust in Technology, Advocates for Neurosymbolic AI


An AI engineer voices skepticism over the reliability of artificial intelligence, citing persistent issues with hallucinations in Large Language Models (LLMs). The article advocates for symbolic reasoning and a hybrid approach known as neurosymbolic AI to ensure trust and transparency, especially in regulated industries such as healthcare and law. This approach could offer clearer and more reliable outputs, addressing the current limitations of AI.

Despite working as an AI engineer, I still find myself grappling with the trustworthiness of artificial intelligence. A major hurdle for AI, especially generative models like Large Language Models (LLMs), is the issue of hallucinations – where these systems produce inaccurate or nonsensical outputs. Developers are pouring significant resources into fixing these problems; however, recent models from OpenAI have been reported to hallucinate even more frequently than their predecessors. This raises some eyebrows, especially considering how crucial accuracy is in sectors like healthcare and law.

The unfortunate truth is that these AI systems often operate as ‘black boxes.’ Even when they produce seemingly plausible answers, their hallucinations are invisible to anyone who isn’t deeply versed in the subject matter. That makes the challenge acute for industries that depend on accurate, explainable outputs. If LLMs are fundamentally untrustworthy and getting worse, what’s the path forward?

LLMs have indeed transformed the landscape of artificial intelligence, leveraging predictive algorithms to generate text-based responses. The problem lies in their unpredictability; like a gambler at the racetrack, these models can misfire despite accounting for numerous variables. When they produce incorrect responses, it’s referred to as a ‘hallucination’ – a flaw that seems baked into the very fabric of current models. In high-stakes applications, like law or medicine, hallucinations could lead to disastrous outcomes.

There’s a flicker of hope, though. OpenAI has hinted that current models might not possess the solution to these issues, but there’s another approach: symbolic reasoning. This older paradigm uses clear, logical rules to encode knowledge, avoiding the pitfalls of misinterpretation. Think of it like Excel—it performs calculations based on defined formulas, and you don’t have to second-guess its results. Unlike LLMs, whose outputs are uncertain, symbolic reasoning offers a clear path and eliminates hallucinations.
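To make the Excel comparison concrete, here is a minimal sketch of what rule-based, symbolic reasoning looks like in practice. The rules and thresholds below are invented for illustration, not drawn from any real clinical guideline; the point is that every conclusion can be traced back to an explicit rule, so there is nothing to hallucinate.

```python
# A minimal sketch of symbolic (rule-based) reasoning. The rules and numbers
# are hypothetical, used only to show that each decision follows from an
# explicit, inspectable condition rather than a statistical guess.

def assess_dosage(age: int, weight_kg: float, dose_mg: float) -> str:
    """Check a proposed dose against hard-coded, illustrative rules."""
    # Rule 1: younger patients get a weight-based cap.
    if age < 12 and dose_mg > 10 * weight_kg:
        return "REJECT: exceeds weight-based limit for patients under 12"
    # Rule 2: absolute ceiling for everyone else.
    if dose_mg > 1000:
        return "REJECT: exceeds absolute maximum"
    # If no rule fires, the dose satisfies every encoded constraint.
    return "ACCEPT: within all encoded limits"

print(assess_dosage(age=8, weight_kg=25.0, dose_mg=400.0))   # REJECT: exceeds weight-based limit
print(assess_dosage(age=40, weight_kg=80.0, dose_mg=400.0))  # ACCEPT: within all encoded limits
```

Like a spreadsheet formula, the output is the same every time for the same input, and anyone can read the rules to see why a given answer was produced.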

Switching gears to ’neurosymbolic AI’ could be a game-changer. This emerging hybrid marries the flexibility of LLMs with the rule-based logic of symbolic AI, allowing for nuanced processing of unstructured information while maintaining transparency. This might just bridge the trust gap, especially in heavily-regulated fields where understanding the logic behind decisions is critical. In applications like insurance, a neurosymbolic model could quickly evaluate claims and confidently escalate uncertain cases to human reviewers—something LLMs are less likely to do.
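The insurance example above can be sketched in a few lines. In this hypothetical pipeline, an LLM (stubbed out here) converts unstructured claim text into structured fields, and deterministic rules then make the decision, escalating anything they cannot settle to a human reviewer. The field names and thresholds are assumptions made for illustration.

```python
# A rough sketch of the neurosymbolic pattern: a neural model handles the
# unstructured input, while symbolic rules make the final, explainable call.
# All field names and limits are invented for this example.

def extract_claim_fields(claim_text: str) -> dict:
    # Stand-in for an LLM call that parses free text into structured data.
    # A real system would validate this output before trusting it.
    return {"amount": 12500.0, "policy_active": True, "incident_documented": False}

def decide_claim(fields: dict) -> str:
    if not fields["policy_active"]:
        return "DENY: policy inactive"                        # hard rule, fully explainable
    if fields["amount"] <= 5000 and fields["incident_documented"]:
        return "APPROVE: small, documented claim"             # hard rule, fully explainable
    return "ESCALATE: outside automatic rules, route to a human reviewer"

fields = extract_claim_fields("Rear-end collision on 3 May, repair estimate attached...")
print(decide_claim(fields))  # ESCALATE: outside automatic rules, route to a human reviewer
```

The crucial difference from a pure LLM is the final branch: when no rule applies, the system says so and hands the case to a person instead of guessing.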

As we confront the limitations of current models, it’s clear that we can’t just keep hoping for improvements. Rather, we must adapt our approaches and explore new avenues—like neurosymbolic AI—that promise to restore trust in what is, ironically, a tech designed to empower us. The lessons learned along the way will hopefully lay the groundwork for a trustworthy future in artificial intelligence.

In summary, while the technology surrounding AI has made significant strides, the challenges—specifically hallucinations and a lack of transparency—continue to hinder trust in these systems. Symbolic reasoning presents a viable alternative to address these pitfalls. Combining it with LLMs into neurosymbolic AI could pave the way for a new era where AI is not only effective but also reliable and understandable, particularly in sectors where precision is not just important, but vital.

Original Source: www.techradar.com

Liam Kavanagh is an esteemed columnist and editor with a sharp eye for detail and a passion for uncovering the truth. A native of Dublin, Ireland, he studied at Trinity College before relocating to the U.S. to further his career in journalism. Over the past 13 years, Liam has worked for several leading news websites, where he has produced compelling op-eds and investigative pieces that challenge conventional narratives and stimulate public discourse.
