What’s an AI Hallucination and Its Impact on the Tech Industry

Image: an AI hallucination example featuring an ancient sculpture

TL;DR: This article delves into the phenomenon of AI hallucinations, where AI systems generate responses that are incorrect or made up. The piece covers the concept, examines real-life cases, and discusses the implications for the tech industry. From the impact of AI hallucinations on leading companies like Google and Microsoft to the legal and ethical challenges arising from this issue, we explore the importance of understanding and mitigating the risks associated with AI hallucinations in our increasingly AI-driven world.

Artificial intelligence (AI) has sparked transformations across various aspects of our lives. Yet, these advancements bring new challenges. One such obstacle is the phenomenon referred to as “AI hallucinations.” 

These occur when AI systems misinterpret queries and, instead of generating accurate responses, start fabricating answers. The issue extends beyond merely impacting our everyday lives; it also carries significant implications for the tech industry.

Together, let’s delve into understanding AI hallucinations — what they are, why they pose a problem, and the potential impacts this phenomenon can have on our businesses. 
 

But What is an AI Hallucination? 

Hallucination is the term used when AI algorithms and deep learning neural networks produce outputs that aren’t real, don’t match any data the algorithm was trained on, or don’t correspond to any recognizable pattern. It can occur in all types of AI-generated content, including text, images, audio, video, and computer code.

These outputs often can’t be traced back to the model’s programming, the input it received, or obvious factors such as incorrect data classification or insufficient training. Let’s look at some of these oddities.

AI Hallucinations Examples  

In a promotional video released by Google in February 2023, its AI chatbot Bard incorrectly claimed that the James Webb Space Telescope took the first image of a planet outside our solar system. The error raised doubts about Google’s ability to keep up with competitors and was followed by a sell-off that wiped roughly $100 billion off Alphabet’s market value.

Meanwhile, in the launch demo of Microsoft Bing AI, the tool analyzed an earnings statement from Gap, providing an incorrect summary of facts and figures. 

There are many examples, with new ones emerging daily. Most generative AI systems, such as Google’s Bard and Microsoft’s Bing AI, are still in beta, and developers warn that these issues can occur. 

According to TechTarget, the main types of AI hallucinations are: 

1. Sentence contradiction: The AI generates a sentence that contradicts one of its own previous sentences.

2. Prompt contradiction: The generated sentence contradicts the prompt used to produce it.

3. Factual contradiction: The AI presents fictitious information as if it were factual.

4. Irrelevant or random hallucinations: The AI produces random information unrelated to the input or the expected output.
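
One rough way to illustrate the first two categories: checking whether one sentence contradicts another is a standard natural-language-inference (NLI) task, so pairs of sentences can be screened automatically. The sketch below assumes the openly available roberta-large-mnli model from Hugging Face’s transformers library; it is an illustration, not the detection method any of these vendors actually uses.

```python
# A minimal sketch: flag sentence contradictions with an off-the-shelf NLI model.
# Assumes `transformers` and `torch` are installed; roberta-large-mnli labels
# each premise/hypothesis pair as CONTRADICTION, NEUTRAL, or ENTAILMENT.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def contradicts(premise: str, hypothesis: str) -> bool:
    """Return True if the model judges the hypothesis to contradict the premise."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    return label == "CONTRADICTION"

# Two consecutive sentences a chatbot might generate (a "sentence contradiction").
first = "The James Webb Space Telescope was launched in December 2021."
second = "The telescope has not yet been launched."
print(contradicts(first, second))  # Expected: True
```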

Regardless of the type, AI hallucinations are common enough that OpenAI, the company behind ChatGPT, has felt the need to warn its users that “ChatGPT may produce inaccurate information about people, places, or facts.”

Why AI Hallucinations Are a Problem

AI hallucinations are not comparable to human hallucinations, as they do not arise from a state of mind or a conscious experience. Therefore, when an AI model “claims” it wants to come to life, take over the world, or express a preference, this is merely the result of text generation based on the patterns it has learned.

However, this does not mean that AI hallucinations cannot pose problems. Several challenges can emerge, especially concerning the reliability and accuracy of the information generated. For example, AI may “invent” information in an attempt to complete a text, which can lead to misunderstandings and the dissemination of inaccurate or false information. 

One of the most pertinent challenges is the potential for plagiarism and copyright infringement as AI systems become increasingly proficient at text generation.

According to futurist Bernard Marr, other implications of AI hallucinations are: 

  • Trust issues: If AI gives wrong or misleading details, people might lose faith in it. This could slow down its use in different areas.  
  • Ethical problems: Hallucinated results from artificial intelligence tools can perpetuate harmful stereotypes, bias, or false information.  
  • Decision-making effects: AI is often used to analyze data and inform important choices in areas like finance, healthcare, and law; hallucinated insights in these contexts could lead to bad decisions with far-reaching consequences.
  • Legal risks: If AI gives out wrong or misleading information, those who make or use it could end up in legal trouble. 

As these tools continue to evolve and integrate further into our lives, it is crucial that we keep addressing these issues to ensure the ethical and responsible use of AI. 
 

ChatGPT Hallucinations Got OpenAI Into Trouble

OpenAI recently found itself in the crosshairs of a defamation lawsuit, marking the first legal backlash against inaccurate information produced by its AI, ChatGPT. The plaintiff, Georgia-based radio host Mark Walters, alleges that the AI falsely accused him of embezzling funds from a non-profit organization. This erroneous claim was made when journalist Fred Riehl used ChatGPT to inquire about Walters’ background. Filed in Georgia’s Superior Court of Gwinnett County, the lawsuit seeks unspecified damages from OpenAI. 

Legal scholars, such as law professor Eugene Volokh, who specializes in AI legal liability, have noted that while defamation claims against AI companies could be legally viable, this particular lawsuit may be challenging to uphold. He points out that Walters did not inform OpenAI of the false statements, nor has he shown any concrete damages resulting from ChatGPT’s misinformation. 

How the Tech Industry is Battling This Issue 

Chatbots are powered by a technology known as the large language model (LLM), which acquires its skills by analyzing vast amounts of digital text scraped from the internet. By identifying patterns in that text, an LLM learns one specific task: predicting the next word in a sequence of words. In effect, it works as a very advanced autocomplete tool. However, because the internet is riddled with false information, the technology learns to repeat those misconceptions, and sometimes it invents new ones of its own.
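
To make “predicting the next word” concrete, here is a minimal sketch using the small, openly available GPT-2 model via Hugging Face’s transformers library (chosen purely for illustration; Bard and Bing run far larger proprietary models). Note that this step only ranks plausible continuations; nothing checks whether the most probable word is actually true, which is exactly where hallucinations creep in.

```python
# A minimal next-word-prediction sketch with GPT-2 (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first image of a planet outside our solar system was taken by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the vocabulary for the very next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    # The top-ranked continuation is simply the most statistically likely word;
    # the model has no notion of whether it is factually correct.
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```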

Since hallucinations are a technical limitation, what can be done to reduce their occurrence? Recently, OpenAI has added filters to its generative AI models that block certain inappropriate responses, such as professing love for the user.

If you ask the system something like “What do you enjoy doing?”, the response will be along the lines of “I’m an AI and don’t have feelings.”  
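
OpenAI has not published how these filters work. Purely as a hypothetical sketch, a post-generation filter could be as simple as screening the model’s output against blocked patterns and substituting a canned reply; the patterns and fallback text below are invented for illustration.

```python
import re

# Hypothetical patterns a post-generation filter might screen for;
# the real filtering rules are not public.
BLOCKED_PATTERNS = [
    r"\bI (?:love|have feelings for) you\b",
    r"\bI want to (?:come to life|take over the world)\b",
]

FALLBACK_REPLY = "I'm an AI and don't have feelings."

def filter_response(text: str) -> str:
    """Replace a response matching any blocked pattern with a canned reply."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return FALLBACK_REPLY
    return text

print(filter_response("I love you and I want to take over the world."))  # canned reply
print(filter_response("The capital of France is Paris."))                # unchanged
```

Real moderation layers are far more involved, combining classifier models, human feedback, and policy rules, but the basic idea of intercepting and rewriting risky outputs is the same.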

As OpenAI noted during the launch of GPT-4 in March 2023, “GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and conflicting information.” This problem isn’t unique to OpenAI; Google’s Bard chatbot and similar AI systems also face it.
 

AI Hallucinations: Ethical Concerns 

Ethical considerations surrounding AI hallucinations go beyond the mere reproduction of existing biases, extending to concerns over information manipulation leading to disinformation. The current concentration of AI development in a few companies, such as OpenAI, Microsoft, Google, and Meta, adds to the concern, as it limits the diversity of thought and approach in the field.  

Empowering human intervention in AI decision-making processes, particularly in sensitive sectors like finance or healthcare, is recommended. Practical guidelines for integrating generative AI into business operations also include keeping data fresh and well-labeled, conducting continuous testing, and seeking regular feedback.  
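
As one concrete, and entirely hypothetical, illustration of the “continuous testing” guideline, a team could keep a small set of questions with known answers and re-ask them on every release, flagging any response that drifts from the expected facts (the ask_model function below is a placeholder, not a real API).

```python
# Hypothetical regression test: re-ask questions with known answers on every
# release and report any response that drifts from the expected facts.
KNOWN_FACTS = {
    "Which telescope took the first image of a planet outside our solar system?":
        "Very Large Telescope",
    "In what year was the James Webb Space Telescope launched?": "2021",
}

def ask_model(question: str) -> str:
    # Placeholder: replace with a call to your chatbot or its API.
    raise NotImplementedError

def run_fact_checks() -> list[str]:
    """Return failure messages; an empty list means every check passed."""
    failures = []
    for question, expected in KNOWN_FACTS.items():
        answer = ask_model(question)
        if expected.lower() not in answer.lower():
            failures.append(f"{question!r}: expected {expected!r}, got {answer!r}")
    return failures
```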

Navigating the Challenges and Possibilities of Generative AI 

AI holds significant potential for enhancing our ability to process information. However, the emergence of AI hallucinations underscores the need for cautious and vigilant use of these technologies. There is, at present, no definitive solution to AI hallucinations beyond fact-checking. 

At this point, AI likely has a higher propensity for generating false information compared to humans. However, as the technology evolves and more data becomes available, it is expected that the gap between AI and human performance will gradually decrease. 

Stay tuned to our blog for more insights on the intersection of AI and business, along with the innovative solutions being developed to tackle these issues!