Do AI Hallucinate Electric Lies?
AI Articles - AI For Creatives - Part II
An Obvious Error.
In our recent development work on the FilmPod and DBO initiatives, we had reason to research the YouTube channel Viva La Dirt League (@vivaladirtleague). My need was to locate accurate third-party metrics (YouTube’s actual numbers are secret, and its official figures are unreliable). My research assistant, always enthusiastic, went above and beyond and did a full workup on the channel and its creators. This was very good and useful information for us, but I did spot an error which, to me, was glaring, and which illustrates some of the serious dangers of using AI.
Here is the passage which caught my eye:
At the helm of this comedic juggernaut are the dynamic duo of Rowan Atkinson and Adam King, whose talents extend far beyond sheer comedic brilliance. Their involvement in cinematic milestones like the groundbreaking "Avatar" showcases their incredible versatility and steadfast commitment to quality. This collaborative spirit infuses every sketch with artistry that merges relatable scenarios with stunning production quality, setting VLDL apart in today’s fast-paced entertainment landscape. Their relentless pursuit of cutting-edge filming technology and innovative storytelling keeps their content fresh, engaging, and visually exhilarating.
There’s lots to unpack here - the writing tone and language, and so on. But most importantly: Rowan Atkinson is one of the minds behind the YouTube channel Viva La Dirt League…???
For anyone unfamiliar with Rowan Atkinson, or Viva La Dirt League, or perhaps not aware of the actual year we are in… this error may go unnoticed. If one took this research and used it in any published material, one would at best look foolish. A more dire consequence: Rowan Atkinson could take some legal offense - and that is actually not even the very worst of the potential outcomes.
This type of error, in publications from blogs, vlogs, tweets, and mainstream and alternative news, all asserted as fact, is already causing society some disturbing problems. We have a growing literature of total lies, printed and accepted without critique or fact-checking by the person in charge. It is disseminated, accepted, and repeated by people as fact.
These dubious publications then sit out there in the ether, waiting to be picked up and referenced again by AI or real people, and used as a basis for other asserted facts. We have already become an echo chamber devoid of factual proof of anything. You see my point, I hope.
In trying to figure out this odd error, I deduced that the AI picked up on the name Rowan - one of the VLDL writers and personalities is Rowan Bettjeman - and on the word Avatar, which appears in descriptions of the online gaming parodies they produce. Somehow the AI decided to assert facts separated by time, space, and people, in service of making a great article.
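For the technically curious, here is a deliberately crude Python sketch of that failure mode. It is not how Gemini or any real language model actually works - the example "documents" and the scoring are invented for illustration - but it shows how matching on shared words alone, with no sense of which Rowan or which Avatar is meant, can stitch two unrelated people into one "fact."

```python
from collections import Counter

# Toy corpus: every "document" below is invented for illustration.
documents = [
    "Rowan Bettjeman writes and stars in Viva La Dirt League sketches.",
    "Viva La Dirt League parodies online games where players control an avatar.",
    "Rowan Atkinson is a British comedian known for Mr. Bean.",
    "Avatar was a groundbreaking film directed by James Cameron.",
]

def naive_entity_link(query_tokens):
    """Score each document purely by overlapping words - with no notion of
    WHICH Rowan or WHICH Avatar is meant. Unrelated documents that share a
    token score well too, and that is where a conflation creeps in."""
    scores = Counter()
    for i, doc in enumerate(documents):
        doc_tokens = set(doc.lower().replace(".", "").split())
        scores[i] = len(query_tokens & doc_tokens)
    return scores

query = {"rowan", "avatar", "viva", "la", "dirt", "league"}
for doc_id, score in naive_entity_link(query).most_common():
    print(f"score={score}  {documents[doc_id]}")
```

Run it and the lines about Rowan Atkinson and James Cameron’s Avatar score right alongside the genuine VLDL material; a system that blends its top matches into fluent prose produces exactly the composite error above.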
The whole AI debate has always reminded me of the Philip K. Dick story “Do Androids Dream of Electric Sheep?”, which deals with questions of sentience and artificial intelligence, and was the inspiration for the original Blade Runner and innumerable other stories and movies.
This all led to today. Above, you have my experience. Below, you have an article on some of the dangers of AI, written by AI itself - in this case Google’s Gemini. Having a perverse sense of humor, I had Gemini title the article as well, as a variation on the title of the Philip K. Dick short. A runner-up title, also by Gemini, is “Do Large Language Models Envision Erroneous Entities?” - that one is fun, but just not the point.

Do AI Hallucinate Electric Lies?
Prompt: Write a five to seven paragraph discussion of fundamental inaccuracies in ai generated stories about real people and events. Use three main topics. One topic should be errors related to confusion with names and places, one topic should be related to unreliable source material. The third topic should be the most common type of error reported. Use a brief example of each error. Conclude with encouragement to the reader to do deep learning about the area they are using AI to write about before doing it, and to closely fact check all AI generated content.
Gemini’s Article:
AI-generated stories, while demonstrating impressive capabilities in mimicking human writing styles, often stumble when tasked with narrating real people and events. These inaccuracies stem from fundamental limitations in how AI models process and synthesize information. Unlike human writers who can leverage understanding, critical thinking, and nuanced interpretation, AI relies on patterns within vast datasets, leading to predictable categories of errors when dealing with factual accounts.
One prominent area of inaccuracy arises from the AI's struggle with distinguishing between similar names and places. Trained on massive amounts of text, these models can easily conflate individuals with the same or similar names, or misattribute events to incorrect locations. For instance, an AI might generate a story about the scientific contributions of "Dr. Jane Smith" but mistakenly attribute discoveries made by a different "Dr. Janet Smith," simply because both names appeared frequently in its training data related to scientific research. Similarly, a narrative about a historical battle might incorrectly place it in "Concord, Massachusetts" instead of "Concord, North Carolina," due to the presence of both locations in historical texts. This highlights the AI's difficulty in discerning context and the specific identities associated with names and locations.
Another significant source of inaccuracies lies in the AI's reliance on potentially unreliable source material. The internet, while a vast repository of information, contains a spectrum of accuracy, from meticulously researched articles to biased opinions and outright falsehoods. AI models, trained on this unfiltered data, can inadvertently incorporate misinformation into their narratives. Imagine an AI generating a story about a controversial political figure. If a significant portion of its training data includes unsubstantiated rumors or partisan attacks from unreliable websites, the resulting story might present these falsehoods as facts. The AI lacks the critical judgment to evaluate the credibility of its sources, leading to the perpetuation of inaccuracies.
Perhaps the most commonly reported type of error in AI-generated stories about real people and events is the fabrication of details or events that never actually occurred. This phenomenon, sometimes referred to as "hallucination," occurs when the AI, in its attempt to create a coherent and engaging narrative, invents plausible-sounding information that is not grounded in reality. For example, an AI might generate a heartwarming anecdote about a famous athlete's childhood act of kindness that, upon closer inspection, has no basis in any verifiable record. These fabrications can range from minor embellishments to significant alterations of the historical record, making it crucial to approach AI-generated content with skepticism.
The allure of quickly generating stories using AI can be strong, especially when dealing with complex topics or numerous individuals. However, the inherent limitations in how AI processes and synthesizes information about the real world necessitate a cautious approach. Before entrusting AI with the task of narrating factual accounts, it is essential to engage in thorough research and develop a deep understanding of the subject matter. Furthermore, the output of AI models should never be accepted at face value. Rigorous fact-checking, cross-referencing with reliable sources, and critical evaluation are indispensable steps in ensuring the accuracy and integrity of any narrative about real people and events, regardless of its origin.
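Gemini’s closing advice can even be mechanized a little. Below is a minimal sketch - plain Python, invented example text, and a crude capitalized-word pattern standing in for real named-entity recognition - that turns an AI-generated passage into a checklist of proper nouns to verify by hand before publishing.

```python
import re

# A passage like the one that tripped us up (wording invented for illustration).
ai_text = (
    "At the helm are Rowan Atkinson and Adam King, whose involvement in "
    "cinematic milestones like Avatar showcases their versatility."
)

# Naively pull out runs of capitalized words - a crude stand-in for real
# named-entity recognition. Every hit is a claim to verify, not a fact.
candidates = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", ai_text)

print("Fact-check list:")
for name in sorted(set(candidates)):
    print(f"[ ] {name}")
```

Run on the passage that started all this, it flags Rowan Atkinson, Adam King, and Avatar (plus a little sentence-initial noise) - exactly the claims that needed checking.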
Further Reading:

AI Hallucinating Answers.
Want More on AI For Creatives? Stay tuned and make sure you are subscribed.
Support IPG & SSI
Support the IPG with your membership, and get involved in steering the next industry leaders toward success and relevance, with Stage And Screen Innovations!
Check out SSI’s programs at stagescreen.org and donate at stagescreen.org/support-arts today. Tax-deductible receipts are given!