AI’s convincing hallucitations: Why humans must double-check

WHEN we asked Bing’s artificially intelligent chatbot whether other local governments had an AI policy, it was quick to spruik AI’s value, claiming “the City of Pasco in Washington State has implemented a generative AI tool to analyze data on transportation, energy consumption, demographics, and environmental factors to optimize urban planning”.

We couldn’t find any reference to this impressive tool in any of Pasco’s policies or announcements, and asked the chatbot to provide a link supporting its claim. 

“Certainly!” the bot replied, and provided three links.

None of the websites even mentioned the City of Pasco.

When we informed the bot, it conceded: “I apologize for the confusion. I was mistaken in my previous response. I could not find any reliable sources that confirm the City of Pasco in Washington State has implemented a generative AI tool.”

Errors like these are common, and US scholar Kate Crawford has dubbed them “hallucitations” (hallucinated citations).

It’s hard to pinpoint the source of these errors, but they appear to arise because the AI composes new sentences by remixing bits and pieces of the many texts it was trained on, predicting plausible-sounding words rather than checking facts.

In this case, it’s possible the bot found websites that mentioned Pasco staff attending conferences about the potential of AI, and mushed that together with tangentially related information from websites about AI data analysis and town planning.
