41 | AI Risk Primer — Hallucinations (Part 3)
We conclude this three-part series by discussing what’s being done in the AI world to address hallucinations. If you missed the prior two articles, check them out: Generative AI Hallucinations 1
Generative AI Hallucinations 2
Will It Always Be Like This?
In a word, NO.
In fact, when it comes to generative AI, I don’t even need to know what “it” is. Everything is changing in AI, so, as I’ve mentioned previously:
The AI we have today is the worst AI we’ll ever experience.
What’s Changing?
The tech firms behind the major foundation models (OpenAI, Google, Anthropic, etc.) are working feverishly to reduce the hallucinations their models generate. It’s also a major research area in academia. While the models still produce false responses with irritating regularity, the frequency is dropping with each new model release.
So, we’re going to see a continued reduction in hallucinations, even if our prompting behaviors stay the same. (Refer back to article 2 in this series for some solid tips to reduce hallucinations by changing your prompting habits.)
Also, more tools are being released that let you limit a model to a specific set of data as its information source. Larger organizations with the resources to build their own AI systems can already license a model to operate only on the organization’s own information, and doing so is becoming more affordable and easier to implement. For smaller organizations, though, this is cost-prohibitive and is likely to stay that way for the foreseeable future.
However, even in the chatbots that allow uploading documents (e.g., ChatGPT 4, Claude), you can instruct them to restrict their answers to your documents with a prompt such as:
Please refer only to the PDFs I uploaded when answering any questions I ask. Do not make any assumptions or use any outside information in your responses.
As a note, if you’re using ChatGPT 4 with the Plus subscription, you can create a custom GPT that includes this instruction, so it applies to every use of your GPT and helps prevent hallucinations.
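If you work with the models through an API rather than the chat interface, the same grounding instruction can be supplied as a system message. Below is a minimal sketch using the OpenAI Python SDK; the model name, document text, and question are placeholders I’ve chosen for illustration, not a recommendation of any specific setup.

```python
# Minimal sketch: ground the model's answers in documents you supply.
# Assumes the OpenAI Python SDK (v1.x) is installed and OPENAI_API_KEY is set.
# The model name, document text, and question below are placeholders.
from openai import OpenAI

client = OpenAI()

# In practice, this would be the extracted text of your uploaded PDFs.
document_text = "...contents of the organization's documents..."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the documents provided below. "
                "Do not make assumptions or use outside information. "
                "If the answer is not in the documents, say so.\n\n"
                f"DOCUMENTS:\n{document_text}"
            ),
        },
        {"role": "user", "content": "What does our travel policy say about airfare?"},
    ],
)

print(response.choices[0].message.content)
```

The key piece is the system instruction: like the prompt above, it tells the model to stay inside the material you supplied rather than reaching for whatever it absorbed in training.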
Why Should We Care?
Concerns about hallucinations are blocking many organizations from even trying AI, actual hallucinations are scaring people away from moving past early experiments, and even experienced AI users can get tripped up by a hallucination when they’re in a hurry or not careful.
I’ve shared this information about hallucinations to arm you with knowledge that may help you overcome the fear of trying AI, and hopefully to help you move forward in your own use with some better approaches.
Someday, hallucinations will likely be a relic of “the early days” of generative AI. Or perhaps I’m hallucinating myself … but we can hope!