LLMs don’t hallucinate, they make mistakes.

Do LLMs hallucinate? No: LLMs make mistakes, and there are strategies to mitigate the impact of those mistakes.


LLM hallucinations are the feature, not the issue.

There has been a lot of talk about LLM hallucinations, and we've even heard, "we can't put LLMs in production if they hallucinate, even 0.01% of the time." The thing to note here is that hallucination is the core feature of an LLM; it's the only thing it does! It hallucinates an output based on the input submitted, trying to predict the most likely continuation, one word at a time, to satisfy the request.
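
To make that concrete, here is a minimal sketch of next-token prediction. The Hugging Face transformers package and the small GPT-2 model are assumptions chosen purely for illustration: the point is that the model only ever scores possible continuations and picks among the most likely ones, whether or not they happen to be true.

```python
# Minimal sketch: an LLM only scores possible next tokens and picks among them.
# Assumes the `transformers` and `torch` packages; GPT-2 is a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The heavier of a pound of lead and two pounds of feathers is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for every token in the vocabulary

probs = torch.softmax(logits, dim=-1)         # convert scores into probabilities
top = torch.topk(probs, k=5)                  # the five most likely continuations
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok))!r}: {float(p):.3f}")
# Nothing in this loop checks whether the most probable continuation is actually true.
```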

First, what are LLM hallucinations?

"Hallucination" describes output from an LLM that is not consistent with observable or known truth. For example, if you ask, "When performing a bulk update operation in MongoDB, how do I get the list of actually modified documents?", the LLM may yield something like, "call bulkUpdateResult.getModifiedDocuments()". It looks plausible, but this operation does not exist in MongoDB's API, so the code will fail.
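
Here is what that looks like in practice with the Python driver (PyMongo, used here purely for illustration): the plausible-sounding method does not exist, and what a bulk write actually returns is a result object with counts, not the modified documents themselves.

```python
# Sketch of the hallucinated call vs. what PyMongo actually provides.
# Assumes a local MongoDB instance and the `pymongo` package.
from pymongo import MongoClient, UpdateOne

coll = MongoClient()["shop"]["orders"]

result = coll.bulk_write([
    UpdateOne({"status": "pending"}, {"$set": {"status": "shipped"}}),
])

# What an LLM might confidently suggest (no such method exists):
# result.getModifiedDocuments()   # AttributeError at runtime

# What the driver actually exposes: counts, not the documents themselves.
print(result.matched_count, result.modified_count)
```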

Another example, with early GPT-4: the question "Which is heavier, a pound of lead or two pounds of feathers?" led to the answer "although they don't look the same, they are the same weight," because the training data contains many instances of the famous riddle "Which is heavier: a pound of lead or a pound of feathers?", to which the answer is obvious.

Hallucinations or mistakes?

We should call "hallucinations" what they are: mistakes. LLMs make mistakes. And it is objectively a mistake, yet for the model the answer it wrote was simply the highest-probability response to the query. It didn't "hallucinate" that answer any more or less than any other answer; it's just that this particular answer is not right.

Why do LLMs hallucinate?

In many ways, LLMs "hallucinate" (or, more accurately, make mistakes) for the same reason humans sometimes do: the LLM believes it's giving the right answer, to the extent that it can believe anything. (Yes, there are other reasons humans make mistakes, commit errors, or even tell outright lies, but LLMs aren't subject to those, as far as we know.) Any software can make a mistake. We call it a bug, and bugs make software produce errors. We don't say the software is hallucinating because of some bugs in the code; we call it a bug and we fix it.

Calling it a "mistake" or an "error" allows us to focus on minimizing the error rate, just as we do for humans. We'll use that term going forward in this article.

“We cannot deploy a hallucinating LLM, we need 100% trust in the entity”

We often hear that an organization won't deploy LLMs if they hallucinate (that is, make mistakes). But we all deploy software that isn't 100% perfect, 100% of the time! All software has bugs, all humans make mistakes, and yet we rely on both to manage critical processes. We do this by creating resilient systems with controls and fault-tolerant approaches.

Creating resilient LLM-powered systems

There are many ways to minimize the number of mistakes an LLM makes and to mitigate the effect when they happen. In general, that means designing for errors and making sure there are multiple layers of checks and controls.

Here are a few strategies to build resilient, LLM-powered systems:

  • Add more context: inject relevant information into the query so the LLM has more to work with. This approach is usually called Retrieval-Augmented Generation (RAG), and it lets the LLM work from observed truth (the retrieved context) instead of probability alone. It's the equivalent of a human reading the documentation or a policy before writing code or a contract, or checking the budget and actuals before answering a finance question. This is the easiest and fastest way to get good answers, but it requires advanced retrieval capabilities (a minimal sketch follows this list).
  • Multi-head: when unsure, use LLM-powered or human-powered supervision to verify the output of models, just as a senior person on a team would check the work of junior team members (that's what we enable with Multi-head Synthetic LLM). Then fine-tune the junior models on the corrected answers to improve their skills (see the reviewer sketch after this list).
  • Label the output: make it clear to users and downstream systems that the output may be incorrect (unlike, say, a database query result), so they can make informed decisions.
  • Use output constraints: with libraries like .txt's Outlines, or simply with prompt instructions, make the model follow specific output constraints so its responses stay within bounds (useful for generating structured data or code, or for restricting answers to a limited vocabulary of entities, such as a customer list); see the constrained-output sketch after this list.
  • Specialize: train models on a set of related tasks and fine-tune them so they get better at that set of tasks and make fewer errors, just as you would expect a team to do by getting better at the domain they are in charge of.
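
Here is a minimal sketch of the RAG idea from the first bullet: retrieve the passages most relevant to a question and inject them into the prompt so the model answers from observed context rather than from memory alone. The toy keyword retriever, document list, and model name are illustrative assumptions; a real system would use embeddings, a vector store, and your LLM client of choice.

```python
# Minimal RAG sketch: ground the model in retrieved context before it answers.
# The retrieval step is a toy keyword scorer; the OpenAI client and model name
# are illustrative assumptions, not a recommendation.

DOCS = [
    "Refunds are available within 30 days of purchase with a receipt.",
    "Gift cards are non-refundable and never expire.",
    "Shipping to EU countries takes 5 to 7 business days.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Assumes the `openai` package and an OPENAI_API_KEY; swap in any client you prefer.
    from openai import OpenAI
    resp = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    prompt = (
        "Answer using ONLY the context below. If the context does not contain the answer, "
        "say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```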
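The reviewer pattern from the second bullet can be as simple as the sketch below: one model drafts, a second model checks the draft before it is returned, and corrected answers can later feed fine-tuning. This is a generic illustration under assumed model names and client, not Vertesia's Multi-head Synthetic LLM itself.

```python
# Generic "second pair of eyes" sketch: a reviewer model checks the drafting model's
# answer before it is returned. The `openai` client and model names are assumptions.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def answer_with_review(question: str) -> str:
    draft = ask("gpt-4o-mini", question)        # the "junior" model drafts an answer
    verdict = ask(                              # the "senior" model reviews it
        "gpt-4o",
        f"Question: {question}\nDraft answer: {draft}\n"
        "Reply APPROVE if the draft is correct; otherwise reply with a corrected answer.",
    )
    # Corrected (question, verdict) pairs can later be used to fine-tune the junior model.
    return draft if verdict.strip().upper().startswith("APPROVE") else verdict
```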
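And the output-constraints bullet, sketched as prompt instructions plus post-hoc validation against an allowed vocabulary. Libraries like Outlines go further and enforce such constraints while the model is decoding; this simplified check only illustrates the idea, and the customer list and helper names are made up for the example.

```python
# Simplified output-constraint sketch: instruct the model, then validate its output
# against an allowed vocabulary before trusting it downstream.
import json

ALLOWED_CUSTOMERS = {"Acme Corp", "Globex", "Initech"}   # made-up vocabulary

def build_prompt(text: str) -> str:
    names = ", ".join(sorted(ALLOWED_CUSTOMERS))
    return (
        "Extract the customer mentioned in the text below. Respond with JSON of the "
        f'form {{"customer": "<name>"}} where <name> is exactly one of: {names}.\n\n'
        f"Text: {text}"
    )

def parse_customer(raw_llm_output: str) -> str:
    data = json.loads(raw_llm_output)           # fails loudly if the model ignored the format
    customer = data["customer"]
    if customer not in ALLOWED_CUSTOMERS:        # fails loudly if the model invented a name
        raise ValueError(f"Unexpected customer: {customer!r}")
    return customer
```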

None of these strategies and tools is exclusive; they work best in combination. The most important point is that the strategy for minimizing LLM errors is very much what we would do for teams of humans or for systems: give them context, add controls and failover, specialize them, and set up a process that enables continuous improvement.

LLMs aren’t magical

LLMs are a new type of very powerful software, enabling a new set of capabilities that need to be integrated into your organization's processes and software. And for this, traditional process design methods, resilience mechanisms, continuous improvement, and fast feedback loops are critical to unlocking the benefits of LLMs for your organization.
