Overview
Large language models (LLMs) are powerful tools for generating content, but their generative capabilities come with trade-offs. One of the most significant is factual correctness: these models have a strong tendency to hallucinate, sometimes producing content that is non-existent or simply wrong. Because the generated text is fluent and confident, it often appears factually correct even when it is not. As developers, it is our responsibility to ensure the system behaves reliably and produces accurate content.
In this article, I will walk through two of the main methodologies I employed to reduce hallucinations in applications built with AWS Bedrock and other AWS tools and services.