Startup Tackles LLM Hallucinations with Memory Tuning
Large language models (LLMs) often produce erroneous outputs known as hallucinations. A startup called Lamini has introduced a technique it calls Memory Tuning to address the problem, reporting a 95% reduction in hallucinations. Memory Tuning embeds specific facts directly into the model, producing what Lamini calls a Mixture of Memory Experts (MoME). The company claims this approach is both more effective and more cost-efficient than existing methods such as conventional fine-tuning or retrieval-augmented generation (RAG).
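Lamini has not published its full implementation, but the general idea of a mixture of memory experts can be illustrated with a rough sketch: a bank of small adapter "experts," each of which can be tuned to memorize specific facts, plus a router that activates a few of them per token and adds their output to the base model's hidden state. Everything below, including the class names, dimensions, top-k routing scheme, and LoRA-style adapters, is a hypothetical illustration of the concept, not Lamini's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryExpert(nn.Module):
    """Hypothetical memory expert: a small low-rank adapter that can be
    trained to store specific facts while the base model stays frozen."""
    def __init__(self, d_model: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)
        self.up = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op, LoRA-style

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

class MixtureOfMemoryExperts(nn.Module):
    """Sketch of a MoME layer: a router scores all experts, keeps the
    top-k per token, and adds their weighted outputs to the hidden state."""
    def __init__(self, d_model: int, num_experts: int = 64, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            MemoryExpert(d_model) for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        logits = self.router(x)                        # (B, S, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1) # (B, S, top_k)
        weights = F.softmax(weights, dim=-1)
        out = x.clone()
        # Naive dispatch loop: clear for a sketch, too slow for real use.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[..., k] == e  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) \
                        * self.experts[e](x[mask])
        return out

# Usage: drop the layer into a transformer block, freeze the base weights,
# and train only the experts and router on the facts to be memorized.
layer = MixtureOfMemoryExperts(d_model=512)
hidden = torch.randn(2, 16, 512)
print(layer(hidden).shape)  # torch.Size([2, 16, 512])
```

In this sketch, only the tiny experts and the router carry new parameters, which is one plausible reason such a scheme could be cheaper than full fine-tuning while still pinning exact facts into the model.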
Beyond improving the accuracy of large language models, Lamini's approach raises important questions about computational demands and possible shifts in computer architecture. Customers who have deployed Memory Tuning already report a substantial drop in hallucinations, a notable step forward for the field.