2025 | Research
HKU Business School today released the "Large Language Model (LLM) Hallucination Control Capability Evaluation Report." The Report evaluates selected LLMs on their ability to control "hallucinations," outputs that appear plausible but contradict facts or deviate from the given context. LLMs are increasingly used in professional domains such as knowledge services, intelligent navigation, and customer service, yet hallucinations continue to limit their credibility.