Hallucinations occur when an AI system, especially one based on large language models, generates responses that sound plausible but are factually incorrect or misleading. In a contact centre setting, hallucinations could result in inaccurate answers to customer questions or flawed conversation summaries. Detecting and managing hallucinations is critical to ensuring reliability, maintaining customer trust and supporting agent decision-making.
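As an illustration of what "detecting" can mean in practice, the minimal sketch below flags answer sentences that have little lexical overlap with the knowledge-base passages the assistant was given. It assumes a retrieval-augmented setup where source passages are available at response time; the tokenisation and threshold are illustrative assumptions, not a production method.

```python
# Minimal sketch of a grounding check for surfacing possible hallucinations.
# Assumption: answers are generated from retrieved knowledge-base passages,
# so an answer sentence with low lexical overlap against every passage is
# flagged for review. Threshold and tokenisation are illustrative only.

import re


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z0-9']+", text.lower()) if len(w) > 2}


def flag_unsupported_sentences(answer: str, passages: list[str],
                               threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose best token overlap with any passage
    falls below the threshold, i.e. candidate hallucinations."""
    passage_tokens = [_tokens(p) for p in passages]
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        support = max(
            (len(sent_tokens & p) / len(sent_tokens) for p in passage_tokens),
            default=0.0,
        )
        if support < threshold:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    passages = ["Refunds are processed within 5 business days of approval."]
    answer = ("Refunds are processed within 5 business days. "
              "You will also receive a 20% discount voucher.")
    print(flag_unsupported_sentences(answer, passages))
    # Only the second sentence is flagged: nothing in the passage supports it.
```

In a real deployment this kind of lexical check would typically be replaced or supplemented by stronger signals, such as an entailment model comparing the answer to its sources, and flagged responses would be routed to an agent rather than sent to the customer automatically.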