Dealing with Hallucinations
Generative AI models hallucinate, and it’s a problem: it’s the main reason holding back consumer-facing implementations. Even GPT-4 hallucinates about 3% of the time, and that’s on general knowledge, not on your specific use case. There are more than ten reasons why models hallucinate, and solving all of them is generally regarded as over-engineering, much like implementing every security measure on the market. In this talk, Jonathan Yarkoni will walk us through the different ways and reasons models hallucinate, explain the cost-value trade-off of each solution, and showcase several of them through demos.