Discussion about this post

Claude Coulombe:

For the use of AI chatbots based on LLMs in medicine, or in any other life-and-death matter, we should be very cautious. There is no way, no matter the technique (convoluted prompt engineering, RAG, LoRA, etc.), to « guarantee » that an LLM-based AI system (not « an AI », please, let's avoid anthropocentrism) will not confabulate. « Hallucination » is a poor name for this, since hallucinations are sensory problems; the generation of disheveled or nonsensical text is confabulation.

TheOtherKC:

Forget Elon Musk's prediction: if AI is able to perform arbitrary tasks at the level of an adult human of below-average intelligence by the end of next year, I'd call it a miracle.

