phoneymouse@lemmy.world to People Twitter@sh.itjust.works · 2 days ago
Why is no one talking about how unproductive it is to have to verify every "hallucination" ChatGPT gives you?
antonim@lemmy.dbzer0.com · 2 days ago
> referencing its data sources
Have you actually checked whether those sources exist yourself? It’s been quite a while since I’ve used GPT, and I would be positively surprised if they’ve managed to prevent its generation of nonexistent citations.
UnderpantsWeevil@lemmy.world · 23 hours ago
> Have you actually checked whether those sources exist yourself?
When I’m curious enough, yes. While you can find plenty of “AI lied to me” examples online, they’re much harder to fish for in the application itself. 99 times out of 100, the references are good. But those cases aren’t fun to dunk on.