@anus@lemmy.world to Technology@lemmy.world · English · 2 months ago

**A cheat sheet for why using ChatGPT is not bad for the environment** (andymasley.substack.com)
**Saik0** · English · 2 months ago

No, not "basically no."

https://mashable.com/article/openai-o3-o4-mini-hallucinate-higher-previous-models

> By OpenAI's own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly higher than o1.

Stop spreading misinformation. The company itself acknowledges that it hallucinates more than previous models.
**@anus@lemmy.world** (OP) · English · 2 months ago

I stand corrected, thank you for sharing.

I was commenting based on anecdotal experience, and I didn't know there was a test specifically for this.

I do notice that o3 is more overconfident and tends to find a source online from some forum and treat it as gospel.

Which, while not correct, I would not treat as hallucination.