Five@beehaw.org to Technology@beehaw.org, English · 2 years ago
ChatGPT broke the Turing test — the race is on for new ways to assess AI (www.nature.com)
cross-posted to: [email protected], [email protected]
Maestro@kbin.social · 2 years ago
How does ChatGPT do with the Winograd schema? That’s a lot harder to fake: https://en.m.wikipedia.org/wiki/Winograd_schema_challenge
Droggl@lemmy.sdf.org · 2 years ago
I don’t remember the numbers, but IIRC it was covered by one of the validation datasets and GPT-4 did quite well on it.
Maestro@kbin.social · 2 years ago (edited)
Yeah, but did it do well on the specific examples from the Winograd paper? ChatGPT probably just learned those, since they are well known and oft repeated. Or does it do well on brand-new sentences constructed according to the Winograd schema?
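A minimal sketch of how one might probe this with a freshly written Winograd-style pair rather than the published examples, assuming the OpenAI Python client and an API key are available; the model name and the example sentences here are made up for illustration:

```python
# Sketch: ask a chat model a novel Winograd-style question pair.
# The two sentences differ only in the final adjective, which flips the
# correct referent of "it", so memorized answers from the original paper
# shouldn't help.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions = [
    "The delivery drone could not land on the balcony because it was too cluttered. What was too cluttered?",
    "The delivery drone could not land on the balcony because it was too heavy. What was too heavy?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whatever is available
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print("->", response.choices[0].message.content)
```

Scoring a batch of such hand-written pairs, none of which appear in the Winograd paper or its follow-ups, would separate genuine coreference resolution from recall of well-known test items.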