old profile: /u/[email protected]

  • 47 Posts
  • 438 Comments
Joined 1 year ago
Cake day: August 16th, 2023


  • Why do we expect a higher degree of trustworthiness from a novel LLM than we do from any given source or forum comment on the internet?

    The stuff I’ve seen AI produce has sometimes been more wrong than anything a human could produce. And even if a human would produce it and post it on a forum, anyone with half a brain could respond with a correction. (E.g. claiming that an ordinary Slavic word is actually loaned from Latin.)

    I certainly don’t expect any trustworthiness from LLMs, the problem is that people do expect it. You’re implicitly agreeing with my argument that it is not just that LLMs give problematic responses when tricked, but also when used as intended, as knowledgeable chatbots. There’s nothing “detached from actual usage” about that.

    At what point do we stop hand-wringing over LLMs failing to meet some perceived level of accuracy and hold the people using them responsible for verifying the responses themselves?

    At this point I think it’s fair to blame the user for ignoring those warnings and not the models for not meeting some arbitrary standard.

    This is not an either-or situation; it doesn’t have to be formulated like this. Criticising LLMs which frequently produce garbage is in practice also directed at the people who use them. When someone on a forum says they asked GPT and pastes its response, I will at the very least point out the general unreliability of LLMs, if not criticise the response itself (very easy if I’m somewhat knowledgeable about the field in question). That criticism is in practice also directed at the person who posted it, e.g. by making them come off as naive and uncritical. (It is of course not meant as a real personal attack, but even a detached and objective criticism has a partly personal element to it.)

    Still, the blame is on both. You claim that:

    There’s a giant disclaimer on every one of these models that responses may contain errors or hallucinations.

    I don’t remember seeing them, but even if they were there, the general promotion and the ways in which LLMs are presented are trying to tell people otherwise. Such disclaimers are irrelevant for forming people’s opinions compared to the extensive media hype and marketing.

    Anyway my point was merely that people do regularly misuse LLMs, and it’s not at all difficult to make them produce crap. The stuff about who should be blamed for the whole situation is probably not something we disagree about too much.




  • antonim@lemmy.dbzer0.com to 196@lemmy.blahaj.zone · Imperial rule · 2 days ago

    Well, just personally speaking: I know Russian, and reading Russian news sources (state-owned as well as those banned by the Russian state) from time to time, and talking with Russians directly, hasn’t even remotely convinced me that the “Russian empire” is just as bad as the “western empire”.




  • I’ve already seen this exact claim recently, so now I decided to try and find out what exactly is happening.

    https://www.dw.com/en/indiadropsevolution/a-65804720

    Apparently it happened last year, not just now as you said, and I’m sure I’ve already seen someone else (maybe on Lemmy, maybe on Reddit) also describe it as a very recent event.

    However, I can’t find anything else at all regarding the topic. So I tried googling in Hindi instead, with the help of some machine translation.

    https://www.aajtak.in/education/news/story/pythagoras-theorem-has-vedic-has-roots-karnataka-panel-proposes-to-sanskrit-as-a-third-language-1496805-2022-07-10

    This is the only piece of news I’ve managed to find, again not very recent, and not nearly as dramatic as the DW article makes it out to be. Some official described the Pythagorean theorem as ‘fake news’ because the same theorem had already been developed in India before Pythagoras, i.e. the point is that the name is a misnomer. They say nothing about removing the theorem itself.

    The reduction in the teaching of the periodic table and evolution that DW mentions is also explained, in the PDF the article links to, as a mere reorganisation of topics due to the circumstances (difficulties in teaching during the corona pandemic). It does not suggest actual removal of the topics. (The PDF is an official explanation from the Indian “National Council of Educational Research and Training”.)

    I’m getting the impression that DW is just fearmongering. Ideally there would be an article with exact and complete quotes in Hindi. I know that media freedom in India is not great (especially considering the situation with Wikipedia), and it’s probably not easy to get to the bottom of this, but the story looks very suspicious.



  • The video is half an hour long and I really don’t feel like watching all of it to find out something that could be said in one or two paragraphs of text, so I ignored it at first. As I expected, the video deals with a bunch of more or less relevant topics that neither you nor OP mentioned at all. It actually is a bit interesting, I’ve watched part of it, and I do have to admit that US fire trucks are bigger than those where I live.

    The problem is that their deadliness is a consequence of several other factors, and only indirectly of their size. What you and OP decided not to do is communicate that point with any nuance, and all I could read from your comments is that, by some logic, getting hit by a 10-metre truck is much safer than getting hit by a 15-metre truck. OP complained about the driver “right-hooking” the cyclist, and you just said the trucks are too big; do I really have to watch a half-hour video to understand why your comments don’t sound nonsensical?