• burliman@lemmy.world
    11 months ago

    Bad humans are prompting these AI engines. Still gotta fix that, you know, the root of the problem. I can tell you as an older human that misinformation has been supercharged in every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.

    • saltesc@lemmy.world
      11 months ago

      Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.

      Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It’s us. If AI produces misinformation, it’s simply doing what it was taught and instructed to do by someone, and therein lies the source of the bullshit.

    • Phanatik@kbin.social
      11 months ago

      The problem isn’t the misinformation itself; it’s the rate at which misinformation is produced. Generative models lower the barrier to entry, so anyone in their living room can make deepfakes of your favourite politician. The blame on AI isn’t for creating misinformation, it’s for making the situation worse.

    • HelloThere@sh.itjust.works
      11 months ago

      Fallible humans are building them in the first place.

      No LLM - masquerading as AI - is free of biases.

      That’s not to say that ‘bad’ people prompting biased LLMs isn’t an issue - it very much is - but even ‘good’ people are not going to get objective results.