• Sanctus@lemmy.world
    7 months ago

    Fight fire with fire? Make an AI that’s trained to recognize AI images? One for text? And probably soon one for video?

The box can’t be shut once it’s opened. Either we find a way to tell reality from fiction, or we lose that distinction entirely. At that point you might as well not engage with the internet.

    • voracitude@lemmy.world
      7 months ago

      Unfortunately there’s not a reliable way to detect AI-generated content automatically, not without something like Google’s SynthID built into the model doing the generating. You’re bang-on that the box is open and can’t be shut, though, and I am spending more and more time thinking (anxiously) about what our collective adaptation to that fact is going to turn out to look like.
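To make the "built into the model doing the generating" part concrete, here's a toy sketch of the statistical-watermarking idea behind schemes like SynthID (the real scheme is proprietary and more sophisticated; this follows the simpler "green list" approach from academic watermarking work, and every name in it is illustrative). The generator biases its sampling toward a pseudorandom half of the vocabulary keyed to the previous token; a detector that knows the key counts how often tokens land in that half.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Seed a PRNG from the previous token so generator and detector
    # independently agree on which half of the vocabulary is "green".
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, len(vocab) // 2))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector: fraction of tokens that fall in the green list keyed
    # by their predecessor. Near 0.5 for unwatermarked text, much
    # higher for watermarked text.
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / (len(tokens) - 1)

vocab = [f"w{i}" for i in range(100)]
rng = random.Random(0)

# Watermarked "generator": always samples from the green list.
marked = ["w0"]
for _ in range(200):
    marked.append(rng.choice(sorted(green_list(marked[-1], vocab))))

# Unwatermarked text: uniform over the vocabulary.
plain = [rng.choice(vocab) for _ in range(201)]

print(green_fraction(marked, vocab))  # → 1.0, every token is green
print(green_fraction(plain, vocab))   # close to 0.5 for random text
```

The catch, as you say, is that this only works if the model embeds the signal at generation time; nothing here lets you detect text from a model that never cooperated.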

      • Sanctus@lemmy.world
        7 months ago

It’s going to exacerbate what is already happening. People will subscribe to their own truths and be fed a slew of content that’s indistinguishable from fact. There will be fake politicians and candidates. It’s going to be a nightmare, and we are not on top of it. Plus, they’re already training the automatons to aim guns. Someone out there can script a trigger pull in a single line. We’re fucked if we keep doing what we’re doing now.

    • floofloof@lemmy.ca
      7 months ago

      Whichever technology you use to recognize AI-generated content will get repurposed quickly to help generate AI content that can’t be recognized. So it’s an arms race, and at some point the generation may get good enough that there is no effective way to recognize it.
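The repurposing you describe can be sketched in a few lines: the moment a detector's scoring function is available, it becomes the optimization target. Here's a deliberately silly toy version, where the "detector" keys on a word list and the evader greedily rewrites words until the score drops below the detection threshold. The word list and synonym table are invented for illustration.

```python
def detector(text: str) -> float:
    # Stand-in "AI detector": scores text by a trivial statistic,
    # the fraction of words drawn from a hypothetical AI-favored list.
    ai_favored = {"delve", "tapestry", "furthermore", "moreover"}
    words = text.lower().split()
    return sum(w in ai_favored for w in words) / len(words)

def evade(text: str, threshold: float, synonyms: dict[str, str]) -> str:
    # Repurpose the detector as a feedback signal: greedily swap
    # words until the score falls below the detection threshold.
    words = text.split()
    for i, w in enumerate(words):
        if detector(" ".join(words)) < threshold:
            break
        words[i] = synonyms.get(w.lower(), w)
    return " ".join(words)

synonyms = {"delve": "dig", "tapestry": "mix",
            "furthermore": "also", "moreover": "also"}
sample = "furthermore we delve into a rich tapestry of ideas"

print(detector(sample))  # 3 of 9 words flagged, score ≈ 0.33
evaded = evade(sample, threshold=0.1, synonyms=synonyms)
print(detector(evaded))  # score now below 0.1, slips past the detector
```

A real generator would use gradients or reinforcement from the detector rather than a synonym table, but the loop is the same shape, which is why publishing a detector tends to expire it.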