Today, the prominent child safety organization Thorn, in partnership with the cloud-based AI company Hive, announced the release of an AI model designed to flag unknown CSAM at upload time. It is the first AI technology aimed at detecting previously unreported CSAM at scale.
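As a rough illustration of what "flag at upload" means in practice, here is a hedged sketch of an upload-time moderation hook. The announcement does not describe the actual API, so every name below (`classify`, `REVIEW_THRESHOLD`, `ModerationResult`) is an illustrative assumption, not the vendor's interface:

```python
# Hypothetical upload-time moderation hook. The classifier is assumed to
# return a probability score for previously unhashed (unknown) material;
# all names here are illustrative, not Thorn's or Hive's actual API.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    score: float   # classifier's confidence that the upload is CSAM
    flagged: bool  # whether it crossed the review threshold

REVIEW_THRESHOLD = 0.8  # illustrative; real deployments tune this value

def moderate_upload(image_bytes: bytes, classify) -> ModerationResult:
    """Run the (injected) classifier on the raw upload and flag high scores.

    `classify` is a stand-in for the vendor's model call. Flagged uploads
    would typically go to human review, not automatic deletion.
    """
    score = classify(image_bytes)
    return ModerationResult(score=score, flagged=score >= REVIEW_THRESHOLD)
```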

  • hendrik@palaver.p3x.de · 5 hours ago

    And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All its predecessors have been reserved for the sole use of the big players, even though populists keep claiming we need to introduce total surveillance to keep the children safe…

    • BetaDoggo_@lemmy.world · 49 minutes ago (edited)

      If everyone has access to the model, it becomes much easier to find obfuscation methods and validate that they work. It becomes an uphill battle. It's unfortunate, but it's an inherent limitation of most safeguards.

    • Riskable@programming.dev · 5 hours ago

      I was going to say… Sure would be nice to have this feature in all the open-source AI image generator tools, but you're absolutely right 😩

      • hendrik@palaver.p3x.de · 3 hours ago

        Yeah, unless someone at least publishes a set of hashes of known bad content for the general public… I kind of doubt the true intention is preventing CSAM for everyone's benefit.
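        A minimal sketch of what such a published hash set would enable, assuming a hypothetical known_bad_hashes.txt of hex digests (an exact SHA-256 match only catches byte-identical copies; deployed systems rely on perceptual hashes that survive resizing and re-encoding):

        ```python
        # Hedged sketch: check uploads against a hypothetical published hash
        # list. "known_bad_hashes.txt" is illustrative; no such public list
        # exists today, which is the point of the comment above.
        import hashlib
        from pathlib import Path

        def load_hash_set(path: str) -> set[str]:
            """Load one lowercase hex digest per line into a set."""
            return {
                line.strip().lower()
                for line in Path(path).read_text().splitlines()
                if line.strip()
            }

        def is_known_bad(upload: bytes, hashes: set[str]) -> bool:
            """Exact-match check: SHA-256 the uploaded bytes, test membership.

            This only catches byte-identical copies; production systems use
            perceptual hashing to tolerate re-encoded or resized variants.
            """
            return hashlib.sha256(upload).hexdigest() in hashes

        # Usage (hypothetical):
        # hashes = load_hash_set("known_bad_hashes.txt")
        # if is_known_bad(upload_bytes, hashes):
        #     reject_and_report(upload)
        ```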

      • realcaseyrollins · 4 hours ago

        You never know. @[email protected] / @npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p@momostr.pink tried to get Microsoft to provide their CSAM filtering to the Fediverse in the past, without success, but this is a different group.