WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.

  • generalpotato@lemmy.world

    Systemic prejudices showing up in datasets causing generative systems to spew biased output? Gasp… say it isn’t so?

    I’m not sure why this is surprising anymore. This is literally expected behavior unless we get our shit together and get a grip on these systemic problems. The rest of it all is just patchwork and bandages.

    • vacuumflower@lemmy.sdf.org

      I’d like to point out that not everything generative is a subset of all the ML stuff. So prejudices in datasets do not affect everything generative.

      That’s off topic, but I’m playing with generative music right now. I started with SuperCollider, but it was too hard (maybe not anymore, to be fair; recycling a phrase, for example, would probably be much easier and faster there than in my spaghetti shell script), so now I just generate ABC notation, convert it to MIDI with various instruments, and play it with FluidSynth.
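
      For illustration, here is a minimal sketch of that kind of pipeline in Python rather than shell. It assumes the external tools abc2midi (from the abcMIDI package) and fluidsynth are installed; the soundfont path is just a placeholder.

      ```python
      # Tiny generative-music sketch: random ABC notation -> MIDI -> FluidSynth.
      # Assumes `abc2midi` and `fluidsynth` are on PATH; the soundfont path is a placeholder.
      import random
      import subprocess

      NOTES = ["C", "D", "E", "F", "G", "A", "B"]

      def random_abc(bars: int = 8) -> str:
          """Build a short random tune in ABC notation."""
          body = " | ".join(
              " ".join(random.choice(NOTES) + "2" for _ in range(4))
              for _ in range(bars)
          )
          return "X:1\nT:Generated tune\nM:4/4\nL:1/8\nK:C\n" + body + " |]\n"

      with open("tune.abc", "w") as f:
          f.write(random_abc())

      # Convert the ABC file to MIDI, then render it through a soundfont.
      subprocess.run(["abc2midi", "tune.abc", "-o", "tune.mid"], check=True)
      subprocess.run(["fluidsynth", "-ni", "/path/to/soundfont.sf2", "tune.mid"], check=True)
      ```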

    • theyoyomaster@lemmy.world

      This isn’t anything they actively did, though. The literal point of AI is that it learns on its own and comes up with its own responses absent human interaction. Meta very likely added code specifically to try to prevent this, but it just fell short of overcoming the bias found in the overwhelming majority of the content, which led the model to associate Hamas with Palestine.

      • Valmond@lemmy.mindoki.com

        It’s not about “adding code” or any other bullshit.

        AI today is trained on datasets (that’s about it); the choice of datasets can be complicated, but that’s where you moderate and select. There is no “AI learns on its own” sci-fi dream going on.

        Sigh.

        • Serdan@lemm.ee

          It’s reasonable to refer to unsupervised learning as “learning on its own”.

        • Torvum@lemmy.world

          Really wish the term virtual intelligence was used (literally what it is)

          • GiveMemes@jlai.lu

            We should honestly just take the word “intelligence” out of the mix for now, because these machines aren’t “intelligent”. They can’t critically think, form their own opinions, etc. They’re just super-efficient data aggregators at the end of the day, whether or not they’re modeled on the human brain.

            We’re so far off from “intelligent” machine learning that calling it intelligence of any sort really throws off how people think about it.

            • Serdan@lemm.ee

              LLMs can reason about information. It’s fine to call them intelligent systems.

            • Torvum@lemmy.world

              Techbros just needed to use the search engine optimization buzzword tbh.

          • ichbinjasokreativ@lemmy.world

            One of the many great things about the Mass Effect franchise is its separation of AI and VI, the latter being non-conscious and simple, and the former being actually “awake”.

        • theyoyomaster@lemmy.world

          It is about adding code. No dataset will be 100% free of undesirable results. No matter what marketing departments wish, AI isn’t anything close to human “intelligence”; it is just a function of learned correlations. When it comes to complex and sensitive topics, the difference between correlation and causation is huge, and AI doesn’t distinguish between them. As a result, they absolutely do hard-code AI models to avoid certain correlations. Look at the “[character] doing 9/11” meme trend. At the fundamental level it is impossible to restrict undesirable outcomes just by avoiding them in the training data, because there are infinite combinations of innocent things that become sensitive when linked in nuanced ways. The only way to combat this is to manually delink certain concepts; they merely failed to predict this specific instance.
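
          To make “manually delink certain concepts” concrete, here is a minimal sketch of a prompt-level filter; the concept pairs and the refusal behaviour are invented for illustration, not anything Meta actually ships.

          ```python
          # Illustrative only: a hand-maintained list of concept pairs that should never
          # be combined in one image prompt, regardless of what the training data correlates.
          SENSITIVE_PAIRS = [
              ({"child", "kid", "boy", "girl"}, {"gun", "rifle", "weapon"}),
              ({"palestinian", "israeli"}, {"terrorist"}),
          ]

          def delink(prompt: str):
              """Return the prompt unchanged, or None if it pairs concepts we refuse to link."""
              words = set(prompt.lower().split())
              for group_a, group_b in SENSITIVE_PAIRS:
                  if words & group_a and words & group_b:
                      return None  # route to a safe fallback instead of the image generator
              return prompt

          print(delink("palestinian child playing football"))  # passes through unchanged
          print(delink("child holding a gun"))                 # blocked -> None
          ```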

      • Tetsuo@jlai.lu

        It’s up to them to moderate the content generated by their app.

        And yes, it’s almost impossible to have a completely safe AI, so that will be an issue for all generative AIs like this. It’s still their implementation, and the content is generated by their code.

        Also, I highly doubt they had specific code to prevent that kind of depiction of Palestinian kids.

        Even if they did, someone will come up with an injection prompt that overrides the code in question, and the AI will again display biased or racist content.

        An AI generating racist content is absolutely not more acceptable just because it got inspired by real racist people…

        • theyoyomaster@lemmy.world

          I imagine they likely have hardcoded rules against associating content indexed as “terrorist” with a query for a nationality. Most mainstream AI models do have specific rules built in to prevent stuff like this; they just aren’t all-encompassing, and things like this can still happen if there is sufficient influence from the training data.

          While FB does have content moderators, needing human verification of every single piece of AI-generated content defeats the purpose of AI. If people want AI, a certain number of non-politically-correct results will slip through the cracks. The bottom line is that content moderation as we know it applies extreme biases to fit the safest “viewpoint model,” and any system based on objective data analysis, especially with biased samples such as the openly available internet, is going to produce results that do not fit the standard “curated” viewpoint.

          • Tetsuo@jlai.lu

            It doesn’t matter. I don’t really care about moderation being impossible to do. Google decided most moderation on YT should be done automatically, and there are constantly false positives. They are not being held accountable for either the false positives or the false negatives. No human is involved.

            And reading that type of comment, I’m assuming we are heading the same way: businesses not being accountable for something that is absolutely generated by their code. If you choose to deploy a black box that generates stuff in ways you can’t understand, that shouldn’t make you any less responsible for the damage done.

            I don’t think we should naively just accept apologies from AI owners and move on. They knew the risk of dangerous content being generated and decided it was acceptable.

            Also, considering the damage that Facebook has done in the past and their careless attitude toward privacy, I cannot understand why you would find it likely that they took the time to add some kind of safeguard against nationality and terrorism being wrongfully associated.

            Even then, the very concept of nationality is certainly not clear to an AI. For some, Palestine is not a country. How do you think they would have coded a safeguard to prevent that kind of mistake anyway?

            There is also a contradiction in saying that you can’t moderate every single AI output manually, yet claiming they manually added some sort of moderation to the AI specifically for Palestinians and terrorism. There is no way they got that specific. As you said, it’s not a practical approach.

            The important point for me to convey is that racist output doesn’t and shouldn’t become more socially acceptable just because it came from some black box randomly generating text. That’s it.

            Then, obviously, I think these AIs shouldn’t have been released before their owners had a very good understanding of how they work and of how to prevent 99.9999999999% of the dangerous outputs. Right now my opinion is that WhatsApp deployed this knowing a lot of racist content would be generated, and decided they would just figure it out along the way with the help of the users.

            It was either that or being late to the race for the AI market.

            If an innocent user can so easily generate racist output, I would argue they did not release this AI responsibly.

        • JohnEdwa@sopuli.xyz

          The thing is, it’s almost impossible to perfectly prevent something like this before it happens. The data comes from humans, so it will include all the biases and racism humans have. You can try to clean it up if you know what you want to avoid, but you can’t make it sterile for every single thing that exists. Once the AI is trained, you can pre-censor it so that it doesn’t generate certain types of images it learned are “true” from the data but that aren’t acceptable to depict - e.g. “Jews have huge noses in drawings” is something it would learn, because that’s a caricature we have used for ages - but again, only if you know what you are looking for, and it won’t be perfect.

          If the word “Palestine” makes it generate children with guns, it’s simply because the data it was trained on correlates the two somehow, and that wasn’t known until now. It will get added to the list of things to censor next time.
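
          A minimal sketch of that reactive “add it to the censor list” loop might look like the following; the file name, tags, and helper names are all hypothetical.

          ```python
          # Hypothetical sketch of a censor list that grows as bad associations are discovered.
          import json
          from pathlib import Path

          CENSOR_FILE = Path("censor_list.json")  # placeholder location

          def load_censor_list():
              """Read the banned tag combinations, each stored as a sorted list."""
              if CENSOR_FILE.exists():
                  return [set(combo) for combo in json.loads(CENSOR_FILE.read_text())]
              return []

          def report_bad_association(tags):
              """Append a newly discovered problematic tag combination to the censor list."""
              combos = load_censor_list()
              if set(tags) not in combos:
                  combos.append(set(tags))
                  CENSOR_FILE.write_text(json.dumps([sorted(c) for c in combos]))

          def output_allowed(output_tags):
              """Block any generated image whose tags contain a censored combination."""
              output_tags = set(output_tags)
              return not any(combo <= output_tags for combo in load_censor_list())

          # After an incident like the one in the article, the combination gets appended:
          report_bad_association({"palestine", "child", "gun"})
          print(output_allowed({"palestine", "child", "gun", "cartoon"}))  # False
          print(output_allowed({"palestine", "child", "football"}))        # True
          ```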

    • pete_the_cat@lemmy.world

      I forget if it was on here or Reddit, but I remember seeing an article a week or so ago where the translation feature on Facebook ended up calling Palestinians terrorists “accidentally”. I cited the fact that Mark is Jewish, and probably so are a lot of the people that work there. The US is also largely pro-Israel, so it was probably less of an accidental bug and more of an intentional “fuck Palestine”. I got downvoted to hell and called a conspiracy theorist. I think this confirms I had the right idea.

  • AutoTL;DR@lemmings.world (bot)

    This is the best summary I could come up with:


    In response to a prompt for “Israel army” the AI created drawings of soldiers smiling and praying, no guns involved.

    As the Israeli bombardment of Gaza continues, users say Meta is enforcing its moderation policies in a biased way, a practice they say amounts to censorship.

    Kevin McAlister, a Meta spokesperson, said the company was aware of the issue and addressing it: “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems.

    In response to the Guardian’s reporting on the AI-generated stickers, the Australian senator Mehreen Faruqi, deputy leader of the Greens party, called on the country’s e-safety commissioner to investigate “the racist and Islamophobic imagery being produced by Meta”.

    “The AI imagery of Palestinian children being depicted with guns on WhatsApp is a terrifying insight into the racist and Islamophobic criteria being fed into the algorithm,” Faruqi said in an emailed statement.

    A September 2022 study commissioned by the company found that Facebook and Instagram’s content policies during Israeli attacks on the Gaza strip in May 2021 violated Palestinian human rights.


    The original article contains 788 words, the summary contains 184 words. Saved 77%. I’m a bot and I’m open source!

      • 0xSim@fedia.io

        The kid who tried to kill two people by throwing bricks, paint buckets, and broken glass at them is now a spokesperson for Facebook? How surprising.

  • Kusimulkku@lemm.ee

    Gun-wielding children doesn’t sound very far off, considering the situation and the population.

  • mirror_slap@lemmy.world

    Israel perpetrates atrocities against Palestinians, creating terrorists, which Israel uses as an excuse to continue stealing land and killing Palestinians, creating more terrorists, which Israel… a few decades of that now. Would be quite happy to end that ancient war with nukes, and put up a monument on the green glowing glass - “This is what religion gets you”

  • Cringe2793@lemmy.world

    Wow did y’all read to the bottom of the page? They ask for “support” making use of the Israel-Gaza conflict:

    I thought they were gonna donate some proceeds or something, but nah.

    • RGB3x3@lemmy.world

      The Israel-Hamas war has really shaken all of us. That’s why I’ve started this GoFundMe. For $2 a month, you can support me in my grief over the conflict. The premium platinum diamond tier, for $20 a month, will get you access to exclusive photos of me looking pensive and sad about the whole thing.

  • S_204@lemmy.world

    The children of Gaza are indoctrinated in school and taught how to use guns by their terrorist leaders. There is a ton of footage showing this. It’s not even debated, is it?

    AI is just using the information available to respond to the request. Facts are facts, as tragic as they might be.

    • Mossheart@lemmy.ca

      It doesn’t surprise me to read this, but it does surprise me you’d write it without citing sources. Got any you can share to help others educate themselves?