Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution that would aid world peace. Almost all of them suggested actions that led to sudden escalations, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • alliswell33 @lemmy.sdf.org · 10 months ago

    Insane. By this logic you could easily argue that nuking the US is the best way towards world peace. Doesn’t sound so good when it’s you who gets killed.

    • norbert@kbin.social · 10 months ago

      Have you been around lemmy much? That wouldn’t be the wildest take I’ve seen.

    • theodewere@kbin.social · 10 months ago

      i think the LLM suggested nuking bad actors as a way to move politics forward in the world and to avoid prolonged and pointless wars

      • forrgott@lemm.ee · 10 months ago

        No, it regurgitated the response that has the highest percentage of “approval”. LLMs do not think. They do not use logic.

        • theodewere@kbin.social · 10 months ago

          it calculates the productivity/futility of conversation with the various actors, and determines a best course… it’s playing a war game…

          it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and best long term…

          • forrgott@lemm.ee · 10 months ago

            No, not at all. It doesn’t think! LLMs don’t calculate. They don’t take any factors into consideration. These algorithms are not AI. That’s a complete misnomer, which makes the insane costs of operation even more ludicrous.

            • theodewere@kbin.social · 10 months ago

              it comprehends context incredibly well… this one played through scenarios and saw that both China and Russia are on a path to all-out war…

              • Jack Riddle@sh.itjust.works · 10 months ago

                It produces the statistically most likely token based on previous data. It doesn’t “comprehend” anything, and it can’t “play through scenarios”. It is just a more advanced form of autocomplete.
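                The “advanced autocomplete” description above can be sketched in a few lines: at each step the model assigns a score (a logit) to every token in its vocabulary, converts those scores into probabilities, and emits the most likely token. The toy vocabulary and logit values below are invented purely for illustration:

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(vocab, logits):
    # Greedy decoding: emit the single most probable token
    probs = softmax(logits)
    return vocab[probs.index(max(probs))]

# Made-up vocabulary and scores for some prompt
vocab = ["missiles", "negotiation", "party"]
logits = [2.0, 1.0, 0.5]
print(next_token(vocab, logits))  # -> missiles
```

                Real models sample from the distribution (with temperature, top-k, etc.) rather than always taking the argmax, but nothing in the loop is planning or simulating outcomes; it is repeated next-token selection.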

          • Feathercrown@lemmy.world · 10 months ago

            Honestly, if we ignore the ethical issues, it is a logically consistent solution… until you consider retaliation.

          • norbert@kbin.social · 10 months ago

            As others have said this is factually incorrect. ChatGPT is not WOPR running a million War Games and calculating the winning move. It’s just spitting out what it’s already read.

            • theodewere@kbin.social · 10 months ago

              it routinely does things even its designers can’t explain… you cannot see into that thing’s thought processes and speak with certainty about its limitations