The comments come amid increased attention on a global AI race between the U.S. and China.

    • zante@slrpnk.net · 2 days ago

      Right?

      If I were OpenAI, this is exactly the kind of thing I’d want written about me, especially the day after the DeepSeek thing… just saying.

      • jrs100000@lemmy.world · 2 days ago

        It’s probably part of the standard severance package. Hand in your laptop, sign an NDA, take your COBRA paperwork, and fill out the AGI terror press release.

      • jmcs@discuss.tchncs.de · 2 days ago

        In the same way that if you start digging a hole in northwestern Spain you are heading towards New Zealand.

        • MidWestKhagan@lemmygrad.ml · 1 day ago

          That doesn’t sound right at all; comparing AGI to digging a hole from Spain to New Zealand is hyperbolic. It sounds more like saying “electricity will never cover the whole world; maybe one day it’ll have an impact, but powering cars and homes? No way.” AGI and SGI are almost our only way to communism; with DeepSeek and other open-source models, capitalists won’t be able to keep up, especially if AGI becomes available to the average person. In a few years, hell, in just one year, LLMs have made such substantial progress that we can only assume it will continue. Acting as though AGI is like fusion generators is naive; unlike containing the sun, AGI is far more achievable. There’s no stopping it at this point. My professor told me that the university has given up trying to catch AI use because it’s impossible to detect now, unless you’re a child who just copies everything verbatim and makes it obvious. It’s time to stop assuming AGI will never come, because it will, and it is coming.

        • Free_Opinions@feddit.uk · 2 days ago (edited)

          The difference here is that you’re never going to reach New Zealand that way, but incremental improvements in AI will eventually get you to AGI.*

          *Unless intelligence is substrate-dependent and cannot be replicated in silico, or we destroy ourselves before we get there.

          • Thorry84@feddit.nl · 2 days ago

            It’s very easy with an incremental-improvement tactic to get stuck in a local maximum: you’ve then hit a dead end, where every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements lead to the desired outcome.
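
            A toy hill-climbing sketch of that failure mode (the one-dimensional “fitness” landscape here is made up purely for illustration):

            ```python
            def fitness(x):
                # Two peaks: a lower one near x ≈ -1.35, a higher one near x ≈ 1.47.
                return -x**4 + 4 * x**2 + x

            def hill_climb(x, step=0.1, iters=1000):
                for _ in range(iters):
                    # Greedy: move to whichever neighbour improves fitness, else stop.
                    best = max((x - step, x + step), key=fitness)
                    if fitness(best) <= fitness(x):
                        return x  # stuck: no single step is an improvement
                    x = best
                return x

            # Starting on the left, the climber tops out on the lower peak (~ -1.3)
            # and halts; the higher peak near 1.47 is never found.
            print(hill_climb(-2.0))
            ```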

            • Free_Opinions@feddit.uk · 2 days ago (edited)

              I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just simply knowing what doesn’t work is a step in the right direction.

              We already know that general intelligence is possible. The question that remains is whether it can be replicated artificially.

              • davidgro@lemmy.world · 1 day ago (edited)

                I can imagine it really easily for the foreseeable future: all that would need to happen is for the big corporations and well-funded researchers to stick to optimizing LLMs, and for that to be a dead end.

                Yeah, that’s not the rest of human history (unless the rest of it isn’t very long), but it’s enough to make concerns about AGI someone else’s problem.

                (Edit, clarified)

                • Free_Opinions@feddit.uk · 2 days ago

                  Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will get us there eventually.

                  • davidgro@lemmy.world · 2 days ago

                    In this scenario reaching the goal would require an entirely different base technology, and incremental improvements to what we have now do not eventually lead to AGI.

                    Kinda like incremental improvements to cars or even trains won’t eventually get us to Mars.

                  • jrs100000@lemmy.world · 2 days ago

                    Just like incremental improvements in the bicycle will eventually allow for hypersonic pedaling.

              • chonglibloodsport@lemmy.world · 1 day ago

                By saying this, aren’t you assuming that human civilization will last long enough to get there?

                Look at the timeline of other species on this planet. Vast numbers of them are long extinct. They never evolved intelligence to our level. Only we did. Yet we know our intelligence is quite limited.

                What took biology billions of years we’re attempting to do in a few generations (the project for AI began in the 1950s). Meanwhile the amount of non-renewable energy resources we’re consuming has hit exponential takeoff. Our political systems are straining and stretching to the breaking point.

                And of course, progress towards AI has not been steady over the project’s history. There was an initial burst of success in the ’50s, followed by a long AI winter when researchers got stuck in a local maximum. It’s not at all clear to me that we haven’t entered a new local maximum with LLMs.

                Do we even have a few more generations left to work on this?

                • Free_Opinions@feddit.uk · 1 day ago

                  I’m talking about AI development broadly, not just LLMs.

                  I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.

                  • chonglibloodsport@lemmy.world · 1 day ago (edited)

                    We could witness a collapse of our high-tech civilization that effectively ends AI research without necessarily leading to extinction. Think of a global-warming-supercharged Mad Max post-apocalyptic future: people still survive, but the population has crashed and there’s a lot of fighting for survival and scavenging among the ruins of civilization.

                    There’s gotta be countless other variations on this theme. Global dystopian techno-feudalism perhaps?

            • MidWestKhagan@lemmygrad.ml · 1 day ago

              What do you mean there’s no evidence? This seems like a difference of personal definitions of what AGI is, where you can move the goalposts as much as you want: “it’s not really AGI until it can ___”; “OK, just because it can do that doesn’t mean it’s AGI; AGI needs to be able to do _____”.

            • Free_Opinions@feddit.uk · 1 day ago

              No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented, when AI was just an idea.

          • underscore_@sopuli.xyz · 2 days ago

            It is a common misconception that incremental improvements must equate to eventually achieving the goal; it is perfectly possible for progress to be asymptotic, so that we never reach AGI even with constant “advancements”.
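
            A minimal sketch of that asymptotic scenario (the “capability” scale and all numbers are invented purely for illustration):

            ```python
            # Each "advancement" closes half the remaining gap to a ceiling of 1.0.
            # If AGI sits at 2.0 on this made-up scale, every step is a genuine
            # improvement, yet the goal is never reached.
            capability, ceiling = 0.0, 1.0
            for _ in range(50):
                capability += (ceiling - capability) / 2
            print(capability)  # ~1.0: always improving, forever short of 2.0
            ```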

            • Free_Opinions@feddit.uk · 2 days ago

              Incremental improvements by definition mean that you’re moving towards something. It might take a long time, but my comment made no claims about the timescale. There are only two plausible scenarios I can think of in which we don’t reach AGI, and both are mentioned in my comment.

              • then_three_more@lemmy.world · 2 days ago

                That relies on the increments staying the same. It’s much easier to accelerate from 0 to 60 mph than from 670,616,569 mph to c.
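
                To put numbers on that: kinetic energy per unit mass grows with the Lorentz factor minus one, so equal speed increments cost vastly more as you approach c. A quick back-of-the-envelope sketch:

                ```python
                import math

                C_MPH = 670_616_629  # speed of light, rounded to the nearest mph

                def gamma(v_mph):
                    # Lorentz factor; kinetic energy per unit mass scales with gamma - 1.
                    beta = v_mph / C_MPH
                    return 1 / math.sqrt(1 - beta**2)

                # Cost (per unit mass, in units of c^2) of the next 60 mph from each speed:
                for v in (0, C_MPH // 2, C_MPH - 120):
                    print(f"{v:>11,} mph: {gamma(v + 60) - gamma(v):.3e}")
                ```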

      • HakFoo@lemmy.sdf.org · 2 days ago

        Would we know it if we saw it? Draw two eye spots on a wooden spoon and people will anthropomorphise it. I suspect we’ll have dozens of false starts and breathless announcements of AGI, but we may never get there.

        More interestingly, would we want it if we got it? How long will its creators rally to its side if we throw yottabytes of data at our civilization-scale problems and the machine comes back with “build trains and eat the rich instead of cows”?

        • Free_Opinions@feddit.uk · 2 days ago

          Would we know it if we saw it?

          That seems beside the point when the question is whether we’re getting closer to it or not.

        • Free_Opinions@feddit.uk · 2 days ago

          But objectively measured, no? Is there no progress happening at all, or are we moving backwards? It has to be one of those two; otherwise we’re moving towards it.