https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line, because of the amount of fact-checking they require. If that’s the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

  • UlyssesT [he/him]@hexbear.net · 3 months ago

    Stating that newer models perform better than old models does not somehow imply that the newer models are completely living up to the marketing hype, up to and including calling it “artificial intelligence” to begin with.

    And yes, it’s a known and established issue that some people who stan for these treat printers do see them as replacements for people, not tools. There’s already an entire startup industry of “AI companions” selling that belief, so what I said isn’t as absurd as you claim it is. Besides, I said “robot god of the future” there, not “AI” waifus, but there’s certainly a connection that some true believers make between the two concepts.

      • UlyssesT [he/him]@hexbear.net · 3 months ago

        I didn’t mention marketing

        That’s too bad, because “AI” as it stands, and what is branded as “AI,” is not what it claims to be on the label. There are certainly scientific efforts underway to make rudimentary versions of that, but large language models and related technology simply aren’t it, and to believe otherwise is to buy into the marketing, whether you accept that or not.

        Benchmarks designed to test the machine’s abilities to perform reasoning like humans. And they’re being improved on constantly

        Again, you’re believing in the marketing.

        https://bigthink.com/the-future/artificial-general-intelligence-true-ai/

        https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/

        Sorry if that rubs ya the wrong way.

        You’re not sorry, this isn’t /r/Futurology or /r/Singularity, and the smuglord emote closing out your post only makes it worse.

          • UlyssesT [he/him]@hexbear.net · 3 months ago

            You seem to have a kind of “head in the sand” approach to this

            Even more smuglord and there’s so much more text to read. Here we go.

            (I get it, we have to protect our egos)

            Maybe educate yourself on what some of the research in this field looks like.

            Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they do it, instead of just nodding along to what you already want to believe from people who have blinders for anything outside of their field (computers, in this case). It’s a case of someone with a hammer seeing everything as a nail, and of you buying into that.

            Honestly you sound scared about this stuff.

            More like tired. If you weren’t so religiously defensive about the apparent advent of whatever you’re hoping for, you’d know that I have on many occasions stated that artificial intelligence is possible and may even be achieved within current lifetimes, but iterating on and refining the currently hyped “AI” product simply isn’t it.

            It’s like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.

                  • BodyBySisyphus [he/him]@hexbear.net · 3 months ago

                    I’ve been thinking about this comment a lot over the last couple of days. I do my research in agriculture and food systems so I’ve had a lot of exposure to the “future is rural” philosophy, but it’s mainly in the context of climate change. It seems like anyone talking sense about the trajectory our society is on is quietly buying small plots of land for smallholder agriculture or posting about how farms are probably going to stop supplying food systems and start focusing on meeting their own needs as conditions get less hospitable. It’s interesting to consider that there’s a convergent response emerging as a result of automation.

                    Meanwhile I’m sitting here on my small expensive urban plot that couldn’t sustain more than some summer vegetables because I thought I’d get bored doing actual agriculture blob-no-thoughts

              • UlyssesT [he/him]@hexbear.net · 3 months ago

                I respect you

                but

                butt

                You seem to have a kind of “head in the sand” approach to this

                Maybe educate yourself

                Honestly you sound scared about this stuff.

                You’ve already “respectfully” insulted me many times over because I’m not convinced that a sufficiently large language model is a 1:1 analogue to a biological brain no matter how much data (and energy, and water) goes in and how much carbon waste comes out of it.

                if this train keeps moving at its current pace, we’re in for a massive upheaval

                That’s already happened, and a lot of its unmitigated momentum (and the damage it’s already causing) is because of “THIS IS AI” hype marketing, which oversells the tools. The tools are potentially useful and quite powerful, yes, but they are not general artificial intelligence in the way that’s still being researched and developed before, during, and since you bought into the “AI” marketing label for LLMs.

                They are already here, they are already screwing over many people in the working class, and they’re already doing massive environmental damage. Pretenses of personhood for the treat printers (or insulting living beings as “afraid,” or whatever Redditisms may come) aren’t making them any more sapient as 1:1 biological analogues, but they are certainly blurring actual scientific inquiry with “just like the cyberpunkerinos” wish-fulfillment desires.

                  • UlyssesT [he/him]@hexbear.net · 3 months ago

                    You keep bringing up stuff I didn’t even mention. It’s super annoying.

                    How do you think I feel when you keep conjuring up a pile of straw labeled “frightened superstitious Luddite who fears for their immortal soul” and smugposting toward it?

                    It’s like you’re arguing against a character in your head.

                    I don’t really wish to continue this conversation.

                    Then don’t. There are plenty of subreddits, such as /r/singularity and /r/futurology, that will cheerfully agree with all of your internalized marketing beliefs.

                    But hey, actually. I apologize for my tone earlier

                    Then why did you use the same fucking tone in this post?

                    If you want to stop, just stop. If you want to fling more “respectful” insults my way, I can’t stop you.

            • soupermen [none/use name]@hexbear.net · 3 months ago

              Hey there, I’ve got no stakes here and I don’t want to speak for anyone, but I think what happened here was that QuillCrestFalconer and DPRK_Chopra were simply pointing out that the technology is rapidly evolving, that its capabilities even just a couple of years ago were far less than they are now, and that it appears it will continue to develop like this. So their point would be that we still need to prepare for and anticipate that it may soon advance to the point where employers will be more willing to try to replace real workers with it. I don’t think they were implying that this would be a good thing, or that it would be a smart or savvy move, just that it’s a possible and maybe even a likely outcome. We’ve already seen various industries attempt to start doing that with the limited abilities of “AI” as it is, so to me it does seem reasonable to expect them to want to do that more as it gets better. Okay, thanks for reading. 👋

              • UlyssesT [he/him]@hexbear.net · 3 months ago

                Yeah, the technology is rapidly developing, but I am not the only one unconvinced that just piling in more data in the exact same way as now is going to produce a 1:1 match for biological brains. I’m not saying it is impossible, far from it. I’m saying the current “just spend more energy and produce more carbon waste to pile on the data” approach, powered by marketing, isn’t likely to produce a generalized artificial intelligence on its own.

                Marketing hype being what it is, and given how the label is both misused and doing a disservice to actual nascent artificial intelligence research, I reject calling the current LLM technology “AI.”

                • soupermen [none/use name]@hexbear.net · 3 months ago

                  Okay. I am under no illusion that current technology is anywhere near replicating a brain digitally. I don’t think that’s what QuillcrestFalconer or DPRK_Chopra were saying either. When we say “replace workers” we mean “replace the functions that those workers do for their employers.” We’re not talking about making a copy of your coworker Bob, but about making a program that does many of the tasks currently assigned to Bob in a manner that isn’t too much worse than the real guy (from the warped perspective of management and shareholders, of course), and anything the machine can’t do can be delegated to someone else who gets paid a pittance. That’s what we’re talking about, nothing about recreating human intellects. I put the term AI in scare quotes in my first comment because I too am well aware that it’s a misnomer. But it’s the term that everyone knows this technology by (via marketing and such, like you said), so it’s easy to fall back on that term. LLM, or “AI” in scare quotes, I don’t think the specific term really matters in this context, because we’re not talking about true intelligence, but automation of task work that currently is done by paid human employees.

                  • UlyssesT [he/him]@hexbear.net · 3 months ago

                    I put the term AI in scare quotes in my first comment because I too am well aware that it’s a misnomer. But it’s the term that everyone knows this technology by (via marketing and such, like you said), so it’s easy to fall back on that term.

                    My primary beef, and the main thrust of my argument, is exactly that: the primary triumph of “AI” is as a marketing term.

                    It does a disservice to research and development of generalized artificial intelligence (which I hope won’t be such a fucking massive waste of resources and such a massive producer of additional carbon waste and other pollution) by jumping the gun and prematurely declaring that “AI” is already here.

                    I don’t think the specific term really matters in this context

                    I think it does, unfortunately, if only because of how people already take that misleading label and ride it hard.

                    we’re not talking about true intelligence, but automation of task work that currently is done by paid human employees.

                    Valid discussion for sure, and I wish it could be pried away from the marketing bullshit, because it’s really misleading a lot of people, including otherwise educated people who should know better.

            • impartial_fanboy [he/him]@hexbear.net · 3 months ago

              Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they do it, instead of just nodding along to what you already want to believe from people who have blinders for anything outside of their field (computers, in this case).

              Well, first, brains aren’t the only kind of intelligent biological system, and they aren’t actually trying to recreate the human brain 1:1, or any other brain for that matter; that’s just marketing. The generative side of LLMs is what gets the focus in the media, but it’s really not the most scientifically interesting part, nor what will actually change that much, all things considered.

              These systems are absolutely fantastic at finding real patterns in chaotic systems. That’s where the potential lies.

              It’s like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.

              More like trying to go to the moon with a Civil War-era rocket; it is early days yet. But progress is insanely quick.

              • UlyssesT [he/him]@hexbear.net · 3 months ago

                These systems are absolutely fantastic at finding real patterns in chaotic systems. That’s where the potential lies.

                No arguments there; my issue is the marketing bullshit that wants to call them 1:1 “artificial intelligence,” which is an insult to and a dismissal of actual ongoing artificial intelligence research projects.

                More like trying to go to the moon with a Civil War-era rocket; it is early days yet. But progress is insanely quick.

                My metaphor was heavy-handed, I know. Maybe I should have said it’s like trying to fire a bullet at the moon and just expecting more and more gunpowder to do the trick, instead of considering a different approach using chemical propulsion.