https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed using them could create more work down the line, because of the amount of fact-checking they require. If that’s the case, then the purported upsides of the technology — cost-cutting and time-saving — are seriously called into question.

  • soupermen [none/use name]@hexbear.net · 3 months ago

    Okay. I am under no illusion that current technology is anywhere near replicating digital brains. I don’t think that’s what QuillcrestFalconer or DPRK_Chopra were saying either. When we say “replace workers” we mean “replace the functions that those workers do for their employers”. We’re not talking about making a copy of your coworker Bob, but about making a program that does many of the tasks currently assigned to Bob in a manner that isn’t too much worse than the real guy (from the warped perspective of management and shareholders, of course), with anything the machine can’t do delegated to someone else who gets paid a pittance. That’s what we’re talking about, nothing about recreating human intellects.

    I put the term AI in scare quotes in my first comment because I too am well aware that it’s a misnomer. But it’s the term that everyone knows this technology by (via marketing and such, like you said), so it’s easy to fall back on that term. LLM, or “AI” in scare quotes, I don’t think the specific term really matters in this context, because we’re not talking about true intelligence, but automation of task work that currently is done by paid human employees.

    • UlyssesT [he/him]@hexbear.net · 3 months ago

      > I put the term AI in scare quotes in my first comment because I too am well aware that it’s a misnomer. But it’s the term that everyone knows this technology by (via marketing and such, like you said), so it’s easy to fall back on that term.

      My primary beef, and the main thrust of my argument, was exactly that: the primary triumph of “AI” is as a marketing term.

      It does a disservice to research and development of generalized artificial intelligence (which I hope won’t be such a fucking massive waste of resources and such a massive producer of additional carbon waste and other pollution) by jumping the gun and prematurely declaring that “AI” is already here.

      > I don’t think the specific term really matters in this context

      I think it does, unfortunately, if only because of how people already take that misleading label and ride it hard.

      > we’re not talking about true intelligence, but automation of task work that currently is done by paid human employees.

      Valid discussion for sure, and I wish it could be pried away from the marketing bullshit, because it’s really misleading a lot of people, including otherwise educated people who should know better.