• DamarcusArt@lemmygrad.ml
    3 months ago

    Haven’t read the article, but I’m guessing this new model enables them to do something computers could do 20 years ago, only far, far less efficiently.

    • gay_king_prince_charles [she/her, he/him]@hexbear.net (OP)
      2 months ago

      To me it seems like they added a preprocessor that can choose to tokenize individual letters or hand the request off to other systems instead of relying on the LLM alone. That makes it far better at logical reasoning and math from natural-language input. The achievement isn't counting the R's in strawberry; it's working around the fundamental limitations of LLMs that make that task difficult in the first place. A rough sketch of what I mean is below.
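
      To be clear, this is just my guess at the architecture, not anything from the article. A minimal sketch of that kind of routing might look like this, with every name here being hypothetical: deterministic handlers get first crack at the prompt, and only unmatched prompts fall through to the normal token-level model.

      ```python
      # Hypothetical sketch of a preprocessor/router, not the actual system
      # described in the article. Deterministic handlers run first; the LLM
      # is only used as a fallback.
      import operator
      import re


      def count_letter(prompt: str) -> str | None:
          # Handle prompts like "how many r's are in strawberry" at the
          # character level instead of the token level.
          m = re.search(r"how many (\w)'?s?.* in (\w+)", prompt, re.IGNORECASE)
          if m:
              letter, word = m.group(1).lower(), m.group(2).lower()
              return f"'{word}' contains {word.count(letter)} '{letter}'"
          return None


      def arithmetic(prompt: str) -> str | None:
          # Handle prompts like "what is 17 * 23" with real arithmetic.
          m = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", prompt)
          if m:
              a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
              ops = {"+": operator.add, "-": operator.sub,
                     "*": operator.mul, "/": operator.truediv}
              return str(ops[op](a, b))
          return None


      def call_llm(prompt: str) -> str:
          # Placeholder for the ordinary token-level model.
          return f"[LLM response to: {prompt}]"


      def route(prompt: str) -> str:
          # Try each deterministic handler; fall back to the LLM if none match.
          for handler in (count_letter, arithmetic):
              result = handler(prompt)
              if result is not None:
                  return result
          return call_llm(prompt)


      print(route("How many r's are in strawberry?"))   # '3'
      print(route("What is 17 * 23?"))                  # '391'
      print(route("Write me a poem about strawberries."))  # falls through
      ```

      The point of the sketch is only the routing: the questions LLMs are structurally bad at (sub-token counting, exact arithmetic) never reach the LLM at all.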