Honestly, an AI firm being salty that someone has potentially taken its work, “distilled” it, and is selling it on feels hilariously hypocritical.

Not like they’ve taken the writings, pictures, edits and videos of others, “distilled” them and created something new from it.

  • brucethemoose@lemmy.world · 1 day ago

    Thanks! I’m happy to answer questions too!

    I feel like one of the worst things OpenAI has encouraged is “LLM ignorance.” They want people to use their APIs without knowing how they work internally, and keep the user/dev as dumb as possible.

    But even just knowing the basics of what they’re doing is enlightening, and explains things like why they’re so bad at math and word counting (tokenization), why they mess up so “randomly” (sampling and their serial nature), why they repeat/loop (dumb sampling and bad training, but it’s complicated), or even basic things like the format they use to search for knowledge. Among many other things. They’re better tools, and less “AI bro hype tech,” when they aren’t a total black box.
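For instance, character-level questions are hard because the model never sees letters, only subword token IDs. Here’s a toy sketch of that idea (the vocabulary, IDs, and greedy longest-match rule are all made up for illustration, not a real tokenizer):

```python
# Toy illustration: an LLM sees token IDs, not letters. A word like
# "strawberry" may arrive as two subword pieces, so the model can't
# directly "look at" individual characters when asked to count them.
toy_vocab = {"straw": 101, "berry": 102, " count": 103}  # hypothetical IDs

def toy_tokenize(text, vocab):
    """Greedy longest-match split into known subword pieces."""
    pieces = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                pieces.append(piece)
                text = text[len(piece):]
                break
        else:
            # Unknown character falls back to its own token
            pieces.append(text[0])
            text = text[1:]
    return pieces

print(toy_tokenize("strawberry", toy_vocab))  # ['straw', 'berry']
```

Ask the model how many r’s are in “strawberry” and it’s really being asked about IDs 101 and 102, which is why it so often guesses.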

    • MeThisGuy@feddit.nl · 22 hours ago

      Thanks for the insight, very enlightening!

      So a question: Where do you see this AI heading? Is it just chatbots for customer service, fully functional computer programming, or even fully functional 3D printing and CNC programs with just a few inputs? (for example: here’s a 3D model upload that I need for this particular machine with this material, now make me a program)

      • brucethemoose@lemmy.world · 21 hours ago

        Depends on what you mean by “AI.”

        Generative models as you know them are pretty much all transformers, and there are already many hacks to let them ingest images, video, sound/music, and even other formats. I believe there are some dedicated 3D models out there, as well as some experiments with “byte-level” LLMs that can theoretically take any data format.

        But there are fundamental limitations, like the long context you’d need for 3D model ingestion, which is inefficient. The entities that can afford to train the best models are “conservative” and tend to shy away from exotic implementations, presumably because they might fail.

        Some seemingly “solvable” problems, like the repetition issues you hit when programming, haven’t had their potential solutions adopted either, and the fix actually in use (literally randomizing the output) makes them fundamentally unreliable. LLMs are great assistants, but you can never fully trust them as-is.
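That “randomizing the output” is temperature sampling: the next-token scores (logits) get scaled, turned into probabilities, and sampled from, so the same prompt can produce different tokens on different runs. A minimal sketch with made-up logits (the tokens and numbers are invented for illustration, not from any real model):

```python
import math
import random

# Hypothetical next-token logits for some code-completion prompt
logits = {"return": 2.0, "yield": 1.5, "pass": 0.1}

def sample(logits, temperature=0.8, rng=random):
    """Softmax over temperature-scaled logits, then draw one token."""
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Same "prompt", same "model" -- repeated draws can differ, which is
# why outputs feel randomly inconsistent run to run.
random.seed(0)
print([sample(logits) for _ in range(5)])
```

As temperature approaches zero this collapses to always picking the top token (greedy decoding), which is deterministic but makes the repetition/looping problem worse.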

        What I’m getting at is that everything you said is theoretically possible, but the entities with the purse strings are relatively conservative and tend to pursue profitable pure text performance instead. So I bet they will remain as “interns” and “assistants” until there’s a more fundamental architecture shift, maybe something that learns and error corrects during usage instead of being so static.


        And as stupid as this sounds, another problem is packaging. There are some incredible models that take media or even 3D as input, for instance… but they’re all janky, half-functional Python repos researchers threw up before moving on. There isn’t much integration or user-friendliness in AI land.

        • MeThisGuy@feddit.nl · 21 hours ago

          I suppose you’re right… they are “learning” models, after all.
          I just think of the progress with slicers, dynamic infill, computational G-code output for CNC, and all the possibilities thereof. There are just so many variables (seemingly infinite). But the same is true of LLMs, so maybe there’s hope.

          • brucethemoose@lemmy.world · 19 hours ago

            Basically, the world is waiting for the Nvidia monopoly to break and for training costs to come down; then we’ll see…