Honestly, an AI firm being salty that someone has potentially taken their work, “distilled” it, and sold it on feels hilariously hypocritical.

Not like they’ve taken the writings, pictures, edits and videos of others, “distilled” them, and created something new from them.

  • MeThisGuy@feddit.nl · 21 hours ago

    Thx for your insight, very insightful!

    So a question: where do you see this AI heading? Is it just chatbots for customer service, fully functional computer programming, or even fully functional 3D printing and CNC programs from just a few inputs? (For example: here’s a 3D model I’m uploading that I need made on this particular machine with this material; now make me a program.)

    • brucethemoose@lemmy.world · 21 hours ago (edited)

      Depends what you mean by “AI”

      Generative models as you know them are pretty much all transformers, and there are already many hacks to let them ingest images, video, sound/music, and even other formats. I believe there are some dedicated 3D models out there, as well as some experiments with “byte-level” LLMs that can theoretically take any data format.

      But there are fundamental limitations, like the very long context you’d need to ingest a 3D model, which is inefficient. The entities that can afford to train the best models are “conservative” and tend to shy away from testing exotic implementations, presumably because they might fail.
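
      To put a rough number on the “long context” point: if you naively serialized a mesh as text (OBJ-style lines), even a modest model would blow past typical context windows. A purely illustrative back-of-envelope estimate, using made-up per-line character counts and the common ~4 characters per token rule of thumb:

      ```python
      # Back-of-envelope sketch with assumed numbers, not a real tokenizer:
      # an OBJ vertex line "v x y z" is ~30 characters, a face line "f a b c" ~15.
      def mesh_token_estimate(n_vertices, n_faces, chars_per_token=4):
          chars = n_vertices * 30 + n_faces * 15
          return chars / chars_per_token

      # A modest 50k-vertex / 100k-face mesh already needs roughly 750k tokens:
      print(f"{mesh_token_estimate(50_000, 100_000):,.0f} tokens")
      ```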

      Some seemingly “solvable” problems, like the repetition issues you encounter with programming, haven’t had potential solutions adopted either, and the workaround in use (literally randomizing the output) makes them fundamentally unreliable. LLMs are great assistants, but you can never fully trust them as-is.
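
      That “randomizing” bit is just sampling: the model outputs a probability distribution over the next token, and any temperature above zero means a literal random draw from it, so the same prompt can produce different answers on each run. A minimal sketch of the idea (illustrative only, not any vendor’s actual code):

      ```python
      import numpy as np

      def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng()):
          """Pick the next token id by sampling from a softmax over the logits."""
          if temperature == 0:
              return int(np.argmax(logits))            # greedy: deterministic
          scaled = np.asarray(logits) / temperature    # <1 sharpens, >1 flattens the distribution
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          return int(rng.choice(len(probs), p=probs))  # random draw: non-deterministic

      # Same logits, possibly different picks on each call:
      logits = [2.0, 1.9, 0.5]
      print(sample_next_token(logits), sample_next_token(logits))
      ```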

      What I’m getting at is that everything you said is theoretically possible, but the entities with the purse strings are relatively conservative and tend to pursue profitable pure-text performance instead. So I bet they will remain “interns” and “assistants” until there’s a more fundamental architecture shift, maybe something that learns and error-corrects during usage instead of being so static.


      And as stupid as this sounds, another problem is packaging. There are some incredible models that take media or even 3D as input, for instance… but they are all janky, half-functional Python repos researchers threw up before moving on. There isn’t much integration or user-friendliness in AI land.

      • MeThisGuy@feddit.nl · 21 hours ago

        I suppose you are right… they are “learning” models after all.
        I just think of the progress with slicers, dynamic infill, computational G-code output for CNC, and all the possibilities thereof. There are just so many variables (seemingly infinite), but the same is true of LLMs, so maybe there is hope.

        • brucethemoose@lemmy.world · 18 hours ago

          Basically, the world is waiting for the Nvidia monopoly to break and training costs to come down; then we will see…