• ArmokGoB@lemmy.dbzer0.com · 10 months ago

    In my experience as a game designer, the code that LLMs spit out is pretty shit. It won’t even compile half the time, and when it does, it won’t do what you want without significant changes.

    • DSTGU@sopuli.xyz · edited · 10 months ago

      The correct usage of LLMs in coding, imo, is one use case at a time, building up to what you need from scratch. It takes skill: talking to the AI so it gives you what you want, knowing how to build up to it, reading the code it spits out so you know when it goes south, and actually knowing how to assemble the bigger-picture software from little pieces. But if you are an intermediate dev who is stuck on something, it is a great help.

      That, or for rubber-ducky debugging. It's also great at that.

      • jkrtn@lemmy.ml · 10 months ago

        Technically it is, but I agree that is imprecise and nobody would say so IRL. Unless they are being a pedantic nerd, like I am right now.

      • Traister101@lemmy.today · 9 months ago

        You should refine your thoughts more instead of dumping a stream of consciousness on people.

        Essentially, what this stream of consciousness boils down to is "Wouldn't it be neat if AI generated all the content in the game you're playing on the fly?" Would it be neat? I guess so, but I find it incredibly unappealing, much like how AI art, stories, and now video are unappealing. There's no creativity involved. There's no meaning to any of it. Sentient AI could probably have creativity, but what people like you who get overly excited about this stuff don't seem to understand is how fundamentally limited our AI actually is right now. LLMs are basically among the most advanced AI things we have, and yet all they do is predict text. They have no knowledge, no capacity for learning. They're very advanced autocorrect.

        We’ve seen this kind of hype with crypto, with NFTs, and with Metaverse bullshit. You should take a step back and understand what we currently have, and how incredibly far away the things that have you excited actually are.

      • HeavyDogFeet@lemmy.world · 9 months ago

        I don’t mean to be dismissive of your entire train of thought (I can’t follow a lot of it, probably because I’m not a dev and not familiar with many of the concepts you’re talking about), but all the things you’ve described that I can understand would require these tools to be a fuckload better, on an order we haven’t even begun to approach, in order not to be super predictable.

        It’s all wonderful in theory, but we’re not even close to what would be needed to even half-ass this stuff.