The delusions of grandeur required to think your glorified auto complete is going to turn into a robot god is unreal. Just wish they’d quit boiling the planet.
Oh man 100% this.
A little while ago there was a thread about what people are actually using LLMs for. The best answer was that it can be used to soften language in emails. FFS.
alternatively, the delusions of grandeur required to think your opinion is more reliable than that of many of the leaders in the field
they’re not saying that LLMs will be that thing; they’re saying that in the next 30 years we could have a different kind of model - we already have mixture-of-experts models, and that mirrors a lot of how our own brain processes information

once we get a model that is reliably able to improve itself (and that’s, again, not so different from the adversarial training we already do, plus an MLP to create and “join” the experts together - rough sketch of the idea below) then things could take off very quickly
nobody is saying that LLMs will become AGI, but they’re saying that the core building blocks are theoretically there already, and it may only take a couple of breakthroughs in how things are wired for a really fast explosion
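For what it's worth, here's a toy sketch of the mixture-of-experts idea being described: a small gating MLP scores a handful of expert networks and the layer's output is their weighted blend. All the names, sizes, and numbers here are made up purely for illustration - this is not anyone's actual model, just the shape of the mechanism.

```python
# Toy mixture-of-experts sketch (illustrative only, not a real model).
# A gating network looks at the input, scores each expert, and the
# layer output is the score-weighted blend of the expert outputs.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ToyExpert:
    """One expert: a tiny two-layer ReLU MLP."""
    def __init__(self, d_in, d_hidden, d_out):
        self.w1 = rng.normal(0, 0.1, (d_in, d_hidden))
        self.w2 = rng.normal(0, 0.1, (d_hidden, d_out))

    def __call__(self, x):
        return np.maximum(x @ self.w1, 0) @ self.w2

class ToyMoE:
    """Gating weights decide how much each expert contributes per input."""
    def __init__(self, d_in, d_out, n_experts=4):
        self.experts = [ToyExpert(d_in, 32, d_out) for _ in range(n_experts)]
        self.gate = rng.normal(0, 0.1, (d_in, n_experts))  # the "joining" MLP

    def __call__(self, x):
        weights = softmax(x @ self.gate)               # (batch, n_experts)
        outs = np.stack([e(x) for e in self.experts])  # (n_experts, batch, d_out)
        return np.einsum("be,ebd->bd", weights, outs)  # weighted blend

x = rng.normal(size=(2, 16))   # fake batch of 2 inputs
print(ToyMoE(16, 8)(x).shape)  # -> (2, 8)
```

Real systems route each token to only a few experts instead of blending all of them, but the gate-then-combine structure is the same basic idea.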
I know the infinite conversation has only gotten better and better! 🤪
I also wish they’d put up or shut up, goddamn lol. Hopefully DeepSeek has lit some fires under some asses 🍑🔥
I love how Jon Stewart put it: AI is losing its job to AI.
I was super disappointed with his take this week in general though (which I see is reflected in the YouTube comments).
Context: https://www.youtube.com/watch?v=Byg8VZdKK88