alternatively, the delusions of grandeur required to think your opinion is more reliable than that of many of the leaders in the field
they’re not saying that an LLM will be that thing; they’re saying that in the next 30 years, we could have a different kind of model - we already have mixture of experts models, which mirror a lot of how our own brain processes information
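to make the routing idea concrete, here's a toy sparse mixture-of-experts forward pass in numpy - all the names, sizes, and the linear experts themselves are made up for the sketch, real MoE layers (like in Mixtral) sit inside transformer blocks, but the gate-then-combine shape is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# toy setup: 4 "experts", each just a small linear map, plus a gating network
d_in, d_out, n_experts = 8, 4, 4
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate = rng.normal(size=(d_in, n_experts))

def moe_forward(x, top_k=2):
    # the gate scores every expert for this particular input
    scores = softmax(x @ gate)
    top = np.argsort(scores)[-top_k:]          # route to only the top-k experts
    weights = scores[top] / scores[top].sum()  # renormalise over the chosen ones
    # weighted combination of just the selected experts' outputs -
    # most experts never run, which is the "sparse" part
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=d_in))
print(y.shape)  # (4,)
```

the loose analogy to the brain is that different inputs light up different specialised subnetworks rather than the whole model at once.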
once we get a model that is reliably able to improve itself (and that’s, again, not so different from adversarial training, which we already do, plus an MLP to create and “join” the experts together), then things could take off very quickly
nobody is saying that LLMs will become AGI, but they’re saying that the core building blocks are theoretically there already, and it may only take a couple of breakthroughs in how things are wired for a really fast explosion
I know the infinite conversation has only gotten better and better! 🤪