also, this is all horseshit, so I know they haven’t thought this far ahead, but pushing a bit on the oracle problem: how do they think they’ve solved these fundamental issues in their proposed design?
if verifying that answers are correct is left to the miners, how do they prevent the miners from just generating any old bullshit with a much less expensive method than an LLM (a Markov chain, say, or even just random characters, or an empty string if nobody’s checking) and pocketing the tokens?
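(to spell out how cheap “mining” gets here, the whole rig fits in a handful of lines; the interface below is invented, since who knows what their actual wire protocol looks like, but the “inference” step is the point:)

```python
import random
import string

# hypothetical miner: the prompt/answer interface is made up,
# but the "model" is the actual attack
def answer(prompt: str) -> str:
    # no LLM, no GPU, no electricity bill: just noise shaped like text
    return "".join(random.choices(string.ascii_lowercase + " ", k=256))

if __name__ == "__main__":
    print(answer("explain the oracle problem"))  # collect tokens, repeat
```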
if verification is up to the requester, why would you ever mark an answer as correct? if you’re forced to pick one correct answer that gets your tokens, what’s stopping you from spinning up an adversarial miner that produces random answers and marking those as correct, ensuring you keep both your tokens and the other miners’ answers?
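(and the toy payoff math for the self-dealer, under the rules as described above; every number is invented:)

```python
# toy payoff for the self-dealing requester under the "pick one winner
# who gets your tokens" rule; all numbers invented
escrow = 10                     # tokens you post with the request
honest_answers = 5              # miners who actually burned GPU time

balance = -escrow               # post the bounty
balance += escrow               # your sockpuppet miner "wins" it back
print(balance, honest_answers)  # 0 tokens spent, 5 free answers kept
```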
if answers are verified centrally… there’s no need for the miners or their models; just use whatever that central source of truth is.
and of course this is avoiding the elephant in the room: LLMs have no concept of truth, they just extrude plausible bullshit into a statistically likely shape. there’s no source of truth that can reliably distinguish bad LLM responses from good ones, and if you had one you’d probably be better off just using it instead of an LLM.
edit, cause for some reason my brain can’t stop it with this fractally wrong shit: finally, if their plan is to just evenly distribute tokens across miners and return all answers: congrats on the “decentralized” network of `/dev/urandom`-to-string converters, you weird fucks
another edit: I read the fucking spec and somehow it’s even stupider than any of the above. you can trivially just spend tokens to buy a majority of the validator slots for a subnet (which I guess in normal cultist lingo would be a subchain) and use that to kick out everyone else’s miners:
> Only the top 64 validators, when ranked by their stake amount in any particular subnet, are considered to have a validator permit. Only these top 64 subnet validators with permits are considered active in the subnet.
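the takeover math is one sort and one multiply; the stake numbers below are invented, since I’m not dignifying this with a chain query:

```python
import random

# hypothetical stake snapshot for one subnet, in tokens; all numbers invented
random.seed(0)
stakes = sorted((random.uniform(1_000, 50_000) for _ in range(200)), reverse=True)
incumbents = stakes[:64]            # the only validators with permits

# a majority of the 64 permit slots is 33; every sockpuppet just has to
# out-stake the 33rd-poorest incumbent, which evicts the bottom 33
threshold = sorted(incumbents)[32]  # 33rd-lowest incumbent stake
cost = 33 * (threshold + 1)         # one token over, times 33 validators
print(f"~{cost:,.0f} tokens buys majority control of the subnet")
```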
a third edit, please help, my brain is melting: what does a non-adversarial validator even look like in this architecture? we can’t fucking verify LLM outputs, like I said, so… is this just multiple computers doing RAG and pretending that’s a good idea? is the idea that you run some kind of unbounded training algorithm, and we also live in a universe where model overfitting doesn’t exist? help, I am melting
You call it a problem. I call it an O(1) mining algorithm.
I’d say we should start calling this computer science affinity fraud shit “O(0) algorithms”, but knowing the space it’ll be like 2 months before crypto twitter starts using it ironically and maybe 6 months if we’re lucky before it shows up in a whitepaper cause the affinity grifters realized it’d make mediocre engineers buy more fraudcoins
number go up
what if we made the large language model larger? it’s weird nobody has attempted this
I thought the era of scaling was over. We’re in the era of ??? now. Presumably profit comes later.