it is fucking priceless that an innovation that contained such simplicities as “don’t use 32-bit weights when tokenizing petabytes of data” and “compress your hash tables” sent the stock exchange into ‘the west has fallen’ mode. I don’t intend to take away from that, it’s so fucking funny
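For anyone who wants the arithmetic behind the "don't use 32-bit weights" jab, here's a back-of-envelope sketch. The 70B parameter count is a made-up round number for illustration, not any specific model, and real training setups mix precisions per-tensor rather than picking one globally:

```python
# Rough memory needed just to hold the weights at different precisions.
# Dropping from fp32 to fp8 cuts the footprint by about 4x before you
# even touch activations, optimizer state, or those hash tables.
params = 70e9  # hypothetical 70B-parameter model

for name, bytes_per_weight in [("fp32", 4), ("fp16/bf16", 2), ("fp8", 1)]:
    gib = params * bytes_per_weight / 2**30
    print(f"{name}: {gib:.0f} GiB for the weights alone")
```

Same trick at a hilariously larger scale, which is roughly the point of the post above.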
This is not the rights issue, this is not the labor issue, this is not the merits issue, this is not even the philosophical issue. This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.
I am not saying this is happening on this forum, or even that there are tendencies close to this here, but I preemptively want to make sure it gets across because it fucked me up for a good bit. Through Late 2023–Early 2024 I found myself leaning into both AI images for character conceptualization and AI coding for my general workflow. I do not recommend this in the slightest.
For the former, I found in retrospect that the AI image generation reified elements into the characters that I did not intend and later regretted. For the latter, it essentially kneecapped my ability to produce code for myself until I began to wean off of it. I am a college student. I was in multiple classes where I was supposed to be actively learning these things. Deferring to AI essentially nullified that while also regressing my abilities. If you don't keep yourself sharp, you will go dull.
If you don’t mind that, or don’t feel it is personally worth it to learn these skills beyond the very basics and shallows, go ahead; that’s a different conversation, and this one does not apply to you. I just want to warn those who did not develop their position on AI beyond “the most annoying people in the world are in charge of it and/or pushing it” (a position that, when deployed by otherwise-knowledgeable communists, is correct 95% of the time) that this is something you will have to be cognizant of. The brain responds to the unknowable cube by deferring to it. Stay vigilant.
I’ve run some underwhelming local LLMs and done a bit of playing with the commercial offerings.
I agree with this post. My experiments are on hold, though I’m curious to have a poke around DeepSeek’s stuff just to get an idea of how it behaves.
I am most concerned with next-generation devices that come with this stuff built in. There’s a reactionary sinophobe on youtube who produced a video with some pretty interesting talking points: since the goal is to have these “AI assistants” observe basically everything you do with your device (and they are black boxes that rely on cloud-hosted infrastructure), this effectively negates E2E encryption. I am convinced by these arguments, and in that respect the future looks particularly bleak. A wrongthink censor that can read all your inputs before you’ve even sent them and can flag you for closer surveillance and logging, combined with the three-letter agencies really “chilling out” about e.g. Apple’s refusal to assist in decrypting iPhones: it all looks quite fucked.
There are obviously some use cases where LLMs are sort of unobjectionable, but even then, as OP points out, we often ignore the way our tools shape our minds. People using them as surrogates for human interaction etc are a particularly sad case.
Even if you accept the (flawed) premise that these machines contain a spark of consciousness, what does it say about us that we would spin up one-time, single-use minds to exploit for a labor task and then terminate them? I don’t have a solid analysis but it smells bad to me.
Also, China’s efforts effectively represent a more industrial-scale iteration of what the independent hacker and open-source communities have been doing anyway: proving that the moat doesn’t really exist, and that continuing to use brute force (scale) to make these tools “better” is inefficient and tunnel-visioned.
Between this and the links shared with me recently about China’s space efforts, I am simply left disappointed that we remain in competition and opposition to more than half of the world when cooperation could have saved us a lot of time, energy, water, etc. It’s sad and a shame.
I cherish my coding ability. I don’t mind playing with an LLM to generate some boilerplate to have a look at, but the idea that people who cannot even assess the function of the generated code are putting this stuff into production is really sad. We haven’t exactly solved the halting problem yet, have we? There’s no real way for these machines to prove that arbitrary code does the task it is intended to do without side effects or corner cases that fail; in general that isn’t just NP-hard, it’s undecidable, and we continue to ignore that fact.
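To make the "corner cases that fail" worry concrete, here's a hypothetical sketch of the kind of plausible-looking code an assistant might emit: it passes the obvious spot checks and still gets a corner case wrong (the leap-year rule's divisible-by-400 exception). The function and its name are mine, purely for illustration:

```python
import calendar

def is_leap_year_sloppy(year):
    # Looks reasonable, but silently omits the divisible-by-400 exception.
    return year % 4 == 0 and year % 100 != 0

# Obvious spot checks all pass...
assert is_leap_year_sloppy(2024) is True
assert is_leap_year_sloppy(2023) is False
assert is_leap_year_sloppy(1900) is False

# ...but the corner case is wrong: 2000 WAS a leap year.
print(is_leap_year_sloppy(2000))   # the sloppy answer
print(calendar.isleap(2000))       # the stdlib's correct answer
```

If you can't read the code, you can't catch this, and no amount of "the tests pass" changes that.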
The hype driving this is clear startup bro slick talk grifting shit. Yes it’s impressive that we can build these things but they are being misapplied and deferred to as authorities on topics by people who consider themselves to be otherwise Very Smart People. It’s… in a word… pathetic.
The gigawatts of wasted electricity :(
We can only laugh or we’d never stop crying.
I was surprised the response wasn’t “okay, China made this on 1/50 the budget, so if we do what they did but throw double our budget at it, we can make something 100 times better, and we’ll be so far advanced that we’ll be opening Walmarts on Ganymede next spring, we just need more Quadros, bro”