Seems like researchers are getting uncomfortably good at mimicking consciousness, so much so that it’s beginning to make me question the deepest parts of my own brain. Perhaps the difference between AI and myself is that it is only prompted by outward stimuli? It seems as though that is what makes the intelligence “artificial”
Christ, even as I type this, my thought processes mimic, for example, DeepSeek’s exposed deep-think capabilities. Fuck, idk if I’ll be able to unsee it. Seems like the only thing we have on it right now is emotion, and even that seems to be in danger
My final takeaway from all of this is that we are in hell
“”“”“”“”““AI””“”“”“”“” is not in any conceivable way actually intelligent. “Generative” AI is just a very elaborate and extremely, extremely bloated reverse compression algorithm.
Think of how an image can be stored as a jpeg, which “compresses” the image’s information using various mathematical tricks, but in doing so causes the result to lose information. An approximation of the original image can be created from the compressed data, because the jpeg algorithm can extrapolate from the stored data to make a ‘guess’ at the data lost in compression (actually a mathematical process which, given the same input, produces the same result each time). If the image undergoes too much compression, you get the classic blocky jpeg artifact look, because too much information has been lost.
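To make the lossy part concrete, here’s a toy sketch in Python of the transform-and-quantize idea (a cartoon of the concept using scipy’s DCT, not the actual jpeg codec):

```python
# Toy sketch of jpeg-style lossy compression on one 8x8 pixel block.
# Core idea only: transform, quantize (throw information away),
# then deterministically reconstruct an approximation.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in pixel block

coeffs = dctn(block, norm="ortho")   # frequency-domain representation
q = 50.0                             # quantization step: bigger = more compression
quantized = np.round(coeffs / q)     # the lossy step: detail is discarded here
approx = idctn(quantized * q, norm="ortho")  # deterministic "guess" at the original

print(np.abs(block - approx).mean())  # nonzero: that information is gone for good
```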
But in theory, if you examined a bunch of compressed data, you could design an algorithm that creates “new” images by blending different pieces of compressed data together. The algorithm is still purely deterministic; there is no thinking or creativity going on, it’s just reassembling compressed data along strictly mathematically driven lines. If you then changed how much value (“weight”) it puts on different pieces of compressed data, and added a way to “prompt” it to fine-tune those values on the fly, you could design an algorithm that seemingly responds to your inputs in some organic fashion but is actually just retrieving data in totally mathematically specified ways. Add in billions and billions of pieces of stolen text to render down into compressed slop, store and run the algorithm’s “weights” on some of the largest data centers ever built using billions of watts of power, run it again and again while tuning those weights by trial and error until it spits out something resembling human writing, and you have an LLM.
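Here’s that “blending” idea as a crude sketch, reusing the toy DCT setup from above (this is just the analogy made concrete, not how any real generative model is actually built):

```python
# Toy sketch of "blending compressed data": a weighted mix of two
# blocks' coefficients yields a "new" image neither source contained,
# yet every step is plain deterministic arithmetic.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
img_a = rng.integers(0, 256, size=(8, 8)).astype(float)
img_b = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs_a = dctn(img_a, norm="ortho")
coeffs_b = dctn(img_b, norm="ortho")

weight = 0.7  # the "prompt" here is just a number that shifts the mix
blended = idctn(weight * coeffs_a + (1 - weight) * coeffs_b, norm="ortho")
# Same inputs + same weight -> byte-identical output, every single time.
```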
It seems to “generate” sentences by just spitting out a sequence of the mathematically most probable words and phrases in order, the way a jpeg algorithm assembles blocks of color. Nothing inside it is “creating” anything new; it’s just serving up chopped-up pieces of data, which were scraped from the real world in the first place. And any gaps in actual data caused by the compression, it papers over with a glue of most-likely approximations, which is how you get those classic totally absurd non-sequitur answers that are frequently posted, or how you end up with a family getting poisoned after relying on an AI-generated mushroom identification guide.
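You can see the skeleton of “most probable next word” in a few lines. Real LLMs use learned weights over long contexts and usually sample rather than always taking the top word, but the output step is the same kind of statistical lookup:

```python
# Minimal sketch of next-word generation: count which word followed
# which in a "scraped" corpus, then greedily emit the likeliest one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1  # bigram counts: crude "compressed" text data

word, output = "the", ["the"]
for _ in range(5):
    word = table[word].most_common(1)[0][0]  # always pick the likeliest next word
    output.append(word)

print(" ".join(output))  # "the cat sat on the cat": fluent-ish, zero understanding
```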
To be absolutely clear, this is not even in the category of objects that could exhibit sentience. A metal thermostat connected to a heater, which maintains the temperature of a room by expanding and shrinking enough to close or open an electrical circuit, is closer to an intelligent lifeform than this. Any single-celled organism is infinitely closer to sentience than this. It can drool out a slurry of text your brain perceives as human, but remember: 1. It’s decompressing an amalgamation of billions of real humans’ written text, and 2. Your brain perceives this -> :) <- as a face. It’s really easy to trick people into thinking inanimate things are sentient; humans invented gods to explain the seasons and the weather, after all.
The term “generative” is actually just a technical term from statistics: a “generative” model is one that models the distribution of the data itself, so new data points can be sampled from it.
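In that sense, “generating” is nothing mystical, it’s just sampling from a fitted model, e.g.:

```python
# "Generative" in the statistical sense: fit a model of the data
# distribution (here, a single Gaussian), then sample new points.
import numpy as np

data = np.array([1.0, 2.0, 2.5, 3.0, 4.0])
mu, sigma = data.mean(), data.std()  # fit the model's parameters

rng = np.random.default_rng(42)
print(rng.normal(mu, sigma, size=3))  # three freshly "generated" data points
```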