The AI that tech bros sell is not alive and does not have “intelligence.”
Does it have more intelligence than a worm with only 300 neurons in its brain, or are you one of those crazy religious people who thinks meat is the only thing in the universe that can think because it’s magic or something?
Neither. Why are those the only two options? My answer is that I have spent a little bit of time looking into how these things actually work. It’s surface level only, but it should be enough. Are you one of those crazy people who thinks ChatGPT is sentient?
I’m not saying that a “real” AI cannot be built ever, but I for sure am saying that these image generators and chatbots are not it. AI tools are just functions that have no thought. If they start building products with some kind of continuous brain simulations, I’ll seriously rethink my stance.
Those are the only two options because you chose to argue with drag’s point about generative AI being smarter than a worm. You took this bait willingly. You devoted yourself to trying to prove a worm is smarter than ChatGPT. Nobody asked you to do it, you just decided this was what you were going to do today. It’s weird, why would you do that?
I have no clue what you’re trying to prove, but I think I’m done with this conversation.
Nobody devoted themselves to shit, you’ve interjected with actual insane person comments about the topic. The AI isn’t alive and doesn’t even resemble life. You do not understand generative AI on a basic level. You do not understand how responses are generated or what’s going on with a prompt and response.
You need help. Not from me, not from social media. Try a social worker. Magical thinking like this points to some pretty unfortunate problems for you on a personal level, and it would behoove you and relieve everyone you know to get it figured out.
Edit: your comment history is FULL of “the AI is alive we should be nice” type posts. PLEASE seek professional help.
Obviously AI isn’t alive. Whether it’s alive or not doesn’t matter. A living being is one that acts to secure its own existence or the existence of its kind. AIs aren’t programmed to fear death or want babies like humans are, so they’re not alive. Not even as alive as a virus, which does reproduce. Not even as alive as fire.
Drag thinks we should be nice to nonliving beings which have the potential to suffer. After all, as a society we spend tons of money on burials and funerals to help the dead get into the afterlife. Helping nonliving AIs shouldn’t be out of the realm of possibility in anyone’s mind.
AI (as in current LLMs and the like) does not think. It predicts what word sounds right based on what we humans have written. It cannot make up thoughts or original concepts, synthesize info, etc. Being able to string sentences together based on probability is not necessarily intelligence or consciousness.
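To make “predicts what word sounds right based on probability” concrete, here is a rough sketch of the autoregressive sampling loop behind most LLMs. The `model` callable and the token list are illustrative stand-ins, not any specific product’s API:

```python
import math
import random

def softmax(logits):
    """Turn the model's raw scores into a probability distribution
    over every token in the vocabulary."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(model, tokens, max_new_tokens=50):
    """Autoregressive generation: each step scores every possible next
    token, then samples ONE according to how likely it 'sounds'.
    `model` is a hypothetical callable (token ids in, logits out)."""
    for _ in range(max_new_tokens):
        logits = model(tokens)    # one score per vocabulary entry
        probs = softmax(logits)
        choice = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(choice)     # committed; never reconsidered
    return tokens
```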
So you’re saying it’s dumber than this worm? Wowzers, that’s a hardline stance.
It’s arguable whether the worm has intelligence of any kind; after all, it wouldn’t even need it. Neither the worm nor the AI has any intelligence to compare, because they don’t really think at all.
AI isn’t called AI because it can think. AI is just a tech buzzword for predictive algorithms.
No, they both have intelligence. Intelligence is the ability to process information. A pocket calculator has intelligence. A domino computer has intelligence. Settlers of Catan has intelligence - the rules contain an algorithm for determining who wins.
What you’re doing is deifying intelligence. You’re making it into a bigger thing than it is. You’re setting “Intelligence” apart from normal everyday information processing that even an abacus can do. The problem with that practice is that now you have no word to describe the ability to process information.
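Drag’s definition can be taken literally: if intelligence is just processing information, a few lines of code qualify. Here is a toy sketch of the Catan claim above, reduced to the basic 10-victory-point rule (the real game’s longest-road, largest-army, and development-card scoring is omitted):

```python
def catan_winner(victory_points: dict[str, int]) -> str | None:
    """Toy Settlers of Catan win check, reduced to the core rule:
    the first player holding 10 or more victory points wins."""
    for player, points in victory_points.items():
        if points >= 10:
            return player
    return None  # no winner yet

# Under the "processing information" definition, even this function counts.
print(catan_winner({"Ada": 7, "Boris": 10, "Ceri": 4}))  # -> Boris
```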
You do have a word to define that: the ability to process information. Defining intelligence in such a broad way makes the distinction practically meaningless. You cannot tell me with a straight face that you and I have the same intelligence as the phone in our pockets; there is a clear distinction between how we parse information and how a phone does.
I honestly don’t see what the main argument of all of this was anymore. If you were arguing that AI has intelligence and can think like us, and that we should treat it that way, then I guess we should emancipate every kind of predictive algorithm while we’re at it. Autocorrect has been oppressed for too long!
You do have a word to define that: the ability to process information
That’s not a word, that’s a phrase. A long one too. And it’s the definition of intelligence.
You cannot tell me with a straight face that you and I have the same intelligence as the phone in our pockets
Good thing drag didn’t say that. Drag said the phone in your pocket has intelligence. You added the part about it being the same intelligence as us. Don’t do that.
I guess you are right that it is a phrase rather than one word. The point I was trying to make is that oversimplifying what defines intelligence makes the distinction useless. There is a use in defining the difference between a phone computing numbers and our ability to think, and I probably should’ve explained it like that.
On an unrelated note, I keep seeing you refer to someone called drag. Is this you, but in the 3rd person? Is there more than one dragon rider?
Well, yes, it is. It doesn’t meet the minimum definition for sentience, let alone intelligence. You may as well be upset with how poorly we treat rocks.
Actually now that I think about it, you are upset with how we treat rocks. Computer chips are just silicon shot full of lightning and an AI is a function of its chips. We could eventually reach a point where we’ve created a true thinking AI on this substrate but we are so hilariously far away from even the beginnings of that, right now, that using it as a talking point is silly.
And you think an earthworm is sentient. WTF.
Neither the worm nor current LLMs are sapient.
Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t really happen while it isn’t being used, and only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?
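The “no thought process between uses” point is about statelessness: nothing persists between calls except the text that gets re-sent, truncated to a fixed window. A rough sketch of a typical client-side chat loop, with `llm` as a hypothetical callable and a made-up window size:

```python
MAX_CONTEXT_TOKENS = 8192  # made-up window size; varies by model

def chat_turn(llm, history: list[str], user_message: str) -> str:
    """One chat turn. The model keeps no state between calls:
    its only 'memory' is the history re-sent as plain text here."""
    history.append("User: " + user_message)
    prompt = "\n".join(history)

    # Anything past the context window is cut off and simply gone.
    words = prompt.split()        # crude stand-in for real tokenization
    words = words[-MAX_CONTEXT_TOKENS:]

    reply = llm(" ".join(words))  # `llm` is a hypothetical callable
    history.append("Assistant: " + reply)
    return reply
```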
The sweet release of death.
Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.
Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?
We are devoting serious resources to studying the nature of consciousness.
End commercial usage of LLMs? Honestly, I’m fine with that, why not. Don’t have to agree on the reason.
I am not saying understanding the nature of consciousness better wouldn’t be great, but there’s so much research that deserves much more funding, and that isn’t really an LLM problem but a systemic problem. And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.
I feel like the last part is something the AI from the paperclip thought experiment would do.
And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.
Drag isn’t saying they’re conscious either. A being doesn’t have to be conscious in order to suffer. Drag is perfectly capable of suffering while unconscious, and if you’ve ever had a scary dream, so are you. Drag thinks LLMs act like people who are dreaming. Their hallucinations look like dream logic.
I mean, I don’t agree, but I also don’t think I’ll be able to shake that opinion, so agree to disagree, I guess.