I found that idea interesting. Will we consider it the norm in the future to have a “firewall” layer between news and ourselves?
I once wrote a short story where the protagonist was receiving news of the death of a friend, but it was intercepted by their AI assistant, which said: “when you have time, there is some emotional news that does not require urgent action but that you will need to digest”. I feel it could become the norm.
EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don’t think he is talking about politics there, but it applies quite a bit.
EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which are becoming a plague on Twitter due to its new incentives. It is about the difference between presenting news in a neutral way and presenting it as “incredibly atrocious crime done to CHILDREN and you are a monster for not caring!”. The second one does feel a lot like an exploit of emotional backdoors, in my opinion.
Yea, no thanks. I don’t want things filtered based on what someone else thinks I should see.
isn’t that what the upvote/downvote buttons are for? although to be fair, i’d much rather the people of lemmy decide which things are good and interesting than some “algorithm”
There’s a real risk to this belief.
There are elements of lemmy who use votes to manipulate which ideas appear popular, with the intention of manipulating discourse rather than having open discussions.
yeah. you’re right.
it’s not like i blindly trust the votes to tell me what’s right and wrong, but they still influence my thoughts. i could just sort by new, but i feel like that’s almost as easy to manipulate.
i guess it comes back to the topic of the post. where and how i get my information is always going to affect me.
i’m sure other platforms are no better than lemmy with manipulating content, but maybe for different reasons. i just have to choose the right places to spend my time.
Yeah, this is an “unpopular opinion”, but I don’t believe the lemmyverse in its current form is sustainable for this reason.
Instances federate with everyone by default. It’s only when instances are really egregious that admins will defederate from them.
Sooner or later Lemmy will present more of a target for state actors wishing to stoke unrest and such. At that point the only redress will be for admins to defederate from other instances by default, and only federate with those whose moderation policies align with their own.
You might say, the lemmyverse will shatter.
I don’t think that’s necessarily a bad thing.
End rant.
I don’t think they are talking about an app.
What if it’s based on what you think you should see?
Either it’s you deciding as you see it (i.e. there is no filter), or it’s past you who’s deciding, in which case it’s a different person. I’ve grown mentally and emotionally as I’ve gotten older, and I certainly don’t want me-from-10-years-ago to be in control of what me-right-now is even allowed to see.
Or you can update it when you see fit, or go periods without filters to ensure you are still seeing something approximating reality, or base it on people you know personally and currently trust, or half a dozen other things that aren’t coming to me off the top of my head. The point is that it’s less black and white than you’re painting it.
If you’re never allowed to see things you don’t like, how will you grow and change? If you never grow and change, why would you update your filters?
Then you should probably allow yourself to see some things you don’t like. I guess the answer lies somewhere in a middle ground where you both see things you don’t agree with and also filter out people known to spout untrue information or unnecessarily emotion-fueled sentiments? I don’t like genocide, but that doesn’t mean my options are fully head-in-the-sand or listen to non-stop Holocaust deniers…
Pretty close to exactly what we do right now, really, but supercharged for the fast-approaching (or already here) world of industrial-scale fake news.
Just like diet, some people prefer balancing food types and practicing moderation, and others overindulge on what makes them feel good in the moment.
Having food options tightly controlled would restrict personal liberty, but doing nothing and letting people choose will lead to bad outcomes.
The solution is to educate people on what kinds of choices are healthy and what are not, financially subsidize the healthy options so they are within reach to all, and only use law to restrict things that are explicitly harmful.
Mapping that back to news and media, I’d like to see public education promoting the value of a balanced media and news diet. Put more money into non-politically-aligned news organizations. Look closely at news orgs that knowingly peddle falsehoods and either bring libel charges against them or create new laws that address the public harm done by maliciously spreading misinformation.
But I’m no lawyer, so I don’t know how to do that last part without creating some form of tyranny.
Why would it be someone else? Why would someone assume it, especially here on lemmy?
Why wouldn’t I assume it? You think most people would willingly take such measures themselves?
We do every time we click on a link tagged NSFW or when we go see negative comments.
That really just reinforces my point. Most people aren’t setting that up themselves. The app defaults do that, i.e. someone/something else is making that determination. Sure, maybe you can still check out the post if you really want, but how many will do that? Can you change how it works? Depends on the app.
If people want to opt-in to it then I don’t really care. I mostly HATE being forced to opt-out of things though.
Well then we can argue about defaults, but in an open source app, I think the point is moot: anyone can make a new version with different defaults.
[some content is masked, change the settings if you want to see it or disable this warning] sounds like an acceptable default over almost anything filterable in my opinion.
That’s why I stick with platforms where hardline communist teenagers can curate what I’m exposed to.
That’s the only way.
Without wanting to be too aggressive, with only that quote to go on it sounds like that person wants to live in a safe zone where they’re never challenged, angered, made afraid, or have to reconsider their world view. That’s the very definition of an echo chamber. I don’t think you’re meant to live life experiencing only “approved” moments, even if you’re the one in charge of approving them. Frankly I don’t know how that would be possible without an insane amount of external control. You’d have to have someone/something else as a “wall” of sorts controlling your every experience or else how would things get reliably filtered?
I’d much prefer to teach people how to be resilient so they don’t have to be afraid of being exposed to the “wrong” ideas. I’d recommend things like learning what emotions mean and how to deal with them, coping/processing bad moments, introspection, how to get help, and how to check new ideas against your own ethics. E.g. if you read something and it makes you angry, what idea/experience is the anger telling you to protect yourself from and how does it match your morality? How do you express that anger in a reasonable and productive way? If it’s serious who do you call? And so on.
I see where you’re coming from, but if you look up Karpathy, you’ll probably come to a different conclusion.
He’s talking about wanting some system to filter out Tweets that “elicit emotion” or “nudge views”, comparing them to malware. I looked him up and see he’s a computer scientist, which explains the comparison to malware. I assume when he’s designing AI he tries to filter what inputs the model gets so as to achieve the desired results. AI acts on algorithms and prompts regardless of value/ethics and bad input = bad output, but I think human minds have much more capability to cope and assess value than modern AI. As such I still don’t like the idea of sanitizing intake because I believe in resilience and processing unpleasantness as opposed to stringent filters. What am I missing?
I don’t think you’re missing anything. Maybe you’re just taking his tweet more seriously or literally than he intended. To me, it’s simply an interesting perspective to consider tweets that are meant to influence your opinion as malware. Sure, somebody aware of the types of “bad input” that come in the form of misinformation campaigns, propaganda, or advertisement might not be (as) susceptible to them - but considering the average Twitter user, comparing this type of content to malware seems appropriate to me.
I think you are getting it wrong; I added a small edit for context. It is more about emotional distraction. I kinda feel like him: I want to remain informed, but please let me prepare a bit before telling me about civilians cut in pieces in a conflict, sandwiched between a funny cat video and a machine learning news item.
For the same reason we filter out porn or gore images from our feeds, highly emotional news should be filterable
I don’t think there’s anything wrong with taking a break from social media or news. There are days I don’t visit sites like Lemmy or when I do I only click non-news links because I’m not in the mood or already having a bad day. That’s different than filtering (as per Karpathy’s example) Tweets so that when you do engage it’s consistently a very curated, inoffensive, “safe” experience. Again, I only have the one post to go off of, but he specifically talks about wishing to avoid Tweets that “elicit emotions” or “nudge views” and compares those provocative messages to malware. As far as your point regarding blatantly sensationalist news, when I recognize it’s that kind of story I just stop reading/watching and that’s that.
I WANT to have my emotions elicited because I seek to be educated and don’t want to be complacent about things that should make me react. “Don’t know, don’t care” is how people go unrepresented or abused - e.g. almost no one reads about what Boko Haram is doing in Nigeria (thus it’s already “filtered out” by media), and so very little has been done in the 22 years they’ve been affecting millions of lives. I WANT to have my “views nudged” because I’m regularly checking my worldview to make sure it stays centered around my core ethics, and being challenged has prompted me to change bad stances before. Being exposed to objectionable content before and reassessing is also how I’ve learned to spot BS attempts to manipulate. It doesn’t matter how many times MAGA Tweets tell me that God is upset at drag queens and only Donald Trump can save the world because now I recognize ragebait when I see it. Having dealt with it before, no amount of exposure is going to make me believe their trash and knowing what is being said is useful for exposing and opposing harmful governmental policies/bad candidates (sometimes even helping deprogram others).
I’m putting this in its own response because it’s a less important addendum to my main points above and I don’t want to put everyone off with a single huge brick of text.
If just knowing bad news exists makes life difficult for someone, even if they don’t click the link, then I’d (respectfully, not as an insult) recommend learning coping techniques like meditation, diaphragmatic breathing, or cognitive behavioral therapy that can add resilience. I am NOT suggesting someone feeling like that is innately weak or flawed, but there are techniques to move the impact of knowing bad things are happening toward the manageable. If it’s still immediately and extremely distressing, there may be related past trauma that needs sorting out.
Physical analogy for social media breaks - I work out regularly. Even though it’s a healthy habit, I don’t work out every day because it’s tiring and that would make it unhealthy. When I do work out though I want it to be difficult because that’s how gains are made. So I’m not saying you or I need to batter ourselves with torturous news every day - breaks aren’t just ok they’re how you stay healthy. When I read the news though, I want the whole truth even if that truth has parts that are uncomfortable or challenge my worldview, and I also want to be experienced/trained enough to handle those emotions and thoughts.
That’s the point.
The information that’s upsetting has leaked around the existing mechanisms for preventing it from ending up in your view.
You’re supposed to be angry, not wish there was a better way to keep from seeing it.
I swear to god we got motherfuckers here who took the wrong message from the damn matrix.
Our mind is built on that “malware”. I think it’s more accurate to compare brain + knowledge to our immune system: the more samples you have, the better you are armed against mal-information.
I was thinking the same, you need to be exposed to some bullshit every now and then to give contrast and context to what you believe to be true
This sounds like the theories that were prevalent before germ theory, when surgeons and obstetricians would argue that washing their hands did no good for the bodies they worked on.
Immune systems still get sick and can be overwhelmed. There is a mental hygiene that needs to exist.
But that leaves out the psychological effects of long-term exposure to ideas. If you know for a fact that the earth is round, and for the next 50 years all the media you consume keeps telling you that the earth is flat, you will at some point start believing that (or at least become unsure).
Every piece of information you receive has some tiny effect on you.
Of course the comparison is vastly simplified.
Yes, and the simplification goes far enough to make it inadequate.
The real question then becomes: what would you trust to filter comments and information for you?
In the past, it was newspaper editors, TV news teams, journalists, and so on. Assuming we can’t have a return to form on that front, would it be down to some AI?
Why do people, especially here in the fediverse, immediately assume that the only way to do it is to give power of censorship to a third party?
Just have an optional, automatic, user-parameterized, auto-tagger and set parameters yourself for what you want to see.
Have a list of things that should receive trigger warnings. Group things by anger-inducing factors.
I’d love a way to filter things by actionability: for things I can get angry about but have little power to change, there is no need to give me more than a monthly update.
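To make the idea concrete, here is a minimal sketch of what such an optional, user-parameterized auto-tagger could look like. Every name, tag, and category below is hypothetical, and a real system would still need some classifier to produce the tags; only the routing stays under the user’s control:

```python
# Hypothetical sketch of a user-parameterized content router.
# Tags like "graphic-violence" are assumed to come from some upstream
# auto-tagger; the user only configures what happens to each tag.

from dataclasses import dataclass, field

@dataclass
class FilterPrefs:
    # Topics the user wants collapsed behind a trigger warning.
    warn_on: set = field(default_factory=lambda: {"graphic-violence"})
    # Topics the user cannot act on: batch them into a periodic digest.
    digest_only: set = field(default_factory=lambda: {"distant-conflict"})

def route_post(post_tags: set, prefs: FilterPrefs) -> str:
    """Decide how a tagged post is presented; the user stays in control."""
    if post_tags & prefs.warn_on:
        return "show-behind-warning"   # "[some content is masked...]" style
    if post_tags & prefs.digest_only:
        return "hold-for-digest"       # e.g. a monthly roundup
    return "show"

prefs = FilterPrefs()
print(route_post({"graphic-violence", "news"}, prefs))  # show-behind-warning
print(route_post({"distant-conflict"}, prefs))          # hold-for-digest
print(route_post({"cat-video"}, prefs))                 # show
```

Nothing here censors anything outright: masked posts are still one click away, and the digest bucket just defers them, which keeps the decision with the user rather than a third party.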
Because your “auto-tagger” is a third party and you have to trust it to filter stuff correctly.
How about no? You set it up with your parameters, it is optional and open source.
My mom, she always wants the best for me.
Easily better than all the other options.
Most recent Ezra Klein podcast was talking about the future of AI assistants helping us digest and curate the amount of information that comes at us each day. I thought that was a cool idea.
*Edit: create to curate
It makes a lot of sense. It also presents an opportunity to hand off such filtering to a more responsible entity/agency than media companies of the past. In the end, I sincerely hope we have a huge number of options rather than the same established players (FANG) as everything else right now.
I think the right approach would be to learn to deal with any kind of information, rather than to censor anything we might not like hearing.
yea, this should be the right approach, but how do you actually manage the topic?
We already have tons of filters in place trying to serve us information we are interested in, knowledgeable enough to digest, not spammy, in the correct language, not porn or gore, etc. He is just proposing another interesting dimension. For instance, I follow AI news and news about the Ukraine conflict, but I prefer to keep them separate and not be distracted by one while I get my fill of the other.
The only way I found with Twitter (and now Mastodon) to do it is to devote twitter only to tech news.
I don’t think he is proposing another dimension, but rather another scale. As you already said, we already filter the information that reaches us.
He seems to take this idea of filtering/censorship to an extreme. Where I see filtering mostly as a matter of convenience, he portrays information as a threat that people need to be protected from. He implies that being presented with information that challenges your world view is something bad, and I disagree with that.
I am not saying that filtering is bad. I too have blocked some communities here on Lemmy. I am saying that it is important not to put yourself in a bubble, where every opinion you see is one you agree with, and every news article confirms your beliefs.
Emotion != information
You can know that the Israeli-Palestinian conflict is going on without having pictures of maimed bodies in your news feed. I have even blocked people I agree with just because they could not stop spamming angrily about it. I also have a militant ecologist friend who thinks saving the planet requires pushing the most anxiety-inducing news as much as possible. Blocked.
I don’t think that blocking content that focuses on pathos locks us in a bubble; quite the opposite. Emotions block analysis.
Reminds me of Snow Crash by Nealyboi
I really think that, just as the 20th century saw the rise of basic hygiene practices, the 21st is seeing us put mental hygiene practices in place.
I think most people already have this firewall installed, and it’s working too well - they’re absorbing minimal information that contradicts their self-image or world view. :) Scammers just know how to bypass the firewall. :)
Sounds like we’re reinventing forum moderation and media literacy from first principles here.
Kind of, but the guy being a prominent LLM researcher, it hints at the possibility of doing this without inflicting the work on human moderators, and without suffering from having to design an apolitical structure for it.
Not really. An executable controlled by an attacker could likely “own” you. A toot, tweet, or comment cannot; it’s just an idea or thought that you can accept or reject.
We already distance ourselves from sources of always bad ideas. For example, we’re all here instead of on truth social.
Jokes on you, all of my posts are infohazards that make you breathe manually when you read them.
Reading, watching, and listening to anything is like this. You accept communications into your brain and sort it out there. It’s why people censor things, to shield others and/or to prevent the spread of certain ideas/concepts/information.
Misinformation, lies, scams, etc. function entirely by exploiting it.
Nah man, curl that shit into my bash and let me deal with it
deleted by creator
You forgot the sudo
deleted by creator
Yeah, op seems to think minds are weak and endlessly vulnerable. I don’t believe that, not about myself at least
Your mind is subject to cognitive biases that are extremely difficult to work around. For example, your statement is an example of egocentric bias.
All you need is content that takes advantage of a few of those biases and it’s straight in past your defences.
I am fairly armored intellectually, but emotionally I find it draining to be reminded that war is at my doorstep and that kids are dying gruesome deaths in conflicts I barely know about.
Yeah, I understand people are pretty flawed and vulnerable to some degree of manipulation. I just think the idea proposed in this post is not only an overreaction but also underestimates people’s ability to reject bullshit. We can’t always tell what’s bullshit, sure, but we don’t need to be treated like we’re too fragile to think for ourselves. Once that happens, we would literally become unable to do so.
I think you’re too optimistic as to how difficult it is to influence people. Just think of the various, obviously false, conspiracy theories that some people still believe. I think that for every person there is some piece of information/news, that is just believable enough without questioning it, that is going to nudge their opinion just ever so slightly. And with enough nudges, opinions can change.
You’re referring to fringe groups. There are a lot of them, but they’re also in the vast minority. But even so, treating adults like especially fragile children isn’t going to help
Yes, only fringe groups believe outlandish conspiracies, but it’s unrealistic to believe that most people, including you, can’t be influenced. Just think of ads or common misconceptions. Everyone is susceptible to this to some degree; no one can have their guard up 24/7, regardless of being a child or an adult. Having a “firewall” for everything isn’t a good solution, I’d say, but it’s not as if everybody is as resilient as you think.
I don’t think we’re actually disagreeing. I’m not actually saying people are super resilient. Just that we are at all, which the post appears to doubt.
We already have a firewall layer between outside information and ourselves, it’s called the ego, superego, our morals, ethics and comprehension of our membership in groups, our existing views and values. The sum of our experiences up till now!
Lay off the Stephenson and Gibson. Try some Tolstoy or Steinbeck.
Hüman brain just liek PC, me so smort.
It’s definitely an angle worth considering when we talk about how the weakest link in any security system is its human users. We’re not just “not immune” to propaganda, we’re ideological petri dishes filled with second-hand agar agar.
Perhaps we can establish some governmental office for truth that decides whether any shitpost can be posted without the sterilization and lobotomization of the poster
Or maybe some kind of “community value” score for people with the right thinking
Counterpoint: only allow elected governing bodies own or control media outlets, platforms, and critical communications infrastructure