Summary

Australia has passed a groundbreaking ban on social media use for children under 16, the strictest of its kind globally.

Platforms like X, Facebook, Instagram, TikTok, Snapchat, and Reddit have one year to implement the age limit, with fines up to AU$50M for non-compliance.

Supporters cite mental health concerns, while critics argue the ban risks isolation for marginalized youth, lacks proper research, and excludes harmful platforms like 4chan.

Privacy concerns surround proposed age-verification methods. Opponents, including parents, scholars, and tech companies, argue the legislation is rushed and poorly designed, potentially exacerbating existing issues.

  • yeahiknow3@lemmings.world · 19 days ago

    First of all, fifteen-year-olds don’t give a shit about political “news.” Secondly, is your argument that Facebook shouldn’t be banned because something else just as bad as Facebook exists (which could also be banned)?

    • Voroxpete@sh.itjust.works · 19 days ago

      My point is this: unless you’re proposing simply banning teenagers from the Internet entirely (we’ll get to the problems with that in a moment), you now have the task of a) deciding which websites are age-restricted and b) enforcing that restriction.

      So you will have to constantly identify new “social media” sites, including any Mastodon instance, Lemmy instance, or plain old-school BBS that someone decides to spin up. Then you will have to force them to comply, which isn’t easy. By the time you do, a new site will have popped up, and users will have migrated there. Deciding what is or isn’t a “social media” site will also be a fun little legal challenge.

      The big sites, the ones with some degree of moderation, the ones that at least make basic efforts to remove predators and the like when they’re reported, they’ll all comply. But the sites with the least moderation, the least protections for users, those are the ones you’ll have to drag kicking and screaming.

      So what you’ve done now is make the web actively more dangerous for teens by forcing them out of the well-policed areas and into the parts you least want them to be in.

      As mentioned previously, you could contemplate blocking the entire web for teens. But putting aside how much schools have come to rely on it, and how essential it is for many aspects of normal daily life, you also have to consider the direct harm to at-risk teens. Many at-risk teenagers rely on the internet for information that can save their lives: information about abuse, rape, pregnancy, drugs, suicide prevention. Queer teens often rely on the internet to safely explore their identity. Teens in many families rely on the internet for access to sex education that their parents refuse to provide (and that’s not even getting into the fact that many access that kind of information through social media).

      So how does this work? How do you pull this off in a way that doesn’t cause more harm than good?