WhyEssEff [she/her]

I do the emotes lea-caramelldansen

  • 242 Posts
  • 3.21K Comments
Joined 5 years ago
Cake day: July 25th, 2020





  • a thing some of us don’t like

    which we’re not allowing on this forum. we’re not free-speech radicals, this is a site that embodies a politic. we have real political stances which we enforce as a general standard of conduct here based on broader consensus among ourselves. we’re also taking an iron fist to, say, suggestions that forceful imposition of “western values” is the solution to reactionary tendencies in peripheral countries–an idea that a notable number of self-identified ‘progressives’ support, but that we don’t tolerate on this forum. you’re talking about it as if LLMs exist in an apolitical vacuum and don’t exacerbate real labor problems and real environmental problems and real exploitation around the world.

    this isn’t a very-intelligent you-have-iphone-yet-you-exist situation–you are making a conscious choice to use it and you can stop at any time. it is a service. it provides no real value that cannot be filled by human thought, and if we do find some real value in it, it merely has that and none more. it is a service we lived without until 2022, and–likely–a plurality, if not a majority, continue to do so. it is built on the non-consensual theft of the labor of everyone whose work has been preserved on the internet and is maintained by the exploitation of the poor in the periphery. it is being used as justification to usher in draconian natsec clamps and chauvinist trade policies, and its use has fostered a notable acceleration of environmental damage due to its inefficiencies and the compute power it requires. its development is bankrolled by individuals who seek to use it as a springboard to finally cut the rest of humanity out of their profit model. it is notoriously unreliable, with an entire industry-established term for its tendency toward misinformation. consistent usage of it results in the degradation and atrophy of internal processing, previously-held skills, and critical thinking (and once again, note that its output is notoriously unreliable) as those functions are outsourced to it over time. it also fucking sucks at writing, and its output is annoying to read for anyone with a functional internal metric for prose, whether or not they detect its ‘author.’ its use is mandated neither by broader consensus among the general population nor literally required in any capacity. just because you personally deem these things acceptable doesn’t mean we have to tolerate you or anyone else subjecting us to it.

    your arguments seem to stem from your inability to comprehend the disconnect between your position and the site’s position here, but we are not changing the site’s position merely because you refuse to engage with the multitude of points people are bringing up and would rather just have it your way. tough shit, I guess.


  • >i should be able to do this rule-breaking thing because i’m honest about it and it’s for good reasons (springboards with a real example)
    >okay, here’s what you could do in this real example to not do that and still fulfill those good reasons
    >here’s how you can ignore how i’m doing that
    >no, you shouldn’t be doing that, we’re not going to allow it and we’ll keep enforcing it
    >if you don’t allow it, everyone else is going to do it, secretly, so allow it if we’re open about it
    >here is a real example of something we don’t allow and how we enforce it and that strategy seems to work better
    >why are you comparing my thing to that really bad thing
    >hey, you still haven’t engaged with my first point, here’s how not to do that, can you do that
    >actually this is a broader point for hypothetical situations on principle (validating llm usage [cool, good, fine])


  • hey, even though I’ve emphasized it yet again, you still haven’t responded to my last point. i have to ask:

    1. why can’t you write the summaries yourself? it’s a minute at most if you’re reading the article before you post it
    2. why can’t you copy the byline if you refuse to put in the minute of work to summarize the article you’ve read?
    3. even assuming both are impossible and not happening, why do you assume that the demographic of “people who want AI summaries of articles in their social media posts” doesn’t know where and how to access the chatbots that can summarize them? does it have to be in the post itself?

  • this is just the argument libertarians use for why you can’t ever regulate anything. this is not a free-speech-radical forum, and we’re not making market solutions for content here. in the same vein in which we have an automatic slur filter, remove blatant racism, and attempt to weed out subtle racism, the solution isn’t normalizing the open racism–the solution is stamping it out with an iron fist whenever it’s caught. yes–things slip through the cracks, it’s imperfect–but it’s infinitely better than Twitter despite its imperfections, and it wards away the people who are incentivized by its normalization. I would personally like this site to strive to be a space free from this slop. There are numerous ethical, labor, environmental, and health issues with its normalization and usage, and I’d like to be in a space carved away from open and unabashed indulgence in it. I feel uncomfortable with the encouragement of usage of or reliance on it in any capacity or degree of separation, especially systematically. Again:

    just write the summary yourself. I assume you’ve read the article. It can be a paragraph. let’s say you don’t want to. we can access the text. we can access these chatbots. if we’re so inclined, we can toss the article at the chatbots on our own time.





  • WhyEssEff [she/her]@hexbear.net to askchapo@hexbear.net · AI Summaries of Articles · edited · 1 day ago

    honesty is only a virtue unalloyed. the goal is to eradicate AI slop in this space. why would we allow it under the pretense of ‘at least they admit it’? that’s not the goal. the goal is to remove it entirely. when it’s detected, it should be gone.

    it is also not at all an accessibility aid. as the exact demographic of person (rather severe presentation of ADHD) who would supposedly be most aided by this, as well as being a data science major, I wholeheartedly reject the idea that it in any way meets an acceptable standard for constituting one. the average person genuinely doesn’t know the sheer number of subtle fuckups and the misinformation these diceroll plagiarism boxes output even when provided the exact text they are supposed to paraphrase. rather, its main effect–due to its output ‘seeming right’–is disinformative, encouraging people to skip the article and defer to the generated ‘summary.’ I simply do not think this is a sound argument.

    just write the summary yourself. I assume you’ve read the article. It can be a paragraph. let’s say you don’t want to. we can access the text. we can access these chatbots. we can toss the article at the chatbots on our own time. I don’t want AI slop on this forum at all and oppose the normalization of it, especially under flimsy pretenses such as this.