• 2 Posts
  • 31 Comments
Joined 1 year ago
Cake day: February 9th, 2024

  • You might be interested in Zygmunt Bauman’s analysis in his book Modernity and the Holocaust

    From the linked wiki summary:

    “Rather, he argued, the Holocaust should be seen as deeply connected to modernity and its order-making efforts. Procedural rationality, the division of labour into smaller and smaller tasks, the taxonomic categorisation of different species, and the tendency to view obedience to rules as morally good, all played their role in the Holocaust coming to pass.”

    A sociologist friend broke it down for me a long time ago, and, basically, rationalizing everything into a number helped to dehumanize people and paved the way for Nazi atrocities.

    That said, I don’t think “technology” on its own is fascist. Technology depends on how people use it; as others in this thread have pointed out, FOSS exists as a foil to the use of technology as a method of control by those with power.




  • In no particular order, I listen to all of them regularly:

    • Omnibus - general obscure history hosted by indie rocker John Roderick and Jeopardy’s golden boy Ken Jennings

    • The Dollop - (mostly) American history with a leftist bent. One comedian reads a story the other hasn’t heard before.

    • Not Another D&D Podcast - apologies for the first episode, but great world- and character-building. Really shows how great cooperative storytelling can be

    • Last Podcast on the Left - comedy/horror. Conspiracies, cults, UFOs, and other weird shit. Their historical deep dives are awesome.

    I listen to these regularly, but there’s a limited series podcast I like to recommend called S-Town. It’s excellent, especially if you’re from the southern US or grew up in a rural area. If you aren’t from the south or a rural area, it’ll probably be an extra-wild ride!


  • I’m the production manager and audio engineer for an independent venue, but I also do enough extracurricular, 1099 work that I needed to start spending money to write off on my taxes.

    So, I bought a nice PC a few years ago, started using a friend’s old laptop (that I just replaced with my recent, copilot-infected purchase) to take multitrack recordings for local artists at work, and have been making my way into the mixing and mastering world at home. I figured getting some experience on the studio side would improve my live sound skills and give me something of a fallback, just in case.

    Not quite sure how that’s panning out, but I have learned a few things and have gotten some decent sounds just recording with standard, live audio gear!



  • Maybe I misunderstood OP?

    I don’t think I’ve ever read The Jargon File or The New Hacker’s Dictionary, but I definitely read Heinlein for fun in college. My educational background is in the social sciences and humanities.

    Good point about his lack of context though!

    I just rewatched a show called Devs with a friend. One of the striking moments was when one of the characters recites some poetry and the techy boss doesn’t seem to care about how literature can inform and enrich our lives.


    I’ve heard that Carla is the way to go, but how much overhead will it add when basically all the plugins I use are VST3? At least one project on my tower PC is pretty much maxed out as it is with them running natively on Windows.

    My other issue is simply time: this is already side project stuff that I do for a little extra money/learning/career development, and at this point, I simply don’t have time to try alternatives.

    If I were just researching and writing papers like I did back in grad school, Windows would be gone, but as it stands, the path of least resistance for the audio work I’m doing is just to deal with what I’ve got.


  • goosehorse@lemmy.world to Microblog Memes@lemmy.world · Share and Enjoy! (58 points, edited 1 month ago)

    Got a new laptop recently. Copilot pops up, so I asked it how to permanently disable Copilot.

    It gave me a wordy non-answer, along with a “fun fact” about my local area — totally relevant and not creepy at all.

    Then, after I demanded it tell me how to permanently disable itself, Copilot gave me a completely wrong answer.

    After specifying the “app or service” I’m using (Windows, you fucking clueless piece of shit), it then gave me a half-baked answer that called commands which weren’t installed by default.

    I then used DuckDuckGo to figure out how to install the configuration tool Copilot said to use but that Windows had decided to hide from me.

    Good job completely wasting my time, you AI-loving fucks at Microsoft. I don’t need new reasons to nuke your shitty software and install Linux, but now I have them. If Linux had native VST3 support, I wouldn’t have even booted into Windows.

    Edit: Stranger in a Strange Land is a great book, and given that it’s the sci-fi novel that backgrounded hippie culture, I wouldn’t have expected Musk to have read it.








  • My TV is insulting like that. It technically has an EQ, but it makes no perceivable difference no matter what I do in it.

    What the hell!

    > But assuming it worked, wouldn’t doing that strictly with sound frequencies cause issues? Like, okay, most voices are louder because I boosted their frequency, but now that one dude with a super low voice is quieter, plus any music in the show is distorted. Or something like that.

    Not necessarily. Regardless of vocal range, roughly 400 Hz–2 kHz makes up the body of what you hear in human speech, or the notes of instruments carrying a melody. Below that, say, 160–315 Hz is going to be the “warmth” and “fullness” of the sound, while 2.5–8 kHz is going to be the enunciation and clarity (think ch-sounds, esses, tees, etc.).

    Sure, if you start really going hard on an EQ, you could absolutely throw everything out of balance: if you cut 12 dB at 250 Hz, all the warmth will be gone and everything will sound thin. If you scoop a bunch of 400 Hz–1.6 kHz, it will sound like a walkie-talkie, and if you make a large boost around 3–8 kHz, then everything will probably sound harsh and scratchy.
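    To make those cut/boost examples concrete, here’s a minimal sketch of the kind of peaking-EQ filter a TV equalizer band implements, using the well-known RBJ Audio EQ Cookbook formulas. The function names and the Q value are my own choices for illustration, not from any particular product:

    ```python
    import math

    def peaking_biquad(fs, f0, gain_db, q=1.0):
        """RBJ-cookbook peaking EQ: returns normalized biquad coefficients (b0, b1, b2, a1, a2)."""
        a = 10 ** (gain_db / 40)               # square root of the linear gain
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        b0 = 1 + alpha * a
        b1 = -2 * math.cos(w0)
        b2 = 1 - alpha * a
        a0 = 1 + alpha / a
        a2 = 1 - alpha / a
        return (b0 / a0, b1 / a0, b2 / a0, b1 / a0, a2 / a0)

    def gain_at(coeffs, fs, f):
        """Magnitude response of the biquad at frequency f, in dB."""
        b0, b1, b2, a1, a2 = coeffs
        z = complex(math.cos(2 * math.pi * f / fs), -math.sin(2 * math.pi * f / fs))
        h = (b0 + b1 * z + b2 * z ** 2) / (1 + a1 * z + a2 * z ** 2)
        return 20 * math.log10(abs(h))

    # The "thin sound" example above: a 12 dB cut centered at 250 Hz
    coeffs = peaking_biquad(fs=48000, f0=250, gain_db=-12)
    ```

    Evaluating `gain_at(coeffs, 48000, 250)` gives about -12 dB at the center frequency, while frequencies a few octaves away are nearly untouched, which is exactly why a narrow cut hollows out the warmth without touching the enunciation range.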

    This is where the listening environment becomes important to consider. Do you live near a busy highway, or do you have a loud air conditioner? You don’t need to answer these questions in public, but those kinds of ambient sounds can compete with the enunciation frequencies, or add to the buildup of “mud” in the lower part of the spectrum.

    The size, shape, material properties etc. of your room and furniture also play a role here. For example, a bunch of bare walls and hard surfaces will cause a lot of the high frequencies to bounce around, potentially causing a buildup of harshness. This is why recording studios and your high school band hall probably have those oddly-shaped, cloth-covered wall “decorations” that serve to neutralize the cavernous sound you’d get in a large, bare room.

    Overall, compensating for the environment is where you should probably aim your EQ. That is, even if source material varies wildly, it’s probably best to EQ to the room you’re in rather than to each individual program.

    The way to do it is to find a song you know by heart and know exactly how it should sound (there are a few that, to me, sound great in my car and on my favorite pair of headphones, so I use those), and play it through your TV. Then, fiddle with the EQ until it’s as close to the ideal sound in your head as you can get it.


    I would bet there is one mix created in surround sound (7.1 or Dolby Atmos or whatever), and then the end-user hardware does the down-mixing part, i.e. from Atmos with ~20 speakers down to a pair of AirPods.

    In the music world, we usually make stereo mixes. Even though the software that I use has a button to downmix the stereo output to mono, I only print stereo files.

    It’s definitely good practice to listen to the mix in mono for technical reasons, and also because you just never know who’s going to be listening on what device—the ultimate goal being to make it sound as good as possible in as many listening environments as possible. Ironically, switching the output to mono is a great way to check for balance between instruments (including the vocals) in a stereo mix.
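    The mono-check trick is just summing the two channels. A minimal sketch (the function name is mine; real DAWs do this on the master bus):

    ```python
    def downmix_to_mono(left, right):
        """Sum a stereo signal to mono by averaging left and right samples."""
        return [0.5 * (l + r) for l, r in zip(left, right)]

    # Identical channels survive the downmix unchanged...
    centered = downmix_to_mono([0.5, 0.5], [0.5, 0.5])

    # ...but polarity-inverted channels cancel completely, which is why
    # a mono check exposes phase problems hidden in the stereo mix.
    cancelled = downmix_to_mono([1.0, -1.0], [-1.0, 1.0])
    ```

    Anything that cancels or jumps out in the mono sum points at a phase or balance problem you might otherwise miss.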

    At any rate, I think the problem of dynamics control—and for that matter, equalization—for fine-tuning the listening experience at home is going to vary wildly from place to place and setup to setup. Therefore, the hypothetical regulations should help consumers help themselves by requiring compression and EQ controls on consumer devices!

    Side tip: if your TV or home theater box has an equalizer, try cutting around 200–250 Hz and bringing the overall volume up a tad to reduce the muddiness of vocals/dialogue. You could also try boosting around 2 kHz, but as a sound engineer primarily dealing with live performances, I tend to cut more often than I boost.


  • Audio compression is much older than 20 years! Though you’re probably right about it becoming available on consumer A/V devices more recently.

    And you’re definitely correct that “pre-applying” compression and generally overdoing it will fuck up the sound for too many people.

    The dynamic ranges that are possible (and arguably desirable) to achieve in a movie theater are much greater than what one could (or would even want to) achieve from some crappy TV speakers or cheap ear buds.

    From what I understand, mastering for film is going to aim for the greatest dynamic range possible, because it’s always theoretically possible to narrow the range after the fact but not really vice-versa.

    I think the direction to go with OP’s suggested regulation would be to require all consumer TV sets and home theater boxes to have a built-in compressor that can be accessed and adjusted by the user. This would probably entail allowing the user to blow their speakers if they set it incorrectly, but in careful hands, it could solve OP’s problem.
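    The user-adjustable compressor idea boils down to a simple static gain curve: below a threshold, levels pass through untouched; above it, every extra dB of input only adds 1/ratio dB of output. A minimal sketch (threshold and ratio values are illustrative defaults, not from any standard):

    ```python
    def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
        """Static compressor curve: above the threshold, output rises 1/ratio dB per input dB."""
        if level_db <= threshold_db:
            return level_db + makeup_db
        return threshold_db + (level_db - threshold_db) / ratio + makeup_db

    # Quiet dialogue at -30 dB passes through untouched...
    quiet = compress_db(-30.0)

    # ...while a -8 dB explosion is pulled down to -17 dB, shrinking
    # the dynamic range between the two from 22 dB to 13 dB.
    loud = compress_db(-8.0)
    ```

    A real implementation would also need attack/release smoothing so the gain changes don’t pump audibly, and that’s the part careless settings could get wrong (hence the blown-speaker caveat above).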

    That said, my limited experience in this world is exclusive to mixing and mastering music and not film, so grain of salt and all that.



  • I have to back into a parking spot in a shitty, shared driveway. If I don’t throw my (automatic transmission) car into neutral and coast into place, my car will decide I’m too close to the curb and just slam the fuck out of the brakes while still several feet away from where I intend to be. It sounds awful and it scared the absolute shit out of me several times before I internalized the workaround.

    Good thing I’m not a fan of the backup camera in general, or this problem would be even more irritating, since the camera turns off when I go from reverse to neutral.


  • I started on a small instance that fortunately gave a heads up when they decided to shut down. When I moved to a second, small instance where I ported all my community subscriptions, it shut down with no warning. It’s a shame, because both instances were topically-focused and small enough to avoid defederation drama.

    I love the idea of decentralized infrastructure, but now I’m on .world because I just don’t have the time or willpower to move every few months, and I definitely don’t have the wherewithal to run my own instance.