• 0 Posts
  • 301 Comments
Joined 11 months ago
Cake day: December 16th, 2023


  • Yeah, surprisingly few people know that a git remote can literally be any folder outside of your tree, reachable over almost any kind of connection.

    I thought about setting up a forge, but realized that since I was the only one working on this stuff, I could get the same thing by setting my remote to a folder on my NAS.
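That "folder on my NAS" setup is just a bare repository plus a path. A minimal sketch (the `myproject` name and the NAS paths are made up; `$NAS` here is a temp directory standing in for a mounted share like `/mnt/nas/git`):

```shell
# A git "remote" is just a path: a bare repo in any reachable directory works.
# $NAS stands in for a mounted NAS share (e.g. /mnt/nas/git).
NAS=$(mktemp -d)
WORK=$(mktemp -d)

git init --bare "$NAS/myproject.git"     # one-time setup on the "server" side

cd "$WORK"
git init
git -c user.email=me@example.com -c user.name=me \
    commit --allow-empty -m "first commit"
git remote add origin "$NAS/myproject.git"
git push -u origin HEAD                  # plain filesystem push, no daemon needed

# The same idea works over SSH with no server-side software beyond git itself:
#   git remote add origin user@nas.local:/volume1/git/myproject.git
```

After that, `fetch`, `pull`, and `clone` all work against the folder exactly as they would against a hosted remote.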







  • It’s interesting that Axios specifically calls out the difference in reliability between the homicide and violent-crime statistics:

    The big picture: Homicides are more straightforward to compare year-to-year from pre-2021 to the present because the criteria for classifying them have remained the same while police have changed their methods of recording other violent crimes. Beginning in 2021, the FBI and police departments started shifting to the National Incident-Based Reporting System (NIBRS) from the decades-old Summary Reporting System (SRS). That allowed law enforcement agencies to submit more details on crimes like aggravated assaults but resulted in reported surges in violent crime in cities like Chicago and Minneapolis.

    I read a (weirdly antagonistic, conspiracy-tinged) article about the FBI recently having to revise its 2022 statistics: https://www.realclearinvestigations.com/articles/2024/10/16/stealth_edit_fbi_quietly_revises_violent_crime_stats_1065396.html

    It makes me wonder if we need to go back and revisit the last few years of crime statistics, given the switch in reporting systems, to get a better idea of what’s going on…




  • Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You’re not having a conversation with it; it’s “completing” the chat history you provide it, by randomly(!) sampling the next text tokens that seem to best fit the text so far.

    If you don’t directly provide, in the chat history and/or the completion prompt, the information you’re trying to retrieve, you’re essentially fishing for text in a sea of random text tokens that merely seems like it fits the question.

    It will always complete the text: even if the tokens it chooses only minimally fit the context, it picks the best text it can, but it will always produce a completion.

    This is how they work, and anything else is usually the company putting in a bunch of guide bumpers that reformat prompts to coax the model into responding in a “smarter” way (see “chain-of-thought” reasoning and OpenAI’s o1 models).
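The “randomly(!) sampling” part can be sketched in a few lines. This is a toy illustration with made-up logits and plain temperature-scaled softmax sampling, not how any particular model is implemented:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Sample one token from a {token: logit} dict.

    Higher temperature flattens the distribution, making unlikely
    tokens more probable; lower temperature approaches greedy argmax.
    """
    # Softmax with temperature (subtract max for numerical stability).
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(logits, exps)}

    # Draw one token in proportion to its probability.
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

# Made-up scores for continuations of "The capital of France is":
logits = {" Paris": 5.1, " London": 3.2, " the": 2.0, " banana": -1.0}
print(sample_next_token(logits))  # usually " Paris", but not guaranteed
```

The model never “knows” the answer; “ Paris” just usually wins the draw because its score dominates, and a nonzero temperature means the other tokens still come up sometimes.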





  • Zelensky was elected in 2019 specifically on an anti-corruption platform, and in the years since, even with the war going on, he’s made (somewhat) steady progress toward that end.

    Admittedly, the pressure from the US to clean up the administration in exchange for weapons has given Zelensky a lot more political leeway to oust corrupt officials.

    Also, Statista is notorious for cherry-picking data and not presenting the whole story. If you dig deeper into the actual report (Transparency International’s Corruption Perceptions Index), you’ll see that Ukraine has been making steady gains against corruption since 2013. The organization even specifically commends the country for its inroads against corruption:

    Although it still scores low, war-torn Ukraine (33) is one of few significant improvers on the CPI, having gained eight points since 2013. The country has long struggled with systemic abuse of power, but has taken important steps to improve oversight and accountability.