• 2 Posts
  • 613 Comments
Joined 1 year ago
Cake day: September 24th, 2023






  • Why? I’ve worked at two companies where IT allows Linux as an option, and people are constantly having issues (including me). And these are highly technical people. Two people who are not stupid managed to break their laptops by uninstalling Python 2, which GNOME depended on.

    Yes, that’s technically a UX issue, but there are plenty of good old bugs too, e.g. if you remove a VPN connection that a WiFi network autoconnects to, that WiFi network stops working entirely, with no error messages to speak of. Took me a long time to figure that out. Or how about the fact that 4K only works at 30 fps over HDMI, while it works fine over DisplayPort or Thunderbolt 3? The hardware fully supports it, and it works for other people with the same OS and laptop. I never figured that one out.
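    For the curious, one plausible mechanism for the VPN bug (a sketch, not a confirmed diagnosis): NetworkManager profiles can list VPN connections to auto-activate via the `connection.secondaries` property, and if that property still holds the UUID of a deleted VPN profile, activation of the WiFi network can fail without a useful error. A hypothetical keyfile illustrating the stale state (profile name and UUID invented):

    ```ini
    # Hypothetical NetworkManager keyfile for a WiFi profile.
    # secondaries still references the UUID of a VPN connection
    # that was deleted, so activation fails silently.
    [connection]
    id=OfficeWiFi
    type=wifi
    secondaries=2bdd2a1c-0000-0000-0000-000000000000
    ```

    Clearing `connection.secondaries` on the WiFi profile would, under this theory, make the network connect again.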

    That’s just a taster… I almost never have issues like that on Windows or Mac.

    Windows may cost more than “free”, but the additional support costs for Linux are very far from free too.

    Maybe something like Chromebooks makes sense if everything is in the cloud.







  • I think you’re being way too harsh.

    1. His recommendations to disable debug info and PIC are not “bad”. He isn’t suggesting they should be the default. He actually only suggested that split debug info should be made the default on Linux, which is a sensible suggestion.
    2. There are gazillions of other posts talking about codegen-units, cranelift and so on. I don’t think we need a repeat (though he could have linked to them).
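    For context, split debug info is already available as an opt-in Cargo profile setting; a sketch of what that looks like (the post’s suggestion is about making something like this the Linux default, not this exact snippet):

    ```toml
    # Cargo.toml — opt into split debug info for dev builds.
    # Valid values per the Cargo book: "off", "packed", "unpacked".
    [profile.dev]
    split-debuginfo = "unpacked"
    ```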

    The focus on linking was because this post is introducing his linker project.

    OP, ignore this naysayer.



  • would not be considered bugs but maybe change requests.

    That’s just playing with semantics. They are clearly bugs. They are literally called “defect reports”.

    Without a spec how would you argue that a system/product is safe?

    1. Lots of testing, including randomised testing and ideally formal verification.
    2. Comprehensive test coverage - both code coverage (lines, branches) and functional coverage (hand written properties).
    3. Functional safety features (ECC, redundancy, error reporting & recovery, etc.)
    4. Engineering practices known to reduce the chance of bugs (strong static types, version control, CI and nightly tests, rigorous engineering processes such as requirement tracking, and yes, ideally well-written specifications for all the tools you are using).
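    Point 1 can be sketched in a few lines: a hand-rolled randomised property test. This is a toy example (the function under test, `saturating_add_u8`, and the tiny LCG random generator are both invented here so the sketch needs no external crates), but it shows the shape of the technique: assert properties over many random inputs rather than a handful of hand-picked cases.

    ```rust
    // Function under test (toy example): addition that caps at u8::MAX.
    fn saturating_add_u8(a: u8, b: u8) -> u8 {
        a.checked_add(b).unwrap_or(u8::MAX)
    }

    fn main() {
        // Tiny linear congruential generator, so the sketch is self-contained.
        let mut state: u64 = 42;
        let mut next = move || {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (state >> 33) as u8
        };

        for _ in 0..10_000 {
            let (a, b) = (next(), next());
            let r = saturating_add_u8(a, b);
            // Property: the result never drops below either operand.
            assert!(r >= a.max(b));
            // Property: either the exact sum, or saturated at the cap.
            assert!(u32::from(r) == u32::from(a) + u32::from(b) || r == u8::MAX);
        }
        println!("10000 random cases passed");
    }
    ```

    The same idea scales up via property-testing libraries, which add input shrinking and better generators on top of this loop.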

    There are many aspects to safety and it’s definitely a good idea to have a spec for a language, but it doesn’t automatically mean safety is impossible without it.

    Software in itself cannot be safe or unsafe because without hardware it cannot do anything.

    The nice thing about abstraction is that you can talk about software without considering the hardware, more or less. If one says “this software is safe”, it means it’s safe assuming it’s running on working hardware.

    It doesn’t always hold up - sometimes the abstraction leaks, e.g. with things like Spectre and Rowhammer. And there are sometimes performance concerns. But it’s pretty good.



  • It’s not because we have tested this program extensively on every C++ compiler, but because the language rules of C++ say so.

    Debatable. Saying things in a prose specification doesn’t magically make them happen. Tests and reference models can, though.

    I also don’t really agree with the SIL requirements that languages need to have rigorous specifications to be safe. Clearly it’s better if they do, but would you rather fly on a rocket controlled by C code or Rust code?

    IMO a specification would be really nice to have, but its main purpose is to tick a certification checkbox, which is why the only one that exists was written specifically for that purpose.