• LordKitsuna@lemmy.world
    link
    fedilink
    arrow-up
    56
    arrow-down
    2
    ·
    edit-2
    7 days ago

    While I do agree that there are some very problematic maintainers who are basically blocking progress just because they are old farts who don’t like change, I also agree with Linus that immediately running to social media to drum up drama is not the correct way to get it fixed.

    I am ultimately on the side of the Rust maintainers here; however, there were definitely mistakes on both sides. I am not a fan of the trend over the past few years where, any time something doesn’t go your way, you try to drum up as much drama on social media as possible. I am also not a fan of policing how people choose to word things, and I don’t think there was anything wrong with the context in which “cancer” was used. They were saying that the code would eventually grow uncontrollably and in a way that was unmanageable, which is the literal definition of cancer: cells that grow in an uncontrolled and unmanageable way.

    Regardless of whether or not I agree that it would become a problem like that, I don’t see any issue with using the word that way.

    There’s also some fault with Torvalds here. He needs to step up and say either that Rust is okay or that it’s not, because this wishy-washy game is bullshit and is not helping anybody. He originally accepted it into the kernel but has been letting random maintainers create roadblocks for it; he either needs to tell them to back the fuck off and get over it, or he needs to drop the notion that Rust is accepted.

    He mentioned turning a technical argument into drama, and while I am nowhere near as knowledgeable as these people, I didn’t see much technical debate. I saw a maintainer who clearly said they just hated Rust and were going to do everything they could to block it and not work with anyone on it. That doesn’t sound like a very technical argument to me. There were a couple of concerns raised, but they appeared to be addressed by multiple people quite thoroughly, as there were both misunderstandings and even further potential compromises offered.

    • FizzyOrange@programming.dev
      link
      fedilink
      arrow-up
      2
      ·
      6 days ago

      I do also agree with Linus that immediately running to social media to drum up drama is not the correct solution to getting it fixed.

      So what is the correct solution?

      • LordKitsuna@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        edit-2
        6 days ago

        The key word there is immediately: they called on Linus but kicked off the social media drama at the exact same time. Call on Linus, wait for his response, and if you don’t get one after a decent chunk of time, then go to social media.

        It’s hard to know what Linus would have done had the drama not exploded before he got to the thread that ultimately upset him. He may have stepped in to try and smooth things over, he may not have, but it was pretty much guaranteed he wouldn’t once it turned into a public event, as his opinion on that has been clear for a long, long time.

        • FizzyOrange@programming.dev
          link
          fedilink
          arrow-up
          1
          ·
          6 days ago

          Yeah they probably should have tried that. On the other hand this isn’t the first time, and Linus didn’t do anything then either.

    • kopasz7@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      24
      ·
      7 days ago

      NACK-ing rust at version 8 of the patchset is kind of a dick move. The 7 others were fine before? Ridiculous that Linus didn’t step in in a definitive way.

      • Kissaki@programming.dev
        link
        fedilink
        English
        arrow-up
        6
        ·
        7 days ago

        Without having looked into it, I find it plausible that it could take several patchsets to come to an assessment of consequences and conclusion. Especially as they happen alongside assessments and discussion. The patchset number may also be largely irrelevant depending on what was changed.

        • bitcrafter@programming.dev
          link
          fedilink
          arrow-up
          11
          ·
          7 days ago

          There are definitely legitimate situations where that is the case, but I do not think this is one of them. To quote the reason for the rejection (from here):

          I accept that you don’t want to be involved with Rust in the kernel, which is why we offered to maintain the Rust abstraction layer for the DMA coherent allocator as a separate component (which it would be anyways) ourselves.

          Which doesn’t help me a bit. Every additional bit that another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language completely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust. While not my favourite language it’s definitively one of the best new ones and I encourage people to use it for new projects where it fits. I do not want it anywhere near a huge C code base that I need to maintain.

          These do not sound like the words of someone who had been on the fence but was finally pushed over to one side by the last patchset in a sequence.

  • 0x0@programming.dev
    link
    fedilink
    arrow-up
    27
    ·
    7 days ago

    opposition could ease over time as veteran C maintainers step back and Rust skills become more common

    So wait for opposition to retire… interesting.

    • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social
      link
      fedilink
      arrow-up
      8
      arrow-down
      11
      ·
      edit-2
      7 days ago

      Progress?

      Just curious - when’s the last time you compiled the kernel yourself? Do you remember how long it took? And that was all just C, which - while not exactly fast - is at least an order of magnitude faster to compile than Rust.

      I’m seriously concerned that if Linux really does slowly become predominantly Rust, development will stop, because nobody without access to a server farm will be able to compile it in any reasonable amount of time.

      Rust would be better suited to a micro kernel, where the core is small and subsystems can be isolated and replaced at run time.

      Edit: adding a more modern language isn’t a bad idea, IMHO; I just think it should be something like Zig, which has reasonable compile times and no runtime. Zig’s too young, but by the time it’s considered mature, Rust will either be entrenched, or such a disaster that it’ll be decades before kernel developers consider letting a new language in.

      • GarlicToast@programming.dev
        link
        fedilink
        arrow-up
        1
        ·
        4 days ago

        I used Gentoo in ancient times, when kernel updates took a whole day. A modern computer can rebuild in an hour; a good one, even faster. I’m not a kernel developer, but I don’t think they need to rebuild the whole kernel for every iteration.

        And as for Rust, I’m doing bioinformatics in Rust because our iteration time is orders of magnitude longer than a kernel build, and Rust reduced the number of iterations required to reach the final version.

        • Rust run times are excellent. And statically linked binaries are the superior intellect.

          Runtime performance counts for me in only some specific cases, and there are many programs I have installed that I recompile (because of updates) far more frequently than I run them; and when I do run them, performance is rarely an issue.

          But you have a good point: performance in the kernel is important, and it is run frequently, so the kernel is a good use case for Rust - where Go, perhaps, isn’t. My original comment, though, was that Zig appears to have many of the safety benefits of Rust, but vastly better compile times.

          I really do need to write some Zig projects, because I sound like an advocate when really my opinions are uninformed. I have written Rust, though, and obviously have opinions about it, and especially how it is affecting my system update times.

          I’ll keep ripgrep, regardless of compile times. Probably fd, too.

          • GarlicToast@programming.dev
            link
            fedilink
            arrow-up
            1
            ·
            4 days ago

            It is easier to safely optimize Rust than C, but that was not the point. The point was the correctness of the code.

            It is not unheard of for code to run for weeks or months. I need the code to be as bug-free as possible. For example, when converting one of our tools to Rust we found a bug that would lead to wrong results on big samples. It was found by the Rust compiler! Our tests didn’t cover the bug because it only happens on very big samples. We can’t create a test file of hundreds of GB by hand and calculate the expected result, but our real data would have triggered the bug. So without moving to Rust we would have gotten the wrong results.
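            A hypothetical illustration (not our actual code, and the comment above doesn’t say what the real bug was): one classic class of “wrong results only on big samples” bug that Rust refuses to compile is the silent narrowing conversion. In C, summing 64-bit lengths into a 32-bit accumulator compiles fine and silently truncates; the Rust equivalent is a type error until you either widen the accumulator or write an explicit, visible cast.

```rust
// Hypothetical sketch of a bug class the compiler catches, with made-up
// names. In C this compiles and truncates once the total exceeds 2^32:
//
//     unsigned int total = 0;            /* 32-bit accumulator */
//     for (...) total += read_len;       /* read_len is 64-bit  */
//
// The Rust equivalent with `let mut total: u32 = 0;` is rejected outright
// (error[E0308]: mismatched types: expected `u32`, found `u64`), forcing
// the accumulator to be widened before the code builds at all:
fn total_bases(read_lengths: &[u64]) -> u64 {
    let mut total: u64 = 0; // wide enough for very big samples
    for &len in read_lengths {
        total += len;
    }
    total
}
```

The point is that the failure mode never reaches runtime: the mismatch is visible at the `total += len` line the moment the accumulator type is too narrow.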

            • So, a couple of thoughts. You can absolutely write safe code that produces wrong results. Rust doesn’t help - at all - with correctness. Even Rustaceans will agree on that point.

              I agree that Rust is safer than C; my point is that if correctness and safety are the deciding criteria, then why not use Haskell? Or Ada? Both are more “safe” even than Rust, and if you’re concerned about correctness, Haskell is a “provable” language; there are even tools for performing correctness analysis on Haskell code.

              But those languages are not allowed in the kernel, and - indeed - they’re not particularly popular; certainly not in comparison to C, Go, or Rust. There are other factors than just safety and correctness; otherwise, something like OCaml would probably be a dominant language right now.

      • onlinepersona@programming.dev
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        2
        ·
        7 days ago

        Of course compiling something without checks is fast. If that’s your standard, we should write the kernel in JS, Python, Ruby, Lua, or any other dynamically typed language, since there’s no compilation time at all.

        Progress means I don’t have to read blog posts in order to compile the kernel. Progress means I have a sane toolchain that lets me run, test, debug, manage dependencies, and even distribute my code and artefacts (documentation, compile output, …) easily. Progress means catching many more bugs at compile-time instead of runtime.

        Anti Commercial-AI license

        • You’re throwing the baby out with the bath water with the reductio ad absurdum argument. Rust may very well be less secure than Ada - if so, then does that make it not good enough?

          I say it’s not worth trading some improvement in safety for vastly longer compile times and a more cognitively complex - harder - language, which raises the barrier to entry for contributors. If the trade were more safety than C, even if not as good as Rust, but with improved compile times and reasonable comprehensibility for non-experts in the language, that would be a reasonable trade.

          I have never written a line of code in Zig, but I can read it and derive a pretty good idea of what the syntax means without a lot of effort. The same cannot be said for Rust.

          I guess it doesn’t matter, because apparently software developers will all be replaced by AI pretty soon.

          • onlinepersona@programming.dev
            link
            fedilink
            English
            arrow-up
            7
            ·
            7 days ago

            I have never written a line of code in Zig, but I can read it and derive a pretty good idea of what the syntax means without a lot of effort. The same cannot be said for Rust.

            That’s you dawg. You probably have a different background, because I can follow zig code, but have no idea what a bunch of stuff means.

            See samples

pub fn enqueue(this: *This, value: Child) !void {
    const node = try this.gpa.create(Node);
    node.* = .{ .data = value, .next = null };
    if (this.end) |end| end.next = node //
    else this.start = node;
    this.end = node;
}
            

            pub fn enqueue(this: *This, value: Child) !void: what is !void? It’s important to return void? Watch out, void is being returned? Does that mean you can write !Child? And what would that even mean?

            const node = try this.gpa.create(Node); what does try mean there? There’s no catch, no except. Does that mean it just unwinds the stack and propagates the exception until it reaches a catch/except? If not, why put a try there? Is that an indication that it can throw?

            node.* = .{ .data = value, .next = null }; excuse me what? Replace the contents of the node object with a new dict/map that has the keys .data and .next?

            if (this.end) |end| end.next = node // what’s the lambda for? And what’s the // for ? A forgotten comment or an operator? If it’s to escape newline, why isn’t it a backslash like in other languages?

            start: ?*Node. Question pointer? A nullable pointer? But aren’t all pointers nullable? Or does Zig make a distinction between zero pointers and nullable pointers?

pub fn dequeue(this: *This) ?Child {
    const start = this.start orelse return null;
    defer this.gpa.destroy(start);
            

            this.start orelse return null: is this a check for null, a check for 0, or both?

            However, when I read Rust for the first time, I had quite a good idea of what was going on. Pattern matching and moves were new, but traits were quite understandable coming from Java and its interfaces. So yeah, mileage varies wildly, and just because you can read Zig doesn’t mean the next person can.
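            For comparison, here is a rough safe-Rust counterpart to the Zig queue quoted above (a sketch I threw together, not anyone’s production code): Zig’s nullable pointer ?*Node maps to Option<Box<Node<T>>>, orelse return null maps to the ? operator on Option, and !void would be a Result (omitted here, since plain Box allocation aborts on failure rather than returning an error).

```rust
// Sketch only. One deliberate difference from the Zig version: there is
// no `end` pointer, because a safe tail pointer needs extra machinery,
// so enqueue walks the list instead (O(n) rather than O(1)).
struct Node<T> {
    data: T,
    next: Option<Box<Node<T>>>, // Zig: next: ?*Node
}

pub struct Queue<T> {
    start: Option<Box<Node<T>>>, // Zig: start: ?*Node
}

impl<T> Queue<T> {
    pub fn new() -> Self {
        Queue { start: None }
    }

    // Zig's `!void` ("void or an error") would be `Result<(), E>` here.
    pub fn enqueue(&mut self, value: T) {
        let node = Box::new(Node { data: value, next: None });
        // Walk to the first empty slot; Zig's `if (this.end) |end|`
        // unwraps an optional the way `while let Some(..)` does here.
        let mut cursor = &mut self.start;
        while let Some(n) = cursor {
            cursor = &mut n.next;
        }
        *cursor = Some(node);
    }

    // Zig's `?Child` return type is `Option<T>`;
    // `this.start orelse return null` is the `.take()?` below.
    pub fn dequeue(&mut self) -> Option<T> {
        let node = *self.start.take()?;
        self.start = node.next;
        Some(node.data)
    }
}
```

Whether this reads more or less clearly than the Zig version is exactly the mileage-varies point: it leans on Option, match ergonomics, and borrow-checker-friendly cursor code instead of nullable pointers and error unions.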


            Regardless, it’s not like either of us have any pull in the kernel (and probably never will). I fear for the day we let AI start writing kernel code…

            Anti Commercial-AI license

            • Granted, everyone is different. The cognitive load of Rust has been widely written about, though, so I don’t think I’m an outlier.

              Regardless, it’s not like either of us have any pull in the kernel (and probably never will). I fear for the day we let AI start writing kernel code…

              Absolutely never, in my case. This isn’t what concerns me, though. If Rust is harder than C, then fewer people are going to attempt it. If it takes several hours to compile the kernel on an average desktop computer, even fewer are going to be willing to contribute, and almost nobody who isn’t creating a distribution is ever going to even try to compile their own kernel. It may even dissuade people from trying to start new distributions.

              If, if, if. Maybe it seems as if I’m fear-mongering, but as I’ve commented elsewhere, when looking for tools in the AUR I’ve started filtering out anything written in Rust unless it’s a -bin. It’s because at some point I noticed that the majority of the time spent upgrading software on my computer was spent compiling Rust packages. Like, I’d start an update, and every time I checked, it’d be in the middle of compiling Rust. And it isn’t because I’m using a lot of Rust software. It has had a noticeable negative impact on the amount of time my computer spends with the CPU pegged during upgrades. God forgive me, I’ve actually chosen Node-based solutions over Rust ones just because there was no -bin for the Rust package.

              I don’t know if this is the same type of “cancer” meant in the vitriolic kernel ML email that led to the second-to-last firestorm, but this is how I’ve started to feel about Rust: if there’s a bin, great! But no source-based packages, because then updating my desktop starts to become a half-day journey. I’m almost to the point of actively going in and replacing the source-based Rust tools with anything else.

              Haskell is already in this corner. Between the disk space and glacial ghc compile times, I will not install anything Haskell unless it’s pre-compiled. And that’s me having once spent a year in a job writing Haskell - I like the language, but it’s like programming in the 70’s: you write out your code, submit it as a job, and then go do something else for a day. Rust is quickly joining it there, along with Electron apps, which are in the corner for an entirely different reason.

          • N.E.P.T.R@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            1
            ·
            7 days ago

            Zig is designed as a successor to C, no? So I assume its syntax and semantics are quite similar. Rust is not a C-like language, so I don’t think this is a fair comparison at all.

            But in the end, learning syntax isn’t the hard part of a new language (even if it is annoying sometimes).

            • learning syntax isn’t the hard part of a new language

              No, it’s not, and that’s worse, not better. Understanding the pitfalls and quirks of the language, the gotchas and dicey areas where things can go wrong - those are the hard parts, and those are only learned through experience. This makes it even worse, because only Rust experts can do proper code reviews.

              TBF, every language is like this. C’s probably worse in the foot-gun areas. But the more complex the language, the harder it is for people to get over that barrier to entry, and the fewer will try. This is a problem of exclusion, and a form of gatekeeping that’s designed - unintentionally - into the language.

      • N.E.P.T.R@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        2
        ·
        7 days ago

        All the different tests I’ve seen comparing Rust and C put compile times in the same ballpark. Even if somehow every test is unrepresentative of real-world compile times, I doubt it is “order[s] of magnitude” worse.

        I remember watching someone test the performance of hosting an HTTP webpage, comparing Zig, Rust with a C HTTP library, and native Rust. Native Rust easily beat the others and was able to handle tens of thousands more client connections. While I know this isn’t directly relevant to kernels, the most popular C HTTP library is most likely quite optimized.

        Memory-related vulnerabilities are consistently at the top of reported vulnerabilities. It is a big deal, and no, you can’t just program around it. Everyone makes mistakes, has a bad day, or has something on their mind: moments of human fallibility. Eliminating an entire class of vulnerabilities while staying competitive with C is a hard task, but entirely worth doing.

        • Just FYI, because I think this was in a different thread:

          https://midwest.social/comment/15427570

          I was curious, so I ran a not-very-scientific test based on packages built from source on my desktop. The short version of a very long post (in the repos) is that the median build time for Rust packages is, indeed, pretty close to an order of magnitude greater than that of the C packages.

          One caveat is that Rust (at least through cargo) imports dependencies at build time, and that download/compile is included, whereas with C the dependencies are already installed as other packages or are bundled. Go behaves the same as Rust, and yet its median build times are even shorter than C’s, so it can’t all be blamed on dependency downloads; but it also can’t be ignored.

          For my purposes, it makes no difference, because upgrading software on my computer always includes this build-time penalty for Rust programs - and this is why Rust programs, while being a fraction (50/800) of all from-source packages installed on my system, consume a disproportionately large amount of the time it takes for me to do updates from the AUR. And that’s why I’ve started ignoring packages that depend on cargo or rustc.

        • I’m not not doing this; I wanted to spend my free time playing Factorio the next day, and I just haven’t gotten back to scripting and running this.

          I’m going to do it and post the results; it’s just taking me a little longer to get to it than I expected.

        • No, but just for you I spent time today extracting a list of ~250 packages installed from source on my computer, and tomorrow, I’m going to clean re-install all of them, timed, and post the results.

          There’s a mix of languages in there, and many packages have multiple language dependencies, but I’m going by the “Make Deps” package requirements and will post them.

          There will probably be too many variables for a clean comparison, but I know I have things like multiple CSV and json CLI toolkits in different languages installed, so some extrapolations should be possible.

          C is hard, because a lot of packages that must depend on gcc don’t include it in the make dependencies; they must assume everyone has at least one C compiler installed. A couple of packages explicitly depend on clang, so I’ll have that at least.
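          The timing harness itself is simple; something along these lines (the flags, helper name, and CSV layout here are illustrative, not my actual script):

```shell
# Illustrative sketch of the per-package timing loop. time_build runs
# whatever command it is given, discards the output, and prints the
# elapsed wall-clock seconds.
time_build() {
    local start end
    start=$(date +%s)
    "$@" > /dev/null 2>&1
    end=$(date +%s)
    echo $((end - start))
}

# Intended use, once per package directory (not run here):
#   cd "$pkg" && secs=$(time_build makepkg --force --syncdeps --noconfirm)
#   echo "$pkg,$lang,$secs" >> build-times.csv
```

Second-granularity timing is plenty here, since the interesting differences are minutes-to-hours, not milliseconds.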

          • fruitycoder@sh.itjust.works
            link
            fedilink
            arrow-up
            3
            ·
            edit-2
            6 days ago

            Honestly, sounds great! Looking forward to the results. I do think Linux compile times matter, personally, and the time saved during development because the compiler is doing checks isn’t a perfect one-to-one trade for this project, because people like myself compile the kernel way more than we develop for it, adding and removing stuff to trim it down for various platforms.

            During your compiling it would be interesting if you can find some rust flags that might disable checks to speed things up. Maybe there is a conf that skips the things downstream users can assume the actual devs ran?
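            From the little I know of cargo, the tunable part on an ordinary Rust project is the build profile rather than the checks themselves (borrow and type checking appear to be baked into compilation, with no flag to skip them); something like the sketch below, though I have no idea whether any of it transfers to the kernel’s Kbuild setup:

```toml
# Cargo.toml profile knobs that trade output quality / debuggability for
# compile speed. These only affect codegen and debug info; the safety
# checks always run.
[profile.dev]
debug = 0            # skip emitting debug info: faster builds, smaller target/

[profile.release]
lto = false          # full ("fat") LTO is one of the biggest time sinks
codegen-units = 16   # more parallel codegen at a small runtime cost
```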

            • CC @[email protected]

              K, I spent way more time on this than I wanted to, but it’s here. There’s a lot to read, but here’s the graph:

              The readme explains everything I did to try to make it reasonably fair.

              Note that the graph X-axis is logarithmic; the median compile time for Rust packages is an order of magnitude more than C or Go.

              The sample size is pretty close to 50 packages in each language, and I made my best attempt to ensure each package included used only one compiled language. Without a lot more work, there wasn’t much more I could do to get an apples-to-apples comparison.

              One thing to note is that I downloaded all package sources outside of the timing step. Rust, especially with cargo packages, downloads many dependencies in the build() phase, whereas with C they’re mostly already downloaded. So a significant amount of Rust build time is actually downloading and compiling dependencies, which it has to do for each virgin build. Whether that makes this an unfair comparison is debatable; I will point out that Go, however, does exactly the same thing: library dependencies are downloaded and compiled at build time, same as Rust. This makes the Go median even more impressive, but has no bearing on the Rust v. C discussion.

              A final note: entirely unintentionally, I apparently have no from-source Zig programs installed (via AUR). I don’t know what to make of that. Is it really that far behind in popularity?

              Anyway, all of the source and laborious explanation is there; if you’re running Arch, you could perform the same analysis, and most of the work is already done for you. You just need 4 pieces of non-standard software, two of which are probably already installed on your machine. Be aware, however: on my desktop it took 12 hours to re-download and clean-build the 276 qualifying AUR packages on my system, so it’s a long metric to run.

              • biscuitswalrus@aussie.zone
                link
                fedilink
                arrow-up
                3
                ·
                14 hours ago

                Wow, I read through the blog post, and though I’m not a developer, I’ve compiled and built Linux packages and operating systems in the past, so now I want to fly home and give your script a go myself.

                I enjoyed your write up. I can’t comment on programming, but I enjoy a good journey and story.

                My final takeaway is your image. I’ll keep it in mind. Interesting!

                • You read through all that? Wow. Good on you! Even I didn’t re-read it, so there are probably typos all over.

                  Yeah, the code isn’t interesting. It’s just a bunch of zsh hacked together; I wouldn’t be surprised if you encounter issues running it. The only thing I’m pretty sure of is that it won’t break anything.

                  Good luck. If you do run it and get a graph, please post it. I’m interested to see results from other systems. Note that the script generates an svg, so you’ll need to convert it to png to post it, or just go on and edit the csvtk graph command and change the svg suffix to png and it’ll create a png for you.

                  Also, I meant to do this in the README: a huge shout-out to the author of csvtk. It’s a fantastic tool, and I only just discovered the graph command which does so much. It has a built-in, simple pivot table function (a group argument) that replaced a whole other tool and process step. Seriously nice piece of software for working with CSV.

          • biscuitswalrus@aussie.zone
            link
            fedilink
            arrow-up
            2
            ·
            6 days ago

            Make a YouTube video on it and I’ll watch it. I’m not a coder, though, but benchmarking and debunking are interesting either way it goes. Whether the results come out clear or complex, it’ll be interesting.