The basic infrastructure for writing drivers in Rust is upstream but there's nothing upstream using it yet. Others have mentioned stuff that may eventually make its way upstream, but from what I've seen there's pretty heavy resistance from maintainers whenever (re)writing something in Rust gets brought up.
Before Rust starts getting used "for real" in the kernel there's a lot of barriers to overcome. Most maintainers aren't hugely proficient in Rust and absolutely do not have the time to learn. Needing yet another toolchain is annoying (especially when you need bits that aren't in stable yet), anything you write in Rust probably won't get built in distro kernels for a long time, and anything you work on today is hugely uncharted territory since it's all so new.
I think one thing a lot of people don't understand from "outside" the kernel development sphere is that standards for getting stuff upstream are typically pretty high for most subsystems, a lot higher than most open source projects. There are a lot of open questions about Rust in Linux that don't have clear answers, and Linux really struggles with consensus. Yes it has a dictator, but that dictator very rarely dictates anything that hugely moves the needle.
I do think "real" kernel bits written in Rust will get upstream and will get used, but it will be a very slow burn.
It might be helpful to consider the Firefox situation, too. That is also a relatively large, established, and actively-developed software system that began to include some Rust code later on in its existence.
If I'm not mistaken, the first stable Firefox releases including notable usage of Rust code came out in late 2017. So that's about 6 years of Rust in Firefox.
From a Firefox user's perspective, I can't say that I'm impressed by it.
Building Firefox from source is more involved than it was before. As you mentioned, additional tooling for Rust is required in such a situation.
I haven't really noticed any real Firefox feature, functionality, or performance improvements that I could directly attribute to the use of Rust.
It also doesn't seem like using Rust has allowed the Firefox developers to be significantly more efficient or productive than they were before.
Maybe an argument could be made that some security issues, for example, have been avoided thanks to Rust, but that's not really provable in any meaningful way.
After seeing the Firefox situation, my expectations for the use of Rust in the Linux kernel are pretty low.
> Maybe an argument could be made that some security issues, for example, have been avoided thanks to Rust, but that's not really provable in any meaningful way.
As for Firefox, the CSS engine was rewritten to be parallel, among other improvements (new wasm compiler, new parallel compositor) that would have been challenging without Rust: https://wiki.mozilla.org/Oxidation
It's a bit of a strange argument that you as a user can't 'feel' it. The people picking Rust for more and more projects inside of Mozilla probably know what they are more productive in (and will end up with fewer bugs). Just guessing at it from the outside seems like a really bad and unfair way to judge.
> but that's not really provable in any meaningful way.
Can you prove that Rust didn't make developers more productive?
Also, that there are fewer memory bugs in Rust than in C++ is just common sense and pretty generally accepted. It doesn't need to be proven again for every project.
> After seeing the Firefox situation
The situation where more and more code is written in Rust because most people prefer it?
I remember seeing a blog post by Mozilla explaining how the new css engine, which is measurably faster, would never have been implemented in C++ because the engineers were not confident they could handle the tricky concurrency issues in that language.
Multi-core CSS engine and GPU-accelerated rendering in Firefox are in Rust. Without them Firefox was really behind in performance (if you're unimpressed now, imagine it being way worse).
The proficiency required to make sure the LLM is not screwing up the code in some subtle way is probably even higher than the proficiency required to rewrite the code yourself. So probably not.
Basically, you have to review code generated by an LLM the same way that you would review code written by a human programmer, which requires (at least) the same level of proficiency as writing code yourself - if not, as you wrote, higher proficiency.
I'd argue you have to review code generated by an LLM to an even greater degree than that generated by a human programmer, because you can to some degree presume that the human writer intends the code to be functional. Whereas, with an LLM, it might just spit out some garbage that it doesn't even intend to be compilable. In my experience, LLM-written code doesn't pass a threshold where I believe it's ready to force another human being to review it (via pull request review). The LLM generates what it generates, I rewrite what it generated, and then I create the pull request. If I can't understand what the LLM generated, as might be the case where the human programmer is not fluent with the language, then having the LLM generate code was a pointless step anyway.
Rust code written by an LLM would likely be easier to review than C code due to Rust's strong type system and the inclusion of the borrow checker. However, I anticipate there might be more 'unsafe' code, especially at the language boundaries, compared to typical Rust application code. Thus, the benefits might be somewhat diminished.
The whole point of being in kernel space is to write unsafe code -- either unsafe in a memory sense or in a "I am just blasting characters at this pcie port" sense.
Yes and no. It's true that there's some portion of the kernel space code that can't be written provably safely. But there's good reason to believe that a lot of it can and that belief is what's driving this integration.
Whether Rust in the kernel succeeds or not will likely be determined by whether or not a sufficiently clear boundary can be drawn between the bit that must be unsafe (in the Rust sense) and the rest. And how much code is in the latter. I don't think we know the answer yet but some knowledgable people are willing to run the experiment on the basis that the probability seems quite high that a safe subset can be determined.
LLMs rewriting what? A) Rust modules to C so that it can be reviewed and merged by old school kernel hackers? B) C to unsafe Rust so that you lose any advantage of using Rust and get the worst of both world? C) Or rewrite the kernel maintainer's brains while they sleep so they become magically proficient in Rust when they wake up?
If you're talking about option A) or B), we already have things like mrustc and c2rust. These are tough problems; LLMs aren't _that_ smart yet.
Others have spoken to the technical merits but I can say straight up that the community would absolutely despise this idea, even if the generated code was excellent. LLM generated code would go over about as well as implementing a blockchain in the kernel.
I’m just a layman when it comes to these kinds of things, but here’s my take anyway: AFAICT, the issue isn’t so much about rewriting the code. However, if we used automation for that, we would probably need proofs that the new code is better than the old code. And creating such proofs seems incredibly hard as soon as we walk out of “hello world” territory. People have spent years trying to build such systems, but they’re still not widely used. So until then, we need people who read and analyze the code and also have vast knowledge of the kernel’s inner workings, and today that seems to be the bottleneck. Those people don’t really grow on trees.
Actually this would be a very good first candidate for a Rust module - it's a significant newly developed piece of software, so nobody can say that they "(re)wrote it in Rust" just for the heck of it, and its impact is relatively isolated (only used on Apple Silicon Macs), so if something did break, it would mostly only hurt Asahi Linux, which is currently experimental anyway.
Well it's less that they rewrote it in Rust for the heck of it and more that they wrote it in Rust for the heck of it. Seems like it's going OK for them, though.
Well no, actually they explain their reasons for doing it in the article others linked here (https://asahilinux.org/2022/11/tales-of-the-m1-gpu/, section "A new language for the Linux kernel"), and it sounds pretty convincing - if you start a basically new and very complex driver project, and if there is a language more modern than C that might help you and is officially supported by the kernel, then why not use it?
Well for one it won't be heading upstream anytime soon, so they could have really picked any language they wanted. Rust definitely wasn't a bad choice but they did actively choose it because they wanted to.
I only read their homepage (https://asahilinux.org/) - it says "The Asahi Linux alpha release is out!". If there was another (more stable) release available, I assume they would mention it there.
That version is majorly out of date, their website indeed does not mention how out of date it is (that said I have no idea as to whether the devs consider the latest releases stable/RC).
Right... actually, according to their latest blog post (from beginning of August), they expected to release the "Fedora Asahi remix" by end of August. Looks like that didn't quite work out, but a release (I guess they mean a RC or production release, not alpha or beta?) seems to be imminent...
If you click on that post it goes to an announcement that is more than 1.5 years old; many releases have followed since. That link simply hasn't been updated.
Yeah, I’d say Hector Martin’s got at least the most complete example of Rust code that can go into the kernel. Though he’d be far better off working alongside actual distro maintainers as he focuses on the M1 driver, rather than bandaid-fixing his distro because he can’t handle GRUB.
No, the only rust code accepted into any released kernels is basic framework infrastructure so that someday, maybe, in the future, real functionality could be written in rust.
There are many out-of-tree examples of rust kernel code, but as of right now, none have been merged.
You can write arbitrary kernel modules in Rust, I think there are a few floating around.
Core kernel will never have Rust in it, and that is a correct decision. Linux and C have a long history of just working, and there is value in making sure that C code you write is correct and explicitly thinking about memory and what gets modified where. Correct coding is more than just memory safety, and compilers can't check everything for you.
It's really strange that in the above thread they picked such a niche thing in networking to contribute to, as sockets don't have many uses within the kernel itself other than NFS or DHCP or stuff like that. They should have aimed to add things to the QoS layer: qdiscs, classifiers, whatever; there are tens of those and you can easily add another. Start small.
The problem really is that the people who want to see Rust in the linux kernel have next to no linux kernel background and have a poor understanding of how Linux works and how things are generally done.
Also the people who contribute big things to the linux kernel usually have a clear business case behind them and a big company paying them to work on it full time.
For example I could imagine Microsoft eventually contributing in this regard, say they want some more Hyper-V virtual hardware drivers in Linux and they decided to do that in Rust. They've been lately using Rust in the Windows kernel so it's not that unrealistic to think it may happen.
A filesystem written in Rust would be cool... but it's probably going to have to be a new one rather than a rewrite, I can't imagine people going through the trouble of rewriting btrfs in Rust. And big names like Facebook could actually back that effort.
I really love using Rust for middleware things, however right now I can't convince myself to use it on low-level things like OSes or microcontrollers, since it seems to be a lot of trouble getting Rust to play nice with FreeRTOS for example, and there seems to be no production-ready RTOS written in Rust either.
I don’t see how either post supports your points. The first is about OOT vs in-tree modules and the second isn’t about anything specific to Rust. You just picked a random post from someone learning about kernel development who happened to be using Rust.
The people doing core Rust for Linux work (not random mailing list participants) are actually very qualified and very knowledgeable about Linux. I don’t think your characterization of them is fair at all.
Grabbing random mailing list posts from non core maintainers and trying to extrapolate conclusions about an entire multi-year project is not a good way to evaluate it.
Most of those guys have a single-digit number of commits, and there are 7 guys with double-digit commit counts, so let's say there are about 7 people most actively doing Rust in Linux.
Do you know Rust yourself? Take a look at the bulk of the patches that got merged, could you write that if someone paid you? My guess is yes, because it looks pretty average to me. (No offense to you personally).
There have been 1500+ commits since 2020 on the 'main' kernel fork that has Rust support [0]. Many of them went through collaboration and back-and-forth review (look at the PRs on the fork).
My understanding is that the `rust-next` branch represents what's ready for Linus to merge. So there's a whole lot that's not in there, which you're missing in your evaluation.
Last year a Kernel dev from Western Digital wrote an NVME driver in Rust [1] (look at slides 15 & 16) which I would consider to be qualified and knowledgeable.
I wouldn't measure the effort solely on what's been merged to mainline as there's been 6 merge windows from 6.1 to now. Many of those commits are laying the groundwork for the next few years.
> Last year a Kernel dev from Western Digital wrote an NVME driver in Rust [1] (look at slides 15 & 16) which I would consider to be qualified and knowledgeable.
NVMe drives are probably the simplest form of storage you can possibly think of in today's day and age, from a user perspective. Which is a perfect candidate for a proof of concept like that. Not saying that it's not good work.
In those 1500+ commits I see a lot of noise, and some good stuff. I like the alloc stuff, chardevs, buffer management; the async executor stuff looks really good (*scrolls*, *scrolls*... okay, yes).
A lot of this stuff looks meaningful and I hope to see it merged in the near future.
I don't have a dog in this fight, but " someone learning about kernel development who happened to be using Rust" sounds to me like anecdotal evidence for the assertion that most rust contributors don't know how kernel development works.
"The problem really is that the people who want to see Rust in the linux kernel have next to no linux kernel background and have a poor understanding of how Linux works and how things are generally done."
Is this a non topic then?
Having drivers in Rust is one thing. Drivers were done in C++ before. But you have to use a subset, because a language targeting the kernel needs to use kernel facilities. You won't be able to use your standard library. Language features need to be mapped onto what the kernel provides, or dropped.
That's why C++ in the kernel never took off seriously: once you restrict the language, and once you already have to accommodate the kernel's "API mindset", what's left of the language in between isn't that important.
Well, integration of Rust into the Linux kernel is still in an experimental stage; so far there are only a few things going on around the use of Rust in the kernel, such as the experiment's ongoing progress, C-to-Rust transitioning, community discussions, and a few others.
Thanks
The Android Bluetooth stack is written in Rust.
(I assume the OP knows and was making a joke about that, but for those who did not: https://news.ycombinator.com/item?id=26647981)
My comment is not about changing the language but about the fact that whenever we rewrite it, we improve the quality because of the lessons learned during past iterations.
> My comment is not about changing the language but about the fact that whenever we rewrite it, we improve the quality because of the lessons learned during past iterations.
So many developers have said this over the years but it almost never comes true.
You just end up with different bugs in a different language.
I dunno why I misread "write in any language" as "write in another language".
I'm still skeptical about rewriting - the bluetooth spec is notoriously buggy itself, and many "bugs" and glitches in BT are due to how poorly the spec is written.
I'm not too sure I agree that the spec itself is buggy, certainly the implementations vary wildly from Sony almost doing their own thing to Chinese off the shelf copy pasting whatever makes a noise.
That said I have worked extensively with Bluetooth within Ericsson and while there is a learning curve, I never found the spec to be lacking.
Your last sentence is exactly what I was thinking. The problem with BT isn't necessarily on the kernel's bluetooth driver. The spec is buggy and also a lot of makers of bluetooth devices don't implement the spec properly. But the spec itself isn't spectacular to begin with.
A rewrite might simply make it more resilient through changes in the base architecture. However, I know nothing about Linux's bluetooth stack and I assume that it's probably taking into account a lot of those glitches already.
That highly depends on who is doing the rewriting and whether they were involved in writing the current system. If someone new starts rewriting the system in a memory safe language then it's quite likely they will make many of the same mistakes the original author did.
Debatably, as language rewrites can bring their own problems. Especially with a newer language like Rust that lacks experienced eyes to review. You get more benefit rewriting the bluetooth drivers in their current languages.