like overriding it now makes a lot of sense, there need to be grace periods etc.
but we live in a world where OSes have to become increasingly resilient to misbehaving programs (mainly user programs, or "server programs" you can mostly isolate with services, service accounts/users etc.). And with the continued rise of both supply chain attacks and crappy AI code this will only get worse.
And as such, sharing a single tmpfs (with its quotas/usage limits) between arbitrary user-space programs and core components like lvm2 and dmraid is kinda a bad idea: any misbehaving (or malicious) program can exhaust it and interfere with locking for the core tools.
and for that kind of robustness there aren't many ways around this change; basically the alternatives are:
- make /var/lock root-only and break a very small number of programs which neither use flock nor follow the XDG spec (XDG_RUNTIME_DIR is where your user-scoped locks go, e.g. for wayland or pipewire; see the sketch after this list)
- change lvm2, dmraid, alsa (the low-level parts) and a bunch of other things you could say are core OS components to use a different root-only lock dir. Which is a lot of work and a lot of breaking changes, much more than the first approach.
- use a "magic" virtual file system which presents a single unified view of /var/lock, but under the hood magically separates them into different tempfs with different quotas (e.g. based on used id the file gets remapped to /run/user/{uid}, except roots gets a special folder and I guess another folder for "everything else"???) That looks like a lot of complexity to support a very small number of program doing something in a very (20+ years) outdated way. But similar tricks do exist in systemd (e.g. PrivateTemp).
kinda only the first option makes sense
but it's not like it needs to be done "NOW"; in a year would be fine too, but in 5 years probably not
Why is that non-clickbait? Honestly "Debian Technical Committee overrides systemd /run/lock permission change" might be a better title than either; I don't know whether the thing or the actors are more interesting here. But you can only say so much in a title.