
Did you read the same article?

   * There is an option for the old behavior.
   * It is a security issue, and better replacement solutions exist.
   * FHS isn't maintained.
I think everyone involved would prefer updates to the applications that fix the issue. Debian opted - for now - for reliability for its users, which fits its mission statement. On Arch, /run/lock is only writable by the superuser, which improves security. As a user, I value reliability, security, and that legacy tools remain usable (sometimes by default, sometimes via a switch).




> It is a security issue

The "security issue" expressed is that someone could create 4 billion lock files. The entire reason an application would have a path to create these lock files is that it's dealing with a shared resource. It's pretty likely that lock files wouldn't be the only route for an application to kill a system, which is a reason why this "security issue" isn't something anyone has taken seriously.

The reason is much more transparent if you read between the lines: systemd wants to own the /run folder, and they don't like the idea of user-space applications being able to play in their pool. Notice they don't have the same security concerns for /var/tmp, for example.


> they don't like the idea of user space applications being able to play in their pool

i think that is somewhat reasonable. but then systemd should have its own space, independent of a shared space: /var/systemd/run or /run/systemd/ ?


> then systemd should have its own space, independent of a shared space

This would go contrary to an unstated goal: making everyone else dance to systemd's tune, for their own good.



> On Arch /run/lock is only writeable for the superusers, which improves security.

Does it? That means anyone who needs a lock gets superuser, which seems like overkill. Having a group with write permissions would seem to improve security more?


no that isn't what it means at all

a global /run/lock dir is an outdated mechanism that isn't needed anymore

When the standard was written (20 years ago), it standardized a common way programs used to work around not having something like flock. This is also reflected in the specific details of FHS 3.0, which requires lock files to be named `LCK..{device_name}` and to contain the process id in a specific encoding. Now the funny part: flock was added to Linux in ~1996, so even when the standard was written it was already on the way to being outdated, and it was just a matter of time until most programs started using flock.
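As a rough sketch of what that legacy convention looks like (the ten-byte ASCII-decimal PID is the HDB UUCP encoding the FHS references; treat the details here as illustrative, not a verified implementation):

```python
import os

def write_legacy_lock(lock_dir: str, device_name: str, pid: int) -> str:
    """Create an FHS-style lock file: named LCK..<device>, containing
    the PID as a ten-byte ASCII decimal string plus newline."""
    path = os.path.join(lock_dir, f"LCK..{device_name}")
    # O_CREAT | O_EXCL makes creation atomic: if the file already
    # exists, another process holds the lock and this call fails.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    try:
        os.write(fd, b"%10d\n" % pid)
    finally:
        os.close(fd)
    return path
```

The weakness of the pattern is visible right in the sketch: if the holder crashes, the file stays behind and every later locker has to guess whether the recorded PID is stale, which is exactly the problem flock(2) removed.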

This brings us to a few reasons why this being an issue makes, IMHO, little sense:

- a lot of use cases for /var/lock have been replaced with flock

- having a globally writable dir shared across users has a really bad history (including security vulnerabilities), so there have been ongoing efforts to create alternatives for anything like that, e.g. /run/user/{uid}, ~/.local/{bin,share,state,etc.}, systemd PrivateTmp, etc.

- so any program running as a user and not wanting to use flock should place its lock file in `/run/user/{uid}`, like e.g. pipewire, wayland, docker and similar do (specifically $XDG_RUNTIME_DIR, which happens to be `/run/user/{uid}`)
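The per-user pattern described above can be sketched like this; falling back to the system temp dir when $XDG_RUNTIME_DIR is unset is an assumption made here so the example runs anywhere (on a systemd session the variable points at /run/user/{uid}):

```python
import fcntl
import os
import tempfile

def acquire_user_lock(name: str):
    """Take a per-user advisory lock under $XDG_RUNTIME_DIR.
    Falls back to the system temp dir if the variable is unset
    (an assumption for portability of this sketch). Returns the
    open file object; the lock is released when it is closed or
    when the process exits."""
    runtime_dir = os.environ.get("XDG_RUNTIME_DIR") or tempfile.gettempdir()
    f = open(os.path.join(runtime_dir, f"{name}.lock"), "w")
    # LOCK_NB: fail immediately instead of blocking if already held.
    fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    return f
```

Because the lock lives in a per-user directory, no globally writable path is needed, and because it is an flock, nothing stale survives a crash.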

So the only programs affected by it are programs which:

- don't run as root

- don't use flock

- and don't really follow best practices introduced with the XDG standard either

- ignore that it was quite predictable that /var/lock would get limited or outright removed, due to long-standing efforts to remove globally writable dirs everywhere

i.e. software stuck in the last century, or in this case more like two decades ago, in the 2000s

But that is a common theme with Debian Stable: you have to fight even just to remove something we have known for 20 years to be a bad design. If it weren't for Debian's reputation, I think the systemd devs might have been more surprised by this being an issue than the Debian maintainers were about some niche tools using outdated mechanisms breaking.


> software stuck in the last century

OK, but suppose you have a piece of software you need to run, that's stuck in the last century, that you can't modify: maybe you lack the technical expertise, or maybe you don't even have access to the source code. Would you rather run it as root, or run it as a user that's a member of a group allowed to write to that directory?

The systemd maintainers (both upstream and Debian package maintainers) have a long history of wanting to ignore any use cases they find inconvenient.


Most very old software will depend on many other parts, so you often have to run it in a VM with an old Linux kernel + distro release or similar anyway.

And if not, you can always put it in a container in which the `/var/lock` permissions are changed to not be root-only. Which you should probably do for any abandonware anyway.


It is my opinion that three things are true:

1) A piece of software can be complete.

2) It is virtuous when a piece of software is complete. We're freed to go do something else with our time.

3) It's not virtuous to obligate modifications to any software just because one has made changes to the shape of "the bikeshed".


That's true, and compatibility is a grace. Software shouldn't need an update every month; that's a sign of its quality.

In this case, usage of /var/lock was clumsy for a long time. And not cleaning up APIs creates something horrible like Windows. API breaks should be limited to the absolute minimum. The nice part here is that on Linux we can usually adapt and patch code.

On the other hand, Linux (the kernel), glibc/libstdc++, systemd and Wayland need to be API-stable. Everybody dislikes API instability.


No, I didn't read the whole article. I follow debian-devel directly and watched all of it unravel, step by step. I have known the resolution since the day it was posted to debian-devel.

This was a general question to begin with.

> There is an option for the old behavior.

The discussion never centered on an option for keeping the old behavior for any legitimate reason. The general tone was "systemd wants it this way, so Debian shall oblige". It was a borderline flame war between more reasonable people and another party that yelled "we say so!"

> It is a security issue and modern replacement solutions exist.

I'm a Linux newbie: I've been using Linux for 23 years and managing Linux systems professionally for 20+ of them. I have yet to see an attack involving the /var/lock folder being world-writable. /dev/shm is a much bigger attack surface, in my experience.

Migration to flock(2) is not a bad idea, but acting like Nero and setting mailing lists ablaze is not the way to do it. People can cooperate, yet some people love to rain on others and make their lives miserable because they think their demands require immediate obedience.
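The concrete upside of flock(2) over /var/lock files is worth spelling out: the kernel releases the lock automatically when the holding process exits, even on a crash, so there are no stale lock files to detect and clean up. A minimal sketch (the file path and names are just for the demo):

```python
import fcntl
import os
import tempfile

def try_lock(path: str):
    """Return an open file holding an exclusive flock on `path`,
    or None if the lock is already held elsewhere."""
    f = open(path, "w")
    try:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:
        f.close()
        return None

# Demo: a child takes the lock and dies without any cleanup code;
# the kernel drops the lock, so the parent can take it afterwards.
path = os.path.join(tempfile.gettempdir(), "demo.flock")
pid = os.fork()
if pid == 0:
    held = try_lock(path)
    os._exit(0)          # "crash": no unlock, no unlink
os.waitpid(pid, 0)
assert try_lock(path) is not None  # lock was released automatically
```

With a LCK..-style file, the parent would instead have found a leftover file and had to decide whether the PID inside it was stale.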

> FHS isn't maintained.

Isn't maintained or not improved fast enough to please systemd devs? IDK. There are standards and RFCs which underpin a ton of things which are not updated.

We tend to call them mature, not unmaintained/abandoned.

> On Arch /run/lock is only writeable for the superusers. As user I value reliability and the legacy tools are usable.

I also value reliability and agree that legacy tools shall continue working. This is why Debian has been my primary distro for those same 20+ years.


I mean, /var/lock was kind of on the way to being superseded when FHS 3 was written 20 years ago. We have known it is bad design for a similar amount of time.

If the FHS hadn't been unmaintained for nearly 2 decades, I'm pretty sure non-root /var/lock would most likely have been deprecated over a decade ago (or at least recommended against). We have known for decades that cross-user writable global dirs are a pretty bad idea; if we can't even fix that, I don't see a future for Linux, tbh. (1)

Sure, systemd should have given them a heads-up, and sure, it makes sense to temporarily revert this change to have a transition period. But this change has been on the horizon for over 20 years, and there isn't really any way around it long term.

(1): This might sound a bit ridiculous, but security requirements have been changing. In 2000, trusting most programs you ran was fine. Today, not so much: you can't really trust anything you run anymore. And it's just a matter of time until it is negligent (as in legal liability) to trust anything but your core OS components, and even those not without constraints. As much as it sucks, if Linux doesn't adapt, it dies. And it does adapt, but mostly outside of the GPG/FSF space, and I think a bit too slowly on the desktop. I'm pretty worried about that.

> > FHS isn't maintained.

> Isn't maintained or not improved fast enough to please systemd devs? IDK.

more like not maintained at all for 20+ years, in a context where everything around it had major changes to its requirements/needs

They didn't even fix the definition of /var/lock. They say it can be used for various lock files, but also specify that a naming convention must be used, which only works for devices, and only for those not in a sub-dir structure. It also fails to specify that you should (or at least are allowed to) clear the dir on reboot, something they do clarify for temp. A footnote also says all locks should be world-readable, but that hasn't been true for a long time. There are certain lock grouping folders (also not in the spec) where you don't need or want them to be public, as that only leaks details which an attacker could maybe use in some obscure niche case.

A mature standard is one which gets fixes, improvements and clarifications, including wrt. changes in the environment it's used in; a standard which recognizes when there is some suboptimal design and adds a warning recommending not to use that suboptimal design, etc. Nothing of the sort happened with this standard.

What we see instead is a standard which not only hasn't gotten any relevant updates for ~20 years, but didn't even fix inconsistencies within itself.

For a standard to become mature, it needs to mature; that is a process of growing up: fixing inconsistencies, adding clarifications, and deprecating things (which doesn't imply removal later on). And this hasn't happened for a long time. Just because something has been used for a long time doesn't mean it's mature.

And if you want to be nitpicky, even Debian doesn't "fully" comply with FHS 3, because there are points in it which just don't make sense anymore, and they haven't been fixed for 20 years.


> In 2000 trusting most programs you run was fine.

Yes. This is why Microsoft didn't decide to base Windows XP on the NT kernel, and Windows 95 was nothing more than an (arguably very) pretty coat of paint on top of Windows 3.11.

It's also why multi-user systems with complicated permissions systems that ran processes in isolated virtual address spaces never got built in the decades prior to NT. All those OS researchers and sysadmins saw no reason to distrust the programs other users intended to run.



