Windows Subsystem for Linux is good, but not enough yet (akitaonrails.com)
41 points by castlegloom on Oct 14, 2017 | 61 comments


> In an ideal world, Windows should replace its entire NT underpinnings and either build their own BSD-inspired infrastructure (like Darwin) or create their own Linux distro using off-the-shelf Linux kernel as the main driver (probably can't because of GPL).

Why? Is a 'BSD-inspired infrastructure' or Linux the absolute best, or even significantly better? Is an OS monoculture a good thing?

> Apple had its miracle in the turn of the century, and OS X and iOS are terrific OSes. Best of the breed.

The user-facing portion is, but there are significant problems with some core features (semaphores, forking being very slow, file system notifications, HFS).

Also, for a fair comparison you should run the test suite inside a vboxfs mount.


This is the first thing I thought when I saw that comment. The Windows NT kernel is one of the best and most stable ever written. Why should Microsoft throw away all that investment?


Seriously, the original NT kernel was designed to be POSIX-compliant, to run a UNIX environment. It eventually grew away from that, but since Vista, Microsoft has been extricating unnecessary Win32 bits out to userland to take it back to its roots. I think the MinWin project was completed prior to Windows 10 / Server 2016.

Linux was developed as a Unix-compatible environment because Unix wasn't an option on commodity hardware. BSD isn't UNIX or Linux.

Why can't NT exist in the same space as a *nix-compatible system?


I feel like the entire appeal of WSL is that it lets you have your Linux environment and those Windows apps you just can't give up (Office, Photoshop) running together. You get to have your cake and eat it too. From that perspective, throwing out NT and building Yet Another Linux Distribution would be an absolutely terrible idea.


The other part is significantly better hardware support (including power saving and a wider range of devices), which would also be completely undone by a unixy kernel.

I really kind of fail to see what would be gained by rewriting a piece of kernel that works just fine (well, except that /. users would finally get to declare that they were right all along!)


I think the hardware support point is moot, because if Windows did switch to Linux (Microsoft wouldn't, but we are playing the "what if" game here), then hardware manufacturers would obviously port their drivers to Linux, or else nobody would buy their hardware. We've already seen this to be the case on Android, for example. Plus, Linux already has reasonably good support for older hardware.

I should stress again, just in case anyone misinterprets my comment, that I don't for one second think Microsoft will, nor should, switch to Linux.


They would port their new hardware drivers, maybe. It's the 20 years of backwards compatibility that keeps Windows strong in many places, especially government, military, corporate.


I did also address that point in the post you're replying to.


Unfortunately they aren't running together. They run entirely separate, in isolated subsystems with no way to communicate between the two. The GUI makes them seem more integrated than they actually are.


Linux with a Windows VM makes more sense for that use case, though.


Not always. If the Windows-only software needs access to a hardware device (e.g. a GPU), then there's still a significant barrier to setting that up. Hardware support is kind of there, but some vendors don't allow it with consumer cards, and you'll need two of them if you want both desktops at once. Along with that, it's not uncommon for even new hardware to get the details of this wrong. Currently AMD's Threadripper and corresponding motherboards don't work with it due to a bug with power management.


> Why? Is a 'BSD-inspired infrastructure' or Linux the absolute best, or even significantly better? Is an OS monoculture a good thing?

If you've used Unix-based OSes all your life, it makes sense that every operating system should be Unix.


Ehh, I'm not so sure this is true. I think people are open to, erm, analogously powerful forms of computation. I was under the impression that, at least before PowerShell, the ability to compose programs and form scripts was super limited, and by now bash is realistically far too entrenched to replace on the Unix side, and people don't really want to target two scripting languages.

However, there are alternative ways of doing this that seem quite powerful, like the Lisp machine: it didn't fail because it couldn't handle tackling abstract problems, it failed because Unix ended up doing it cheaper and faster. There might even be arguments for visual shell scripting.

That said, if you’re doing anything like unix, you might as well go full unix :)


> I was under the impression that, at least before PowerShell, the ability to compose programs and form scripts was super limited

Do you mean using batch files in cmd.exe? Sure.

But that's not really what cmd.exe "was for." It's what WSH - Windows Scripting Host[0] - was for.

A lot of people (perhaps those who have little Windows experience) don't realize this and think that Windows was entirely un-administrable without using the GUI. While that may indeed have been the case (and may still be) for certain tasks, you could get far with WSH. And using real (well, non-shell-script) programming languages: WSH was a host onto which programming languages could be added, such as Perl, for example. Or Python.

[0] https://en.wikipedia.org/wiki/Windows_Script_Host


Enlightening; thank you!


You were commenting on a blog post from a person who thinks that Linux on the desktop is a thing. Well, no, it is not.


It is a thing. It's a very small thing, but a thing regardless. Some of us have used Linux on the desktop since the 90s.


No, it is not. Not even Chromebooks are a thing. Android is a thing, and if they try hard enough they might turn it into a desktop alternative.


Linux on the desktop is a thing. At my previous job all the consultants (100+) used a custom distro based on Mint/Kali, with a Windows VM.

It's a thing. It might not be your thing, but it's a thing nonetheless.


What do you suppose I'm typing this on?


Dude, how pretentious are you? I'm writing this on an Acer Chromebook, btw, which I absolutely love.


More than one in 30 users is using Linux on the desktop. That does NOT include ChromeOS (which you may or may not classify as running a Linux desktop - I do not).

That's a "thing", much more so than e.g. Windows Phone market share, or the Baha'i religion.

[0] https://linux.slashdot.org/story/17/09/01/1639250/linux-desk...


My desktop has been running Linux for 18 years


I own a pair of clogs. Still, I'd hesitate to call it a thing.


I guess I've been doing all my work and meeting all my desktop computing needs with an imaginary box, then.


It was my desktop from '95 till 2008. Then, tired of the ever-recurring wrestling with drivers and multimedia, I bought a MacBook. Never looked back.


1995-2008? That is some serious self-flagellation. I have tried to use various Linux desktops from 2000 onward and, while they have improved immensely over the years, I can't see using one as my daily driver even today.


> In Linux distros, every binary is compiled against very specific library headers such as the kernel itself, Glibc and many others. Whenever one of those change, all binaries must be recompiled to run.

This is wrong. Only kernel modules are compiled against kernel headers, and for userland programs the kernel has always been binary-compatible.

Which is not to say that random binaries run bare on random systems, but this is a cultural rather than a technical problem (package managers insisting on having only one version of libraries, ignorance of the soname system). But with enough effort you can always get binaries to run on a system, provided you can get ahold of the libraries they expect, up to and including glibc.

> Which is why having source code available is so important

Back to front. The reason for the current situation is that it's expected that the source code is available, so there's no incentive to provide stability for binaries. Why bother? It's gonna be recompiled soon anyway.


Exactly. Linux keeps its ABI stable and is committed to not breaking userspace. Just bundling your app with all the libraries it needs should allow it to run on any distro. There are only two requirements: same machine architecture (e.g. x86_64), and the kernel must support all the features the app uses. That's why Go's statically linked binaries don't break on Linux but do on e.g. macOS.
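
To make that concrete, here's a minimal sketch of the same effect in C rather than Go (using gcc's -static flag; x86_64 assumed). The static binary carries its own libc, so its only external dependency is the kernel's syscall interface:

    /* hello.c - minimal sketch of the stable-kernel-ABI point.
       Build: gcc -static -o hello hello.c
       The static binary embeds its libc, so it should run on any
       x86_64 Linux whose kernel supports the syscalls it uses. */
    #include <unistd.h>

    int main(void) {
        const char msg[] = "portable across distros of the same arch\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* write(2): stable syscall */
        return 0;
    }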

I am quite excited about Flatpak; it should be quite useful for shipping applications to a wide variety of distros with a single binary.


I need to add that even glibc itself is quite stable and backwards compatible: https://abi-laboratory.pro/tracker/timeline/glibc/.

That's why some projects deliberately build their software on very old distros: the software then works on that glibc version and everything newer.


> glibc is also quite stable and backwards compatible

But not forwards compatible. A binary dynamically linked against a newer glibc won't run with an older glibc.
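
An illustration (my own example, not from the article): glibc versions its symbols, and a binary records the minimum glibc version each symbol needs. getentropy() was added in glibc 2.25, so this program built on a recent distro will be refused by the loader on a pre-2.25 system with a "version GLIBC_2.25 not found" error:

    /* needs_new_glibc.c - sketch of the forward-compatibility problem.
       Build on a recent distro: gcc -o demo needs_new_glibc.c
       Run on an older one and the dynamic loader refuses to start it,
       complaining that GLIBC_2.25 is missing. */
    #include <stdio.h>
    #include <unistd.h>  /* getentropy() is declared here since glibc 2.25 */

    int main(void) {
        unsigned char buf[16];
        if (getentropy(buf, sizeof buf) == 0)   /* symbol tagged GLIBC_2.25 */
            printf("got %zu random bytes\n", sizeof buf);
        return 0;
    }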


> In an ideal world, Windows should replace its entire NT underpinnings and either build their own BSD-inspired infrastructure (like Darwin) or create their own Linux distro using off-the-shelf Linux kernel as the main driver (probably can't because of GPL).

No, no! This is the exact opposite of the ideal world. What we need is for the NT kernel to stay, and for Win32 to get replaced.

Look, people from the Linux world can have trouble admitting this, but NT is a really good kernel. Better than Linux in some ways; not all ways, but some. The shittiness of Windows mostly comes from Win32. That's what should go away, and WSL is taking the first steps in that direction, even if Microsoft isn't planning on "going all the way" with it.


Out of curiosity, in what ways do you think the NT kernel is better than Linux?


Read about the history of NT and Dave Cutler, the designer behind it. He was a VMS veteran and brought heavy influence from there (https://en.m.wikipedia.org/wiki/Dave_Cutler). He is an amazing engineer, and still working.

I would love to return the question and ask why the Linux kernel is better.


I haven’t claimed that Linux has a better kernel.

Basing your claim that the NT kernel is better than Linux on the employment of a single developer seems pretty pointless. Smart people work on both kernels. That's not the same as naming a feature that is better in one or the other.


It was not my claim that the NT kernel is better ;) They are different. There are only three dominant kernels left: two commercial ones and one free. My opinion is just that the NT and macOS kernels are a little underestimated in popular opinion because they are widely misunderstood.


I'm going to assume OP is under an NDA and can't discuss details.

/s


He incorrectly assumes there's some kind of file system emulation going on. There isn't; WSL just runs on top of NTFS directly (NTFS doesn't have those pesky restrictions Win32 imposes).

Fact of the matter is, the file system has _always_ been a sore spot on Windows. It has been known to be slow for decades, and Microsoft engineers have flat out stated that it's not affecting their bottom line, so they have no incentive to change that.


Well, there is a little bit of emulation. We have to encode and handle Linux-style file permissions, which has some overhead.

We are working on making both the WSL layer and NTFS itself better. For example, in the upcoming Windows release, we have created a fast path for stat that should yield a modest speed improvement.
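
To give a sense of why stat matters: tools like git and test runners stat thousands of files, so per-call overhead dominates. A rough sketch for measuring it yourself (not an official benchmark; the path and iteration count are arbitrary):

    /* statbench.c - rough sketch: average cost of a stat() call.
       Build: gcc -O2 -o statbench statbench.c
       Run the same binary under WSL and on native Linux to compare. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "/etc/hostname";
        struct stat st;
        struct timespec t0, t1;
        const int n = 100000;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < n; i++)
            stat(path, &st);               /* the call the fast path targets */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / 1e3 / n;
        printf("%.2f us per stat() on %s\n", us, path);
        return 0;
    }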

We have a lot more work to do, though.


How do you explain that unzipping an archive onto the WSL filesystem takes 5 minutes, while on the same machine, on top of the native Windows filesystem, it takes 3 seconds?


For WSL to be perfect (for me), it would also have to have better integration with the native filesystem. This whole business where the WSL filesystem is a separate thing, hidden away in the user's AppData folder, that you can't write to using Windows tools, is a significant drawback.

I dunno, I guess I don't quite get who WSL is supposed to be for. The author of this article presents it as an alternative to running Linux in a VM. Which, OK, maybe that's exactly what it's intended to be. But I still wish that MS would include something like Hamilton C Shell as a built-in part of Windows, i.e. a unixy shell that provides unixy tools and the ability to compose them, but where the tools are designed to mesh well with the rest of Windows.


> In an ideal world, Windows should replace its entire NT underpinnings and either build their own BSD-inspired infrastructure (like Darwin) or create their own Linux distro using off-the-shelf Linux kernel as the main driver (probably can't because of GPL).

In an ideal world, Windows NT stays, but Win32 is demoted to be just one of the subsystems the NT kernel supports.


> Win32 is demoted to be just one of the subsystems the NT kernel supports

'Twas it not always thus?


In theory, yes: NT has always had the capability to host multiple subsystems. But apart from some attempts in the '90s that were aborted, it went totally unused until WSL arrived.


> some attempts

There have been Microsoft POSIX, Interix, and OS/2 subsystems.

https://en.wikipedia.org/wiki/Architecture_of_Windows_NT#Use...

EDIT: Re. "totally unused," Interix was available through Windows Server 2012, and supported through 2014.


A creator of Interix said that after he left Microsoft in 2004, he had to explain to the Windows HPC team that Interix existed. So "totally unused" may be an exaggeration, but it sounds like it was overlooked.

https://medium.com/@stephenrwalli/running-linux-apps-on-wind...


Not well known? Sure. GP didn't know about it, for example.

But "totally unused" is not an exaggeration, it is simply incorrect.

Thanks for the link. Interesting to read the Interix founder and open source advocate's personal tales of interacting with early-2000s Microsoft after an acquisition he didn't feel sufficiently compensated for. ;)


> My ideal setup is actually a MacBook Pro. But I am forcing myself to live out of the Apple ecosystems, and it hurts.

Why? Of all the digressions I would like to read about, this is a big one. I have been considering the same. The only Apple product I have left is a MBP. There are soooo many choices on the PC side of things. The paradox of choice hits hard when looking at Samsung, Acer, ASUS, Lenovo, Dell, and HP. Some $700 models have the same specs as $1500 Macs, with touch screens and pen support.


Without guaranteed full Linux compatibility, it's not worth the risk with many of those vendors. OEM (adulterated) Windows just isn't an option, and unless there's an ad-free version of Windows 10 (I haven't researched extensively, but I don't think I can just buy one that won't spam my Start Menu), I'm going to have to revert to another OS; having ads built in simply isn't an option.

I'm no Apple fan, but the competition is embarrassing at times.

For the record, my two notebooks are a 2014 MBP and a Dell XPS 13 Developer Edition running Arch. Both are equally competent as development machines.


My Start Menu does not have any advertisements. It did after a clean installation, but removing the tiles and initial apps solved that. It never came back.

Well, I do not have the Store on the tile part of the Start Menu. But I think it is fair game that the Store would show some advertisements when it is a tile.


The benchmarks were not done properly: there's no warmup, no multiple runs, etc. There are a number of things that would go wrong with that: cold caches, unloaded shared libraries that might already be in memory in the full Linux system in a VM, and so on.

Also, I should mention that the native Linux build might be optimized for ext2 or whatever, as opposed to NTFS.
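
For example, a minimal sketch of the warmup-plus-multiple-runs pattern (the workload here is just a placeholder; in the article's case it would be the test suite):

    /* bench.c - minimal sketch: warm up once, then take the best of N runs. */
    #include <stdio.h>
    #include <time.h>

    static void workload(void) {
        volatile long sum = 0;              /* stand-in for the real work */
        for (long i = 0; i < 10000000; i++)
            sum += i;
    }

    int main(void) {
        struct timespec t0, t1;
        double best = 1e30;

        workload();                         /* warmup: caches, page-ins */

        for (int run = 0; run < 5; run++) { /* several measured runs */
            clock_gettime(CLOCK_MONOTONIC, &t0);
            workload();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double s = (t1.tv_sec - t0.tv_sec) +
                       (t1.tv_nsec - t0.tv_nsec) / 1e9;
            if (s < best) best = s;
        }
        printf("best of 5: %.3f s\n", best);
        return 0;
    }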


I agree with the author on quite a few things, but not the main point.

First thing that bothered me: he was bothered by the need to `sudo /etc/init.d/postgresql start` to start up Postgres on every boot. Yeah, that's annoying, but easily mitigated by running Postgres (and MySQL and any other service you need) in the Windows layer. Postgres installs just fine on Windows, and uses the Windows service system, which has gone basically unchanged for decades, versus Linux with like 3 service management systems.

But the main thing is that his main objection is that his test suite is about half as fast as it should be. This is honestly not a big deal to me, and I do Rails full-time as well. IMO, even the fastest run of his test suite, at 2 minutes, is still way too slow to run regularly. If I were working on that app (and I've worked on many with test suites many times slower than that), I would have already switched to running the limited part of it that tests what I'm actually working on. I stick to about a 10-second max for tests that I run routinely while working. If it's longer than that, I'll run a smaller set that's under that, and run the full suite maybe once before pushing, or just let the CI server run it. I have to figure the number of people for whom that kind of slowdown is a deal-breaker is pretty low.


The problem is, this filesystem slowdown really affects productivity.

For example, I have a heavily customized Zsh and Neovim. On Linux with an SSD, the startup times for both are insignificant. Even on my system with a slow HDD (5400 RPM), the startup times are not bad unless the system is under heavy IO.

In WSL on a Windows setup with an SSD, startup takes half a second, hardly better than my Linux setup with an HDD! And not only are startup times bad; tab completion sometimes takes multiple seconds. This is unacceptable, especially on a system running on an SSD.

So while I concur that his CI times are bad either way, this filesystem slowness is really bad in WSL.


> easily mitigated by running Postgres (and MySQL and any other service you need) in the Windows layer.

Which comes with its own set of problems.

The entire point of MSFT releasing WSL was so devs on Windows could use Unix-y goodness on Windows. I think that reasonably applies to the use of Postgres too, and not just Ruby/Node/etc.

----

The proper way would be for Windows to autostart WSL when the user logs in / the system starts, providing a "Linux boot up" behaviour for the WSL userland and triggering it on actual system boot.


> Which comes with its own set of problems.

Such as? Seriously, I'm running it like that and it works fine. I'll grant that Ruby on Windows only kind of works, but no problems with PG or MySQL.

I do agree though that WSL ought to run as a Windows service and do the normal Linux boot process when that starts.


My only fear is that there is, or will be, a way to write Linux software that explicitly requires being run under it.


That'd be the Extend step of EEE (Embrace, Extend, Extinguish).

Though I strongly doubt it'd come to that. Not because I think the new MSFT is above EEE, but because I think it'd be hard to pull off with the community of Linux software writers and users.


Hmm, no performance comparisons with Docker for Windows.


Docker for Windows seems to be just a Linux VM running on Windows 10's Hyper-V support, so if VirtualBox is faster, Docker for Windows is probably faster too.


Docker is Linux + namespaces. Docker on Windows is a Linux VM + namespaces. He already compares with a Linux VM sans namespaces.


I would like to see that as well.



