- BSD has ZFS (but so do some Linux distros)
- Linux bad because BTRFS, so it's bad (well, you don't have to use it)
- BSD jails > anything on linux (I was hoping to see a clear outline of pros/cons, but the author's reasoning is very hard to follow, and the tone is very combative instead of constructive)
At this point there really isn’t any security difference between zones, jails, and hardened containers. That word “hardened” is the key. Zones are a primitive in Illumos; containers are not a primitive in Linux, they are a collection of security features. If you use them all then you can achieve parity with Zones, but surprisingly few projects make use of all of them by default (Docker certainly doesn’t).
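For illustration, here's a sketch of what "using them all" looks like with stock Docker; none of this is on by default, and user-namespace remapping needs a separate daemon-level setting ("userns-remap" in daemon.json) on top:

    # a sketch; flags are real Docker options, the image is just an example
    #   --cap-drop ALL                    drop all capabilities, add back as needed
    #   --security-opt no-new-privileges  forbid privilege escalation via setuid
    #   --read-only                       read-only root filesystem
    #   --pids-limit 100                  cap process creation
    #   --user 1000:1000                  don't run as root inside the container
    docker run --rm \
        --cap-drop ALL \
        --security-opt no-new-privileges \
        --read-only \
        --pids-limit 100 \
        --user 1000:1000 \
        nginx:alpine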
The main difference, then, is usability, which is almost always the biggest security flaw in any piece of software: either the security isn’t “on” by default, or it has a prohibitive learning curve for the user.
When someone talks about security, it's often hard to follow what they really mean; the discussion has always been qualitative, never quantitative.
I found the idea of a "Vertical/Horizontal Attack Profile" interesting; it may help quantify security when people argue over different container solutions or even VMs.
Honestly, I don’t know, but my guess is that it had some things turned off by default. Actually, K8s is the most encouraging project in this regard: it has opinions about how containers should work, so it is slowly adding all the secure options by default when containers are orchestrated to the various runtimes. Only a few features remain to be turned on (user namespacing being the most notable).
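Concretely, the per-pod opt-in looks roughly like this (a sketch; field names from the v1 API, image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-example
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
      - name: app
        image: nginx:alpine
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]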
I paraphrased you by cutting down your comment, hope that's OK.
>Actually K8s [...] are slowly adding all the secure options by default [...] (user namespacing being the most notable).
K8s RBAC is nice, and in particular auth-delegator really helps with odd-duck integration for the enterprise. But k8s is reliant on OS features to make user namespaces happen, and Redhat customers are still waiting for Redhat to backport a newer version of shadow-utils [1] [2] to RHEL 7 so that user namespacing works there.
Given enterprise certification and release cycles, "Wait for RHEL 8" isn't really a good answer for those kinds of customers. I'm not disagreeing with you, just saying some features of Kubernetes aren't available everywhere. Personally I'd really like to see user namespaces come to RHEL 7, as it helps with removing or mitigating privileged access.
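For context, the missing piece is the newuidmap/newgidmap setuid helpers from newer shadow-utils, which consult subordinate-ID ranges like these ("dockremap" is the name Docker conventionally uses with userns-remap):

    # /etc/subuid and /etc/subgid: each line grants a user a range of
    # subordinate IDs that may be mapped into a user namespace
    dockremap:100000:65536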
> - Linux bad because BTRFS, so it's bad (well, you don't have to use it)
The point is that there are no good filesystem options for Linux. ext4 has various issues, mainly due to being ext2 underneath; reiserfs is unmaintained; xfs and jfs have no recovery tools at all (and xfs additionally has the trailing-zeroes problem). All of those require you to use a separate system for RAID, for which Linux has no good options: true hardware RAID is expensive, while BIOS RAID (dm-raid) and software RAID (md) will kick disks out during a rebuild if they see a single bad sector, which per https://www.zdnet.com/article/why-raid-6-stops-working-in-20... is more likely than not with modern disk sizes. That risk is reduced (but not eliminated) by setting up regular scrubs, but tutorials and installation wizards don't always do that. BTRFS has issues up to and including data loss, userspace ZFS has major performance and reliability issues, and in-kernel ZFS on Linux has substantial licensing concerns.
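On the scrub point: with md, a scrub is just a kernel-triggered consistency check, and kicking one off manually is a one-liner (md0 here is illustrative); whether anything schedules it for you is exactly the gamble described above:

    echo check > /sys/block/md0/md/sync_action   # start a scrub of /dev/md0
    cat /proc/mdstat                             # watch progress
    # Debian's mdadm package ships a monthly cron job for this, for example;
    # not every setup has an equivalent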
> - BSD jails > anything on linux (I was hoping to see a clear outline of pros/cons, but the author's reasoning is very hard to follow, and the tone is very combative instead of constructive)
Agreed that the writing is bad, but BSD jails do have a much higher level of maturity than Linux containers. Linux is probably catching up now, given the sheer weight of numbers of people using containers, but I still wouldn't trust them quite as much as BSD jails just yet; various kinds of flakiness and security vulnerabilities still seem relatively common.
I'm naturally inclined towards BSD simply because of its extensive history (though I've spent little effort on it thus far), but even so this article is wholly unconvincing... of anything. It really doesn't say anything about ZFS, jails, or even the current state of affairs beyond a respectable number of packages (and, more importantly, Linux compatibility); it just assumes you already accept that they're better, and for some reason haven't already switched?
It also badly needs editing.
Hopefully someone has a better argument for FreeBSD than this... mess
Yeah, TFA reminds me a bit of the passionate blog posts and articles about switching from Windows to Linux from a decade or two ago. Something that would fit better in a BSD community such as Lobste.rs than on HN ;-)
> PS: I haven’t mentioned both softwares, FreeBSD and SmartOS do have a Linux translation layer. This means you can run Linux binaries on them and the program won’t cough at all
I don't know about SmartOS, but on FreeBSD, Linux compat is at kernel version 2.6.38, and so most programs don't work.
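You can check what a given box advertises via the linuxulator's sysctl knob; note that raising it only changes the version reported to Linux binaries, it doesn't implement the missing syscalls:

    $ sysctl compat.linux.osrelease
    compat.linux.osrelease: 2.6.38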
I haven't run SmartOS for about a year, but I was using it to run Linux binaries. It was fundamentally great. Stable, efficient ... great. Joyent were nailing compatibility bugs in a matter of days.
A couple of small things - one is that entire distributions "just as they are" are unlikely to work, in much the same way that trying to boot a Docker container doesn't. Lots of things rely on /proc and cgroups, and these are big, ugly interfaces that are never going to see parity between the two OSes. /proc is (amusingly) built as a r/o filesystem that has an htop-esque thing writing state onto it constantly.
The big one is memory allocation. Linux will let memory allocations fail. It will also just shoot apps in the head if it wants to. Solaris does not do such shit and will only return from the malloc call when it has actually allocated the memory - even if that's next week some time. On more than one occasion I have been completely locked out of a SmartOS box because of this, and one ends up having to be really, really careful with allocation strategies as a result.
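A minimal sketch of the Linux side (behavior depends on vm.overcommit_memory; the comments describe the default heuristic mode on most distros):

    /* Allocate far more than RAM in modest chunks. On Linux with default
       overcommit, the mallocs typically all "succeed" and the OOM killer
       shoots a process once the pages are actually touched. On
       illumos/Solaris (or Linux with vm.overcommit_memory=2), malloc
       returns NULL, or blocks, instead of lying. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const size_t chunk = (size_t)256 << 20;   /* 256 MiB per step */
        size_t total = 0;
        for (;;) {
            char *p = malloc(chunk);
            if (p == NULL) {                      /* the honest failure path */
                fprintf(stderr, "malloc failed after %zu MiB\n", total >> 20);
                return 1;
            }
            memset(p, 1, chunk);                  /* touching pages springs the trap */
            total += chunk;
            printf("committed %zu MiB\n", total >> 20);
        }
    }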
SmartOS will run Linux distros in an LX zone, much like Solaris did. There are some limits around what syscalls are available, but generally most software works, or a bug can be opened. There is support for Linux containers natively, but last I checked only Docker v1 repos were supported; v2 is forthcoming. Virtualization-wise, bhyve is supported, as is KVM.
If you are running open source, there are usually SmartOS packages in pkgsrc, which makes SmartOS zones a CPU/memory/security bargain. https://pkgsrc.joyent.com/
I have the longest beard (not yet white) in my company (1000+ employees), I am primarily the AIX guy (800+ virtual machines), and I have been using FreeBSD since 2002, IIRC. I also do plenty of Linux, but AIX and FreeBSD, despite being totally different, seem to be the rock-solid options. Given that AIX runs only on the freakishly expensive POWER platform, FreeBSD can be a good alternative. ZFS is great too.
Could you give some of your personal reasons for saying so? So many online sources just say how great it is without any specific detail. It's not like XFS on Linux has some obvious problems - it is rock solid too.
Let's say I want to run several different operating systems simultaneously from a single big-ass SSD. With Linux, XFS and Xen, I can try to set up LVM and coerce Xen into using those logical volumes for VMs, but it is a pain to get there with the current state of XFS and Xen documentation. Is this possible/easier with FreeBSD and ZFS?
Context: I use XFS as the default file system on all HPC compute nodes, and a few other servers that run CentOS7.
I built a FreeBSD+ZFS backup server recently to mirror our primary NAS.
The experience with ZFS was so smooth and logical. All I needed was the ZFS section of the FreeBSD handbook, and I actually understood what was going on underneath to a reasonable extent. No magic "run this command" steps without explanation. Once zpools are set up, ZFS datasets provide the equivalent of the logical volumes you were asking about. You get a lot of tunables at that level, and they come with sensible defaults. The whole process is simple; heck, you can even make a file act as a zpool for learning or testing purposes.
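Roughly the whole flow, condensed (device names are illustrative):

    zpool create tank mirror ada0 ada1          # pool with redundancy, one command
    zfs create -o compression=lz4 tank/backups  # a dataset ~ a logical volume
    zfs snapshot tank/backups@nightly           # instant snapshot
    # and yes, a plain file can stand in as a vdev for learning/testing:
    truncate -s 1G /tmp/vdev0
    zpool create testpool /tmp/vdev0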
Once done, I tested performance rudimentarily and saw that it was almost saturating the network pipes, beating my expectations without any time spent on custom perf tuning.
And I get snapshots, online compression, and software RAID out of the box.
I may be able to achieve most of it with XFS, some specific hardware, and a lot of hours of tuning, but it will never match the experience with ZFS.
Thank you very much for this. I'm thinking about using ZFS on a single big SSD, as storage for several simultaneously running VMs. I'm aiming for easier administration and advanced features like error detection/correction on the drive. Would ZFS make sense in that case (a single physical drive)? Or do I have to put in more physical drives to get the actual benefits of ZFS, like you did with the NAS backup?
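A sketch of the single-disk situation (device name made up): checksums still give you detection with one drive, and copies=2 stores duplicate data blocks for limited self-healing at twice the space cost; actual redundancy still wants a second disk:

    zpool create tank /dev/ada0   # single-disk pool: corruption gets detected...
    zfs set copies=2 tank         # ...and duplicated data blocks can self-heal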
> Could you give some of your personal reasons for saying so?
For one example, ZFS checksums spot all manner of erratic, intermittent idiocies--memory corruption, data bus corruption, disk controller corruption, etc.
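A scrub makes ZFS walk and verify every checksum, with per-device error counts showing up in the CKSUM column:

    zpool scrub tank
    zpool status -v tank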
AIX sort of died in the 2000s. Back in the late 90s the only reason to have AIX systems was as gateways to the AS/400 systems. You didn't generally use AIX systems for anything else back then.
Back in those days, AIX did not use the ELF binary model; rather, IBM went their own path and imported the Windows concept of definition files and import libraries for generating shared objects.
AIX shared objects had more in common with the Windows world than with other UNIX variants.
My biggest personal grievance is how it handles DNS, or rather how unpredictable and undocumented the process is. For example, if everything else fails, they actually hard-coded usage of 8.8.8.8. Not sure if that's still there, or configurable. The handling of actual security reports and CVEs is rather bad, and you can see many examples of that on GitHub.
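If it's systemd-resolved's fallback you mean: the compiled-in default list (Google's servers among them) is still there, but it only kicks in when no DNS servers are known at all, and it can be overridden; a sketch (the second address is just a documentation-range example):

    # /etc/systemd/resolved.conf
    [Resolve]
    FallbackDNS=            # empty list disables the compiled-in fallback
    #FallbackDNS=192.0.2.53 # ...or point it at servers you actually trust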
I’m not anti-systemd, and overall I welcome a lot of the contributions Lennart made in the past (PulseAudio, e.g.). With systemd it’s just too much functionality in one place. Do we really need an NTP alternative in systemd?
For client machines looking for a very light alternative, perhaps. A project called Ntimed was started some time ago here at Network Time Foundation (nwtime.org), but the development stalled.
We'd really like to revive it, as there is demand for a lighter NTP client.
Administration differences, not being designed according to UNIX principles, and personal dislike towards the developer(s) come to mind.
Administration differences are the most direct effect of systemd, but after learning its various compatibility layers and watchdog functions, it can be tamed into a great helper.
Not being designed according to UNIX principles (composition of small, single-role applications which do their jobs very, very well) also bothers me, but we need to be pragmatic here. Inter-process communication cannot replace in-memory communication in all cases, it seems.
The hate towards the developer(s) stems from his attitude, his history (he's also the developer of PulseAudio, IIRC), and somewhat from his coding style. He seems like the kind of guy who can defend his work, do it in an unorthodox way, make it work, and get it accepted. At the end of the day, the hate is a feeling and is personal. I think we need to be pragmatic here too. That said, better communication from both parties would benefit everyone and everything in the long run.
At the end of the day, we are using systemd, and it's working. If we want a better system, we need to improve it. systemd is the state of the art right now, and we should either write something better or improve it if we have any beef with systemd itself.
OTOH, beef with developers is something that should stay out of software and the community in general.
My original comment was trying to explore whether there are more reasons to dislike it besides the original points that I stated above.
> Administration differences are the most direct effect of systemd, but after learning its various compatibility layers and watchdog functions, it can be tamed into a great helper.
It's both a negative and a positive. It's a change for sure but nowadays I can log into an ArchLinux, Fedora, Ubuntu, Debian,... machine and just leverage systemd instead of whatever local idiom (and its various assorted corner-cases) the distro decided to use. It's becoming the happy path of the XKCD comic about standards.
The hate largely stems from the way it was rolled out - the linux community at large was left feeling like it had been told it would adopt this, and there was no choice, and that rubbed a lot of people up the wrong way.
It's also not 'the unix way' to have tightly bound, large blobs, and the team behind it seemed to take pains to say that lots of other things would rely on it, like udev, so there really was no choice.
So the hate comes largely from the ideological side.
Personally, having worked with it, I've rather liked writing service files and timers to get stuff done; it's much quicker to get something up and running than with the old system.
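For instance, a minimal oneshot service plus a timer (names and paths made up) is about all it takes:

    # /etc/systemd/system/report.service
    [Unit]
    Description=Nightly report

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/make-report

    # /etc/systemd/system/report.timer
    [Unit]
    Description=Schedule the nightly report

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Then systemctl enable --now report.timer and you're done; no lock files, no PID handling.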
> The hate largely stems from the way it was rolled out - the linux community at large was left feeling like it had been told it would adopt this, and there was no choice, and that rubbed a lot of people up the wrong way.
It wasn't just a feeling, it really was just like that. Had the architecture of systemd been different, it would have been possible to decouple it from other components and be able to choose an init system just like in the not so distant past. As it is now, practically speaking, there is no choice.
It does, but packages aren't obligated to provide sysvinit scripts for your non-systemd-using convenience.
Over time, the number of packages that do will fall, because engineer convenience and cognitive bandwidth are incredibly high priorities; "that's the crusty old way while this is the new hotness"; and, "oh, but we need this one little thing systemd offers," &c.
And before you know it, those of us on the ops side are stuck supporting something we loathe, and having to rewrite a bunch of scripting and tooling in response to discovering new and exciting ways the "new hotness" ends up breaking things that worked perfectly well until it came along.
Literally, we're facing another iteration of that very cycle right now: our non-production environment refresh process suddenly fails to complete without being manually thumped at multiple points. Why? System fucking d.
But no: "It works fine for us, so it can't be a problem for anyone else, right? Right?"
Debian recently reached out to Devuan for sysvinit maintenance. Devuan has, in spite of a somewhat acrimonious start, delivered what they set out to do: a Debian without SystemD. So it might be worth considering when dist-upgrading.
You're right, I didn't take the scripts part into consideration, also the problem of breaking pipelines is real, there's no doubt about that.
However, dividing teams into developers and (Dev)Ops sides like warring factions is not healthy. I'm also an ops guy (more precisely, a sysadmin who supports a lot of users and servers) in my day job, and I develop stuff for the academic world and for my own pleasure in my free time, so I know both sides of the so-called moat.
Using features just because they're hot is not OK for any application, but not using features which can solve real problems is unwise in my book. We also had hacks which were working perfectly fine until systemd came. We evolved, changed how we do things, and moved along.
> "It works fine for us, so it can't be a problem for anyone else, right? Right?"
I didn't want to tell or imply anything resembling this. I just gave my own opinion, based on my experience.
> Over time, the number of packages that do will fall, because engineer convenience and cognitive bandwidth are incredibly high priorities; "that's the crusty old way while this is the new hotness"; and, "oh, but we need this one little thing systemd offers," &c.
The thing that many people don't understand about systemd is that they don't have to sell it to users or developers; they have to sell it to (and design for) distributors. Lennart and Kay know that because they are both working for distributors (Redhat and Suse, respectively).
To distributors, the promise of systemd is the promise of less integration work. It paves over a ton of useless idiosyncrasies in distributions (e.g. compare systemd's /etc/os-release with the previous /etc/*-release files that you couldn't even call by name because they were called something different on each distribution). Initscripts are the biggest bullet point on that list: initscripts were always written by the distribution, because each distribution had its own scripting layer on top of sysvinit to make it a bit nicer, and each of these was incompatible with those of other distributions. Systemd units, on the other hand, can be maintained by the application developers themselves, and several distros (e.g. the patch-hating Arch Linux) actively push systemd units upstream.
When the application developer takes care and has only one target (the de-facto standards forced by systemd), life gets a whole lot easier for distributors. And that's why distributors choose systemd.
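For instance, the file mentioned above is one name and one format on every distro (values here are illustrative):

    # /etc/os-release
    NAME="Debian GNU/Linux"
    ID=debian
    VERSION_ID="9"
    PRETTY_NAME="Debian GNU/Linux 9 (stretch)"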
SystemD was selected by the Technical committee, which was very far from unanimous in the decision.
SystemD caused problems at the time and there was no reason not to postpone the decision to Jessie+1, which is what Debian usually does in such cases: "more discussion needed".
Debian broke their social contract, showing that their priority was no longer with their users.
> SystemD was selected by the Technical committee, which was very far from unanimous in the decision.
I didn't know that. I thought the poll was open to anyone to vote for the relevant time window.
> Debian broke their social contract, showing that their priority was no longer with their users.
Isn't this a bit of a bold claim? AFAICS, the poll spotted the potential problems and general grievances pretty well (like binary logs, etc.), and concluded that these reservations should be addressed before systemd was included in Debian.
I'm not trying to defend systemd here. As I said in my comments, I disliked it at first, and then I learnt it and got used to it. My teammates had stronger reactions to it, and then they liked it too, after we configured it to behave the way we like. I didn't migrate to systemd voluntarily. It fell on me from above in a pretty hard way.
If you don't know about the referral to the Debian Technical Committee, then there's a lot that you do not know (including to be wary of threads where "systemd" is mis-spelled "SystemD", as that is a common shibboleth).
There was a whole Hoo-Hah. It was years long. People resigned over it. The votes and discussions are often misrepresented by people recounting the tale. (Common misrepresentations: presenting it as a two-way contest between van Smoorenburg init+rc and systemd, when OpenRC and Upstart were both included, and both favoured over van Smoorenburg init+rc; presenting it as a single vote when there were several, including some votes on Debian General Resolutions by all of the club members; presenting the wiki pages written by the partisans and ignoring the lengthy technical reports from the TC members.)
I'm aware of Debian's Technical Committee, but I was unable to follow it all along during that so-called discussion.
During this period, I was a developer and so-called tech lead of a Debian derivative distro. We were working on some absurd requirement about generating system responses to certain events, pulling continuous all-nighters. One day systemd packages arrived in Debian (sid or testing, I don't remember), and using them solved our problem. This is how I became aware that Debian had packaged systemd and that it was going to be the next default init, that's all.
As I said before, I don't blindly support systemd or anything. I've just been using Debian for 14 years or so, and I ride on whatever comes along. I don't have the time to devote serious development effort to them, though I wish I were able to contribute to Debian or any serious FOSS project. Hmprhf.
Fwiw, some distros have stuck with different init systems: Void uses runit, Devuan (Debian fork) uses sysvinit, Gentoo uses OpenRC[1]. There are probably more that I am not aware of.
Systemd has not given me any trouble, yet, but I run Void in a VM, and it boots extremely fast.
[1] I believe one can also use different init systems on Gentoo, after all it is Gentoo, but I have not touched it in years, so I do not know for sure.
Well, yesterday I was messing with hostapd custom configurations, and starting/stopping it from systemd seems to give absolutely no feedback. It doesn't even tell me if it started or not. Very advanced and helpful, eh?
As an (un)written rule, UNIX commands are mostly silent, and output something if something went wrong. This is one of the things that systemd got right I think.
Yes, it feels awkward at first, but systemd's behavior is not wrong.
edit: oh, downvoted, OK. This is actually an outcome of the rule of silence, which is part of the UNIX philosophy [0].
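Silence on success doesn't mean the information isn't there, though; for the hostapd case above, the usual idiom is to ask for it explicitly:

    systemctl start hostapd && systemctl status hostapd
    journalctl -u hostapd -e     # the unit's full log, jumping to the end

status tells you whether it started and shows the last few log lines.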
My Ubuntu 16.04 LTS requires a 'service networking restart' after any update that touches systemd. Having to log onto the VM console to do that kinda sucks.
Because the terribly slow startup times and scripts instead of configs are such an advantage. FreeBSD people want to build a systemd-like thing as well; they just don't have enough resources atm [1] (sorry for the Reddit link, the original is in German).
Personally, I never found startup time to be much of a problem, on any OS. But having an init system that automatically restarts crashed services and can properly shut down services, including any child processes they might have spawned, sounds nice.
One thing I do like a lot about the BSD init system is that the interface is a text file with a simple format. I hope when they replace good old rc, they keep the interface as simple as that.
>Personally, I never found startup time to be much of a problem, on any OS.
Well, startup time is a matter of convenience, not a fatal problem, but I just like that my laptop with Arch+systemd+systemd-boot starts instantly (3 seconds + 6 seconds for firmware), while BSD on my other laptop starts soo damn slow. (BTW, slow shutdown pisses me off even more.)
Though the real issues with rc are much more serious, and eloquently described here (i.e. you don't have proper login management, service management, or device and policy management, much less in a unified way):
I first started using GNU/Linux (with sysvinit) in 2000. I still remember a) finding runlevels incredibly confusing, b) finding the entire concept stupid when I finally understood them, and c) the sense of relief I felt when I discovered FreeBSD in ~2003 and the fact that it pretty much did not have this nonsense. There used to be a single-user mode that was required for certain tasks, maybe that still is a thing - but even so, it was much easier to understand than System V init.
So let me rephrase my statement: I hope that if they replace the good _new_ rc, they keep the interface just as simple. For all I care, it would not have to be exactly the same, but I sure hope it stays as simple. The fact that on FreeBSD I just need to look at a single text file, with an easy-to-read-and-edit format, to see what is going on, is one of FreeBSD's greatest virtues, IMHO.
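For anyone who hasn't seen it, that file is /etc/rc.conf, and the whole "format" is shell variable assignments (a sketch; names illustrative):

    hostname="bsdbox.example.org"
    ifconfig_em0="DHCP"
    sshd_enable="YES"
    ntpd_enable="YES"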
It really didn't. I can tell you from first-hand knowledge that the majority of the internet by any measure - but especially the high-scale systems where I've worked, whose small community I know well - uses neither Docker nor Kubernetes. That's not an indictment of either technology; it's just simple fact. Fad tech appears a much bigger tiger in echo chambers like this website, but the majority of companies aren't even using languages that are popular with this crowd, let alone the tooling. The jury is still out on whether these tools have staying power or contribute meaningfully to the industry, and we'll have to wait about 5 years to tell. In the past you "were an idiot" if you weren't using C++ or CORBA or RPC or Java or SOAP or message queues or Ruby on Rails or AWS or Hadoop or OpenStack or blockchain or machine learning or... all of these were fads, and all that mattered then and now for consumers of the tech is solving business problems cost-effectively.
Software developers are an incredibly easy demographic to influence with marketing. There is an industry of startups that have been minting money on this for a long time. But it's important to know that a lot of software development tools we use are just fashion statements.
It's not about current market share but about adoption rates. Which container technology/toolset/ecosystem is more likely to be adopted by the average software shop today?
It's the wrong question unless you are in the business of profiting off of container tech directly. What are you doing for the stakeholders delegating your paychecks, and the person authorized to fire you? If they are happy, then it doesn't matter what the company uses. Most transactions and most of the world's wealth are stored and processed on operating systems and frameworks most people have never heard of, which will outlast either of these by decades, but that doesn't necessarily mean you should or shouldn't use them.
I'm well aware that there's tons of time proven software that serves us well and will continue to do so for some decades. Yes, most of this is not written in the PL/framework combo du jour, true. You seem to think I want everyone to switch to Docker/Kubernetes. I don't.
Fair enough, and the parent topic about /switching/ between broadly similar technology like operating systems is where departments usually go awry. My axe to grind is about initial selection when starting from scratch (do you really need fad tech? Statistically, likely not), or playing nicely in an existing environment and being very careful with new tech (can the company internalize and support fad tech? Statistically, likely not).
OP asked what stops people from switching to BSD. I agreed that for me, too, the missing Docker toolchain/runtime is a dealbreaker, as I have to work with these technologies on a daily basis. As Docker and friends see rapid adoption, others will think twice about whether to switch to a BSD and have to run a Linux VM, too. I'm not sure what you're after here. You seem to think that Docker and Kubernetes are "fad technology" and are inferior to some alternative? Which orchestrator would you recommend? Which container tech?
Again, it's not a judgement against the technology. What I'm getting at is that the claim that fad technologies have won the industry is false, and too early to call at that; in the particular company you work at, you may have to use Docker and Kube. In industrial terms, they are far from required and are not used in the majority of deployments. Proprietary schedulers or other deployment machinery are running most workloads. A distant second by cluster size is Mesos, which has passed its fad phase.
I am currently using Nomad, which works on multiple platforms, and that is a requirement for my particular setting. I don't currently use containers in production environments and only see negatives to introducing them into this particular setting. This is not general advice to use Nomad, just a counterpoint to any particular fad tech being an industrial requirement.
Very much so, and nearly one decade separated between the three in their respective fad cycles. Being a fad isn't a judgement against the technology, some will fail and some will prevail.
I came into IT on FreeBSD back in 2004. Was a real fanboy and couldn't stop ranting about it. My first IT boss at the time got me into FreeBSD and PostgreSQL without actually knowing why. I had no real experience to back my opinion.
FreeBSD was my primary OS, both at work and at home.
To fast-forward a bit: I handed over my last BSD environment around 2012, and my last Solaris environment around the same time. After that I only had Linux responsibilities. Around 2014 I was running Debian as my primary OS and was sent on a RHEL course in Stockholm.
The course leader sold RedHat to me so well that the very first night at the hotel I reinstalled my laptop from Debian to Fedora. Haven't looked back yet.
Been professionally building systems on Linux since before that. So I'd like to say my perspective spans both sides of this argument. I've tried to summarize why I'd rather use Linux and why you might consider using BSD.
The only reason I can find to justify using BSD is that the open source world needs competition. It would be dangerous if we only had one OS.
But all my professional sense of getting things done and keeping them stable tells me not to use it, because the userspace tools are horrible compared to Linux. Correction: I do use OpenBSD on my router at home, but that's it. OpenBSD has a reputation for being very secure (perhaps earned in part by having a relatively small user base) and I rarely need to log in to make changes.
And I believe most of that stems from the community being much larger and therefore the software better tested and more mature.
I'm talking mainly about operational security and stability. I believe Linux does this much better than any BSD.
Fewer critical bugs, fewer issues with package management, better-maintained packages, and more binary packages available that don't require building from source. Pretty much all things that rely heavily on a strong community.
More programmers available to fine-tune userspace tools and make them hum. A lot more docs available from hobbyists and officials.
Some of my friends today are hardcore BSD geeks and they often have to deal with system-breaking bugs, even kernel panics. I can't remember the last time I saw a kernel panic on Linux. Almost every day I see at least one issue in our chats dealing with package management or kernel and system bugs. Bugs in jails, for example, or network management related to jails. They're often following the latest releases to remedy these, but that's even worse than using Fedora on servers, IMO. Bleeding-edge software just to avoid the bugs and be able to use the latest so-called virtualization technology available.
About FreeBSD: I've heard that its software lags behind many fast-updating Linux distros but is extremely reliable. How does Debian Testing compare to something like TrueOS, i.e. the FreeBSD-CURRENT stuff?
P.S. When I say "lagging behind" I don't mean the server things, but rather the developer side.
The base OS and the installed 3rd party software (ports/packages) are on different release models.
The base is either stable/LTS (~5 years support) or rolling if you track -CURRENT.
For the 3rd party software it is either rolling, or stable via short-lived quarterly branches.
Most server-side software is usually pretty up to date, but some desktop-related tools can be outdated for a while. You might want to take a look at [0] and especially at the commit history of some fairly popular desktops like Gnome3 [1] or KDE5 [2]. They got updated recently, but it took quite a while.
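If the quarterly branch is too slow for your taste, pkg can be pointed at the rolling branch with a standard repo override (a sketch of the usual incantation):

    # /usr/local/etc/pkg/repos/FreeBSD.conf
    FreeBSD: {
      url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
    }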
There used to be some projects using Darwin outside of macOS etc, but they are long dead. People have apparently had a harder time even getting the kernel to boot nowadays for various reasons.
The selection of software that simply works on Linux (especially Debian & derivatives like Ubuntu) eclipses that on FreeBSD.
I've seen attempts at bringing the Linux ABI to FreeBSD, including the familiar and popular apt package manager. If and when it's as easy to "sudo apt-get install package" on FreeBSD as it is on Ubuntu, it will see an increase in adoption both in the server/production setting and as a development environment.
> I've seen attempts at bringing the Linux ABI to FreeBSD, including the familiar and popular apt package manager.
That's ridiculous. `pkg` on FreeBSD is a thousand times more consistent and user friendly. Perhaps you are talking about the days before pkg-ng (next generation) became the default, but now installing anything is as easy as `pkg install foo` and it just works. It even updates automatically so URLs aren't out of date.
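A typical session, for anyone who hasn't seen it (the package name is just an example):

    pkg update          # refresh the repository catalogue
    pkg search vim      # find packages
    pkg install vim     # install, with dependency resolution
    pkg upgrade         # upgrade everything installed
    pkg autoremove      # drop now-orphaned dependencies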
I have a deep respect for the BSD family and their way of doing things, but regardless of OS and distro, apt is probably the most advanced package manager currently.
When used with aptitude, it's very easy to use, extremely powerful, and flexible. Also, it has the most extensive knowledge of package details, in terms of dependency graphs and tracking modifications to the files that belong to packages.
While this is certainly true, have you used pkg in the last year or so? It's come along in leaps and bounds. The solver is now on a par with apt's SAT solver, and overall it's (IMO) almost as good as apt at this point.
Can you elaborate on this? I'm not sure which version added it but /etc/installurl is configured to use cdn.openbsd.org during install. Prior to this, I believe /etc/installurl was populated with the mirror you chose during install. That had been around for at least a couple/few versions, IIRC.
I'd certainly not discourage having a play around with it!
I started trying it out with FreeBSD 10.0 a few years back. pkg was rough around the edges back then. If you had a lot of packages installed, like lots of development tools and desktop environments, it wasn't hard to get it into a situation where it would segfault or run into dependency problems during an upgrade. Right now, with 11.2, I've not run into any crashes for a good 2-3 years or any intractable dependency problems in a good 2 years or more. It's probably not perfect, these are very complex tools which are hard to verify, but it's certainly eminently usable from the point of view of an admin/end user/casual ports contributor like myself.
In fact, when upgrading from Ubuntu 18.04 to 18.10 a few weeks back, I did actually run into a problem with apt refusing to upgrade because it couldn't work out the upgrade path. I had to remove a few packages which were causing problems, then re-install them after the upgrade. Whether that was apt or the packages having genuinely broken dependencies I'm not sure since I didn't investigate in detail.
> , apt is probably the most advanced package manager currently.
Apt is certainly the best package manager in my opinion. I haven't used rpm-based package managers in a while, but I doubt they lack feature parity at this point.
I think the magic ingredient of apt that makes it work so seamless is the thousands of diligent maintainers behind it. That's the "feature" that none of the other packaging systems have at the moment.
I also have a deep respect for the raw no-nonsense approach of the various BSDs, but every time I try running it in a professional capacity, the apt eco-system lures me back.
I use apt, pkg-ng, and yast/zypper/rpm. Both apt and pkg-ng work well. Apt has --really-long-switches that are easy to remember, but not quite as many shortcuts for them. Sometimes pkg-ng annoyingly upgrades itself, but that's it. Rpm is rather annoying, but zypper and yast make things easy to manage. I remember the old RedHat 4.2 days when I had to work with rpm exclusively; it has an inconsistent CLI with weird command-line switches.
You shouldn't compare rpm with apt. rpm is a different layer, equivalent to dpkg in the Debian world.
As you point out, the apt layer/functionality is provided by zypper, yum, dnf, etc., and the main difference is that it is not unified across the main rpm distributions. Still, Fedora/Red Hat's yum replacement, dnf, is built on SUSE/openSUSE zypper's solver library (libsolv).
Have you seen the current size of the ports tree? It's on the scale of a distribution the size of Debian.
While there is certainly some Linux-specific software which doesn't work on FreeBSD (and vice versa to a smaller extent), this is a tiny fraction of the total amount of portable free software packaged for both.
There are degrees of difficulty. If you're developing on Linux there are numerous portability traps you might fall into if you're not paying attention and testing on other platforms like macOS or BSD. From glibc-specific functions and structure members, to libraries which are Linux-only. You also have compiler differences, e.g. GCC vs LLVM.
None of these are usually much work to correct. But they can creep in without you realising if your CI isn't testing on multiple platforms.
The most annoying one I had recently was a service using PAM for authentication/authorisation. Linux-PAM is both Linux-specific and subtly incompatible with Solaris PAM (the original implementation) and OpenPAM on FreeBSD. This means you have to have three sets of ifdefs for the different implementations, which is annoying. But if you developed for Linux and never tried building or testing on the other platforms, you would never appreciate that the problem even existed!
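A minimal sketch of one such ifdef dance, from memory (the constness of the conversation arguments also differs between implementations; the signature shown is Linux-PAM/OpenPAM's):

    /* Conversation callback: Linux-PAM and OpenPAM treat msg as an array of
       pointers (msg[i]); Solaris PAM treats it as a pointer to an array
       ((*msg)[i]). Getting it wrong "works" on one platform and reads
       garbage on the other. */
    #include <security/pam_appl.h>

    static int conv_fn(int num_msg, const struct pam_message **msg,
                       struct pam_response **resp, void *appdata)
    {
        (void)resp; (void)appdata;
        for (int i = 0; i < num_msg; i++) {
    #if defined(__sun)
            const struct pam_message *m = &(*msg)[i];   /* Solaris layout */
    #else
            const struct pam_message *m = msg[i];       /* Linux-PAM / OpenPAM */
    #endif
            /* ... switch on m->msg_style and fill in *resp ... */
            (void)m;
        }
        return PAM_SUCCESS;
    }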
It's a bit like shell scripts that call for sh, but have bashisms in them.
Thanks for the comments. It's easy to forget the norm is not testing on multiple compilers and OSs. BTW, I haven't been testing my stuff on ARM and I probably should. Not that I expect anything to break, but I don't want the stuff I don't expect to break surprising me.