I've got flashcache running on a 20GB SSD partition fronting a 120GB spinning disk partition, on a desktop at home. It feels (anecdotal, subjective) as fast as a pure SSD for all normal desktop things. I also ran a small MySQL db through this setup for a few weeks, and that improved noticeably over a single spinning disk, especially when there was IO load from other processes.
The install procedure is a little rough around the edges, so I'm looking forward to having that more polished.
I'm quite interested in the user namespaces feature, but I can't come up with any use cases for myself other than, say, (sort of) sandboxing applications. Can anyone explain it a bit better? The LWN article is a bit heavy going.
> It allows for applications with no capabilities to use multiple uids and to implement privilege separation.

I certainly see user namespaces like this as having the potential to make Linux systems more secure.
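For the curious, here's roughly what it looks like from userspace. This is a minimal sketch, assuming a kernel built with CONFIG_USER_NS, with error handling mostly trimmed: an unprivileged parent clones a child into a new user namespace and maps uid 0 in there back to its own uid.

    /* Minimal user-namespace sketch: no capabilities needed. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int pipefd[2];  /* parent signals child once uid_map is set */

    static int child(void *arg)
    {
        char c;
        read(pipefd[0], &c, 1);  /* wait until the mapping is written */
        /* Inside the namespace we are "root", but only over resources
           the namespace owns. */
        printf("in namespace: uid=%d gid=%d\n", getuid(), getgid());
        return 0;
    }

    int main(void)
    {
        static char stack[1024 * 1024];
        pipe(pipefd);

        /* CLONE_NEWUSER requires no capabilities at all -- that's the point. */
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWUSER | SIGCHLD, NULL);
        if (pid < 0) { perror("clone"); return 1; }

        /* Map uid 0 inside the namespace to our real (unprivileged) uid;
           a single-line self-mapping is allowed without privilege. */
        char path[64], map[64];
        snprintf(path, sizeof(path), "/proc/%d/uid_map", pid);
        snprintf(map, sizeof(map), "0 %d 1", getuid());
        int fd = open(path, O_WRONLY);
        if (fd >= 0) { write(fd, map, strlen(map)); close(fd); }

        write(pipefd[1], "x", 1);  /* let the child proceed */
        waitpid(pid, NULL, 0);
        return 0;
    }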
I don't know enough about the feature to say if it is applicable directly, but Android uses uid/gid to create sandboxes on a per-publisher basis, so that installed apps do not have access to other publishers' apps' files.
This can be used in two ways: application security in general would benefit from the practice, and if you are building a Linux that works as a traditional "desktop Linux" while also running the Android environment, user namespaces let Android's app sandboxing remain secure, which it wouldn't if you did the uid separation merely by convention.
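To make that concrete, classic uid-based sandboxing looks something like the sketch below: drop to a dedicated per-app uid/gid before exec'ing the app, and the kernel's normal permission checks do the rest. The uid/gid values and the binary path here are made up for illustration.

    /* Rough sketch of Android-style uid sandboxing: each app runs
       under its own uid, so DAC keeps apps out of each other's files. */
    #include <grp.h>
    #include <stdio.h>
    #include <unistd.h>

    #define APP_UID 10042  /* hypothetical per-app uid */
    #define APP_GID 10042  /* hypothetical per-app gid */

    int main(void)
    {
        /* Order matters: drop supplementary groups and gid while we
           still have the privilege to do so, then the uid last. */
        if (setgroups(0, NULL) != 0 ||
            setgid(APP_GID) != 0 ||
            setuid(APP_UID) != 0) {
            perror("drop privileges");
            return 1;
        }
        /* From here on we are confined to files APP_UID may access. */
        execl("/opt/app/bin/app", "app", (char *)NULL);
        perror("execl");
        return 1;
    }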
It looks like the kernelnewbies.org page is getting stampeded and is unreachable. Too bad, they usually have a nice high-level overview of the new kernel features and changes.
Am I the only one who feels that kernel releases have become more frequent recently? Or have they changed the version numbering, like Firefox?
About a year or so ago, it was something like 2.6.x. Then it jumped to 3.0, and now it is at 3.9.
Kernel releases have been getting a bit more frequent. A couple of years ago they were about 80 days apart, now it's closer to 70. A change, but not a huge one.
Linux jumped from 2.6.39 to 3.0 because Linus felt the numbers were getting too big. He also decided that, since the middle digit wasn't being incremented anyway, he'd drop it.
Yeah, some software assumed that there must be three components in version numbers, so 3.X kernels were sometimes reported as 2.6.(X+40), which means 2.6.42 is actually 3.2.
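The compatibility mapping is just a constant offset. Illustrative only (the function name is made up):

    #include <stdio.h>

    /* Map a 3.X version to the fake 2.6.(X+40) number that
       three-component version parsers expected. */
    static void fake_2_6(int minor)
    {
        printf("3.%d -> 2.6.%d\n", minor, minor + 40);
    }

    int main(void)
    {
        fake_2_6(2);  /* prints "3.2 -> 2.6.42" */
        return 0;
    }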
There's no unstable 2.7 anymore; that's mainly what it's about. The bump to 3.0 happened basically to signal that the old scheme is gone.
EDIT: For those who don't remember the 2.x scheme: when x was an odd number the series was unstable development, and when it was even it was stable and basically what distributions shipped.
Pull requests this late in the -rc cycle should contain only important bugfixes; other changes should normally wait for the next kernel release.
For example, Linus ignored some pull requests sent for 3.6-rc2 that didn't really belong there:
https://lkml.org/lkml/2012/8/16/577
I use it full time on my desktop and my laptop. It glitches fairly often, but nothing unrecoverable. You'll mostly run into issues if you run out of space. Also, there are some weird side effects with improper shutdown that you should note. Btrfs tends to leave some stuff in my logs too, but nothing that actually lost data; normally I see the error after a brief lockup of the system.
On that note about running out of space, an interesting thing to keep in mind with btrfs is that, as a snapshotting filesystem, it doesn't let you free up space simply by unlinking files: they're still present, and still taking up space, in any previous snapshot.
There are ways to free up space for real, but from what I understand you have to do it manually with some tools.
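You can see the effect with a sketch like this (the paths are hypothetical, and it assumes a snapshot already references the file):

    /* On a snapshotted btrfs subvolume, unlink() removes the name but
       the extents stay referenced by the snapshot, so free space
       barely moves. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/statvfs.h>

    static unsigned long long free_bytes(const char *path)
    {
        struct statvfs s;
        if (statvfs(path, &s) != 0) return 0;
        return (unsigned long long)s.f_bavail * s.f_frsize;
    }

    int main(void)
    {
        const char *mnt = "/mnt/btrfs";            /* hypothetical mount */
        const char *victim = "/mnt/btrfs/big.img"; /* hypothetical file */

        unsigned long long before = free_bytes(mnt);
        if (unlink(victim) != 0) { perror("unlink"); return 1; }
        sync();
        unsigned long long after = free_bytes(mnt);

        /* If a snapshot still references big.img, 'after' stays close
           to 'before'; the space only comes back once the snapshot is
           removed (e.g. with `btrfs subvolume delete`). */
        printf("free before: %llu\nfree after:  %llu\n", before, after);
        return 0;
    }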
I'm using it on two machines (both running Arch Linux, so they're typically only a week or so behind kernel.org kernel releases), and I have not noticed data loss or corruption, even though I crash these machines from time to time (with btrfs-unrelated things).
The ability to make daily snapshots effortlessly is a godsend on my tinkering computer (so I can roll back whatever stupid idea I came up with), and on a slow laptop with an even slower 2.5" PATA hard disk I at least get the illusion of a performance improvement from LZO compression causing fewer disk accesses.
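For reference, the compression is just a mount option; from C it's a one-liner around mount(2). The device and mount point below are made up, and it needs root:

    /* Equivalent to `mount -o compress=lzo /dev/sda2 /mnt`. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/sda2", "/mnt", "btrfs", 0, "compress=lzo") != 0) {
            perror("mount");
            return 1;
        }
        printf("mounted with compress=lzo\n");
        return 0;
    }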
I've read a little bit of the material, but I don't get the feeling that they looked beyond block-level RAID5/6 (which btrfs/raid5, as far as I understand, is not).
[1] mentions that RAID5 never checks parity on read, which is not an issue with ZFS/zpool (and, I think, the proposed btrfs/raid5) because they verify the checksum of every block they read. The creeping multiplication of bit errors during a parity rebuild is a non-issue for the same reason. [2] only mentions ZFS, but attributes to copy-on-write the remedy against distribution of bad data over the RAID volume.
And, yes, I see this silent corruption problem, and the inability to identify the "wrong" drive in case of a parity mismatch, as a big deficit of block-level RAID, and I share their view on the abysmally bad performance of some degraded/rebuilding arrays. But that is mainly what the filesystem-integrated redundancy mechanisms try to address with their "blatant layering violation".
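To illustrate why per-block checksums solve the "which drive is wrong?" problem, here's a toy model, not real btrfs/ZFS code (FNV-1a stands in for crc32c): the read path detects exactly which block fails verification and rebuilds it from XOR parity, whereas block-level RAID5 would happily return the corrupt data.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 16
    #define NDATA 3

    /* Toy checksum (FNV-1a); real filesystems use crc32c or stronger. */
    static uint32_t cksum(const uint8_t *b)
    {
        uint32_t h = 2166136261u;
        for (int i = 0; i < BLOCK; i++) h = (h ^ b[i]) * 16777619u;
        return h;
    }

    int main(void)
    {
        uint8_t data[NDATA][BLOCK], parity[BLOCK] = {0};
        uint32_t stored[NDATA];

        /* Write path: checksum each block and compute XOR parity. */
        for (int d = 0; d < NDATA; d++) {
            memset(data[d], 'a' + d, BLOCK);
            stored[d] = cksum(data[d]);
            for (int i = 0; i < BLOCK; i++) parity[i] ^= data[d][i];
        }

        data[1][3] ^= 0x40;  /* simulate silent corruption on drive 1 */

        /* Read path: verify every block; rebuild any that fail. */
        for (int d = 0; d < NDATA; d++) {
            if (cksum(data[d]) != stored[d]) {
                printf("block %d failed checksum, rebuilding from parity\n", d);
                memcpy(data[d], parity, BLOCK);
                for (int o = 0; o < NDATA; o++)
                    if (o != d)
                        for (int i = 0; i < BLOCK; i++)
                            data[d][i] ^= data[o][i];
            }
        }

        for (int d = 0; d < NDATA; d++)
            printf("block %d checksum %s\n", d,
                   cksum(data[d]) == stored[d] ? "ok" : "BAD");
        return 0;
    }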
Yeah, I don't think they cover raidz or raidz2. raidz is not exactly raid5 in the classic sense because, as you mention, ZFS does checksum verification.
I wasn't sure what btrfs (or the parent) meant by 'raid5'. I think zfs was wise to call it something else.
Reviews of Apple's implementation were quite positive. Has anybody tried the Linux version?