
> It is optimized for Flash/SSD storage and features strong encryption, copy-on-write metadata, space sharing, cloning for files and directories, snapshots, fast directory sizing, atomic safe-save primitives, and improved file system fundamentals.

\o/ Hallelujah, something modern!



I was excited too, until I read the limitations section: it can't be used as the startup disk and doesn't work with Time Machine.

Hopefully this will change in the next macOS release.


Sure, this is a developer preview. Final release in 2017 will surely make this FS the default.


That's what people were hoping for with ReFS from Microsoft, which was released years ago, but that still hasn't happened.


btrfs was available in the Linux kernel in 2009, but didn't see production release in the tinkerer-friendly distros until 2012 and in an enterprise distro until 2015. These things take time: filesystems need to be absolutely bulletproof, especially in the consumer space where (unlike Linux) most users will have no idea what to do if something goes wrong. I'd say Microsoft is still on schedule.


Speaking of which, is there any other good FS to use for desktop Linux (on an SSD on ArchLinux) or is Ext4 still the recommended standard?


Just yesterday, I did my first Linux installation with ZFS as the root/boot filesystem (Ubuntu 16.04). This is after using it as the default filesystem on my FreeBSD systems for several years, and being very happy with it.

I've used Btrfs many times since the start, and been burned by dataloss-causing bugs each and every time, so I'm quite cautious about using or recommending it. I still have concerns about its stability and production-readiness. If in doubt, I'd stick with ext4.


Depends on your needs.

Stability, reliability: ext4/XFS

CoW, snapshots, multi-drive FS: ZFS/btrfs

SSD speed, longevity: F2FS


I've had F2FS on an Android tablet for many years; it resurrected the device. However, I'm running Debian on my laptop and I'm scared to try F2FS on /, because I get warnings about it not being fully supported "yet". I would love to have an SSD-optimized FS on Linux. Since AAPL will open-source the release version, is it conceivable that APFS could replace ext4 as the default Linux FS?


Do you think Apple will release it with GPL-compatible license?


Most Apple OSS stuff is released under Apache (Swift 2.2 is Apache 2), so probably?


I think it could be mentioned that most of the features regarding CoW and snapshots could be provided by LVM these days.


Without getting stability and reliability correct don't bother with the other features. What good is it if the filesystem handles oodles of drives if none of them have any of the data you put on them?


I use btrfs as my daily driver.


XFS, ZFS, btrfs


Apple is far more willing than Microsoft to switch to newer technologies.


... and Microsoft is far more willing than Apple to put effort into backwards compatibility.


I don't know about that. Apple switched processor architectures twice, and both times software written for the old arch ran on the new one. And when they replaced their entire operating system, they not only made it so you could still program against the old API—just recompile and go—they also made it possible to run the old OS inside the new one so you could still run apps that hadn't yet been recompiled.


And before that, when Apple made the 68k -> PPC transition in the mid-90s, they ran the whole system under a crazy, bizarre emulator that allowed for function-level interworking - 68k code could directly call PowerPC code, and vice versa. Early PowerPC systems were in fact running an OS that was mostly composed of 68k code; it wasn't until 1998 or 1999 (around the release of the iMac) that most of the Toolbox was ported to PowerPC.


In the past, nobody did a better job of backwards compatibility than Microsoft.

Lately, Microsoft is showing that they aren't afraid to break things in the name of progress. If W10 is indeed the last version of Windows, maybe that's okay.


But wasn't that at the expense of clarity for new developers? I remember a horrible graduation exam where I had to code in Visual Studio.

The harshest part was not the coding or the UI; it was determining which versions of the different Windows APIs had a remote chance of working together smoothly. (It involved DB drivers and data grids.)


Perhaps. But I suspect there's a lot more extending and maintaining existing software than writing new software. For the former, backwards compatibility makes a huge difference.


I'm not saying Microsoft's approach is bad, just pointing out that there's a much higher chance of rapid adoption for this new filesystem.


Also, NTFS being a much better filesystem than HFS+, there was a lot less incentive to switch.


Too bad we still can't use NTFS flash drives on mac.

Although ExFAT is at least somewhat promising.


One of the benefits of being vertically integrated.


OTOH Microsoft has a terrible track record of overpromising and underdelivering on their next-gen file system. I give Apple the benefit of the doubt here. It is worded so that the limitations, for the better part, clearly sound related to this being a preview release.


Apple has (relatively; you can replace some hard drives) the most control over its hardware, so at least from that perspective it's easier for them.


Control over hardware doesn't really buy you anything here. Just about any hardware can use any filesystem with, in the worst case, the requirement that you have a small boot partition using the legacy filesystem.


Interestingly with SSD storage devices, control of the hardware can help a lot more as it can become possible to categorize, fully explore and if needed, ensure a particular behavior of commands like TRIM. Other filesystems have the unenviable task of running on any random piece of storage you throw at it, including things where the firmware straight up lies, or the hardware delays non-volatility past the point the filesystem assumes (potentially producing data loss in a crash) or similar types of problems.

Anyway. Overall, I think it's safe to say hardware control doesn't make most of filesystem development much simpler or easier. But there's a few interesting places it arguably does!


That doesn't really change anything about the filesystem design. A storage device can fail to write data it claims to have because of damage as well as design defects. When that happens, a reliable filesystem will detect it and a less reliable filesystem will catch on fire.

It also doesn't help to control 100% of the built-in storage if anybody can still plug in $GENERIC_USB_MASS_STORAGE_DEVICE and expect to use the same filesystem.


Many filesystems exist that do not run on a "plain" read/write block device, because storage based on flash is more complicated than the old random-sector-access magnetic hard drives. See for example UBIFS and JFFS2 on Linux.

Having full and direct low-level control of on-board SSDs could very well be advantageous for performance and longevity of the flash on modern macbooks. Things like combining TRIM with low-level wear leveling etc.


Taking advantage of the differences between flash and spinning rust only requires that you know which one you're running on.

Moving the wear leveling code into the OS where the filesystem can see it is an interesting idea but why aren't we doing that for all SSDs and operating systems then?


(Raw) flash and spinning rust are fundamentally different: spinning rust drives provide READ SECTOR and WRITE SECTOR primitives, while raw flash provides READ SECTOR, ERASE (large) BLOCK, and WRITE (small) SECTOR primitives. Filesystems like UBIFS do try to move the wear-leveling code into the OS. But the big players, Windows' NTFS and Mac's HFS, were originally designed for the spinning rust primitives, so vendors of flash storage (SSD drives, USB sticks, etc.) had to provide a translation layer that emulates the spinning rust primitives on top of the NAND flash primitives.

I'm sure various NAND flash vendors have different characteristics / spare blocks / secret sauce / defects that are masked by proprietary firmware, and they probably see a significant business advantage in keeping those secret. Even building smarts into the firmware of a USB stick, where FAT is a likely filesystem, about how a FAT filesystem is likely to rewrite the file allocation table much more heavily than file contents, could prove an advantage. So being the single vendor behind the entire stack, from the raw NAND flash memory to the motherboard it's soldered onto to the OS, is likely very advantageous.
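That erase-before-write constraint is exactly what makes the translation layer necessary. A toy Python sketch of the idea (all names here are illustrative; real FTLs live in proprietary firmware and also handle garbage collection, wear leveling, and bad blocks):

```python
# Toy flash translation layer (FTL): raw NAND pages can only be
# programmed after their (much larger) block has been erased, so
# "rewriting a sector" is emulated by writing to a fresh page and
# remapping the logical sector. Illustrative sketch only.

PAGES_PER_BLOCK = 4
ERASED = None

class ToyFTL:
    def __init__(self, num_blocks):
        self.blocks = [[ERASED] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.mapping = {}            # logical sector -> (block, page)
        self.next_free = (0, 0)

    def _advance(self):
        b, p = self.next_free
        p += 1
        if p == PAGES_PER_BLOCK:
            b, p = b + 1, 0
        self.next_free = (b, p)

    def write_sector(self, sector, data):
        # Never overwrite in place: program an erased page, then remap.
        b, p = self.next_free
        self.blocks[b][p] = data
        self.mapping[sector] = (b, p)   # old page becomes stale garbage
        self._advance()

    def read_sector(self, sector):
        b, p = self.mapping[sector]
        return self.blocks[b][p]

ftl = ToyFTL(num_blocks=8)
ftl.write_sector(0, b"v1")
ftl.write_sector(0, b"v2")   # the rewrite lands on a new physical page
assert ftl.read_sector(0) == b"v2"
assert ftl.blocks[0][0] == b"v1"   # stale copy lingers until erased
```

A real FTL eventually erases blocks that contain only stale pages and reclaims them, which is where TRIM helps: it tells the firmware which pages the filesystem no longer cares about.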


They have their secret sauce so that legacy software can pretend the SSD is spinning rust. Let them.

Why shouldn't we also demand standard low level primitives so that every OS can do the thing you're describing?


Of course a standard would be nice, but good luck getting everyone to agree on one before the end of the century :)


Already implemented in faster DSP from what I gather... http://arstechnica.com/apple/2011/12/apple-lays-down-half-a-...


Apple's EFI firmware has an HFS driver built into it. The way today's macOS boots is that the firmware reads the bootloader off the boot partition (created, these days, on Core Storage installations), and the bootloader (more correctly, OSLoader) is what enables the firmware pre-boot environment to read Core Storage (encrypted or not), and thus find and load the kernel and kext cache and then boot.


How can it be the "default" when you can't use it on a Fusion Drive or any system volume?


This is a developer release. It's hardly likely that Apple is sinking dev resources into evolving a new OS filesystem without planning on the bootloader and backup functionality also being in place for the final release.


Copy-on-write and snapshots are excellent building blocks for Time Machine, so much so, in fact, that when it was announced, I suspected it was because of ZFS (which, at the time, was being considered for OS X). It's very likely TM will be adapted to work on it (with about 20 lines of code).


It's long overdue, but having spent months developing an implementation of bootable snapshots on OS X that works with HFS+ (http://macdaddy.io/mac-backup-software/), this kind of stings.


I'm not sure about snapshots, TM is made to work by just copying the directory structure over, and each backup is a fully functioning hierarchy. But cloning is definitely a big deal.


Time Machine uses hard links to create a duplicate directory structure without duplicating all the files themselves; only changed files need to be copied.

As I recall, HFS+ was modified to support directory hard links, which are uncommon in the Unix world, explicitly to support this feature.

TM also maintains a folder called /.MobileBackups to store temporary backups while your backup drive isn't connected. OS X also maintains /.fseventsd, a log of file system operations that TM can use to perform the next incremental, instead of having to compare each file for modifications.
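The hard-link scheme described here can be sketched roughly as follows. This is a simplification: the helper name and the size/mtime change heuristic are made up for illustration, and Time Machine's real logic also consults the fseventsd log rather than walking the whole tree.

```python
# Sketch of a hard-link-based incremental backup: unchanged files are
# hard-linked to the previous snapshot (sharing the inode and disk
# blocks), changed or new files are actually copied.
import os
import shutil

def incremental_backup(src, prev_backup, new_backup):
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(new_backup, rel), exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            p = os.path.join(prev_backup, rel, name)
            d = os.path.join(new_backup, rel, name)
            st = os.stat(s)
            if os.path.exists(p):
                pst = os.stat(p)
                # Crude "unchanged" heuristic: same size and mtime.
                if (pst.st_size, pst.st_mtime_ns) == (st.st_size, st.st_mtime_ns):
                    os.link(p, d)       # unchanged: share the inode
                    continue
            shutil.copy2(s, d)          # new or changed: real copy
```

Each backup directory is a complete, browsable hierarchy, but disk usage only grows by the changed files plus the directory entries.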


it doesn't "just" copy the directory structure over each time, it creates the structure for each backup, any files that are changed get copied in, ones that haven't changed are just hardlinked to the existing copies.


A snapshot is a copy of the file (and its blocks) at a given point in time. Subsequent writes to it will happen to new blocks, leaving the ones connected to the snapshot undisturbed.
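A toy illustration of that block-level behavior (purely conceptual, not any real filesystem's on-disk format):

```python
# Conceptual copy-on-write snapshot: a snapshot just pins the current
# list of block pointers. Later writes allocate new blocks and repoint
# the live file, leaving the snapshot's blocks undisturbed.

class CowFile:
    def __init__(self, data_blocks):
        self.store = {}                 # block id -> contents
        self.blocks = []                # live file: ordered block ids
        for b in data_blocks:
            bid = len(self.store)
            self.store[bid] = b
            self.blocks.append(bid)

    def snapshot(self):
        return list(self.blocks)        # only the pointers are copied

    def write(self, index, data):
        bid = len(self.store)           # allocate a fresh block...
        self.store[bid] = data
        self.blocks[index] = bid        # ...and repoint; old block kept

    def read(self, blocks=None):
        return [self.store[b] for b in (blocks or self.blocks)]

f = CowFile([b"aaa", b"bbb"])
snap = f.snapshot()
f.write(0, b"AAA")
assert f.read() == [b"AAA", b"bbb"]        # live file sees the write
assert f.read(snap) == [b"aaa", b"bbb"]    # snapshot is undisturbed
```

Taking the snapshot is O(metadata), not O(data), which is why CoW filesystems can snapshot huge volumes instantly.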


Well, consistency is important too in backups. So TM will probably make a snapshot and do the backup from there. Avoiding moved files from later dirs to already processed ones etc.


"Can't be used as the startup disk" is not necessarily a strong limitation; with FileVault enabled, you start up from your Recovery partition anyway. Even if they lose FileVault, I would guess they'll keep the Recovery partition setup (since they've invested in it a bit as a pseudo-BIOS, for changing things like SIP.) So that image can stay HFS and hold your boot kernel, while "Macintosh HD" can just be APFS. Sort of like Linux with /boot on FAT32 and / in LVM.


Linux /boot tends to be on ext3 or ext4 on most distributions. Recently it's XFS on the server flavor of Fedora, CentOS, and RHEL. For openSUSE the default is Btrfs, /boot is just a directory.

The bootloader/bootmanager is what determines what fs choices you have for /boot. GRUB2 reads anything, including ZFS, Btrfs, LUKS, even md/mdadm raid5/6 and even if it's degraded, and conventional LVM (not thinp stuff or the md raid support).


/boot on FAT32 is mostly an artifact of UEFI these days. When I set up BIOS-based systems, I usually had /boot on ext2.


Or even ext4. But I think the parent's point was that you'd keep your boot partition out of LVM.


Those limitations are obviously because it's in development.


The irony: a case-insensitive FS on case-sensitive "macOS".


> Filenames are currently case-sensitive only.


Will they still deploy it case-insensitive?


Presumably as with today, you'll have the option. I don't have a strong opinion on case sensitivity of file names, but I suspect they'll keep it case insensitive by default. I think for the average non-technical user that two files, "MyFile.txt" and "myfile.txt", being different could lead to some confusion, and Apple historically has apparently considered that confusion unacceptable.


The average user is also confused by "MyFile.txt" and " MyFile.txt" being different, or "Proposal II" and "Proposal 2" being different, but filesystems aren't usually built around that. I don't think case sensitivity is special enough to get that sort of treatment.


I believe the problem is also present for a large amount of third party software, making the move to case sensitive drives pretty hard to do:

[0] https://helpx.adobe.com/creative-suite/kb/error-case-sensiti... [1] http://apple.stackexchange.com/questions/192185/os-x-case-se... [2] http://dcatteeu.github.io/article/2015/12/31/case-sensitive-...


I've noticed some bugs with case-sensitivity recently in Ruby, of all things.


More problematic is that many case insensitive hard drives would be copied into new machines and there would be millions of conflicts. Some utility would have to sit there and annoy people by asking them to make decisions.
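Such a migration utility would essentially have to group names that collide under case folding before copying; a minimal sketch (the function name is made up):

```python
# Detect filenames that coexist on a case-sensitive volume but would
# collide on a case-insensitive one, by grouping under casefold().
from collections import defaultdict

def case_collisions(names):
    groups = defaultdict(list)
    for name in names:
        groups[name.casefold()].append(name)
    return [g for g in groups.values() if len(g) > 1]

conflicts = case_collisions(["MyFile.txt", "myfile.txt", "notes.md"])
# conflicts holds one group: ["MyFile.txt", "myfile.txt"]
```

A real tool would then have to ask the user which name wins for each group, which is exactly the annoyance the parent describes.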


I can see why there would be conflicts going case sensitive -> case insensitive, but I can't see why there would be conflicts going the other way. Am I missing something?


If you have a file named "MyFile.txt" and another system is looking for "myfile.txt", then it won't be found, and Apple will not let you rename it because it thinks the rename is a no-op. That's frustrating as hell.


Apple's software (Finder, mv) lets you rename these. But it's true that some tools (I think git) get confused here.


Git init / clone on case-insensitive HFS+ sets `git config core.ignorecase true`, which can lead to confusing behaviour where it ignores a change in the case of a filename.

> The default is false, except git-clone(1) or git-init(1) will probe and set core.ignoreCase true if appropriate when the repository is created.

https://www.kernel.org/pub/software/scm/git/docs/git-config....


I think the parent meant the other way around too.

However, the transition between the case insensitive and case sensitive filesystems isn't going to happen overnight. People will be copying files around both ways for quite some time, so the insensitive -> sensitive case is still going to be a concern.


I think you have it backwards. If you try to expand an archive with FOO.TXT and foo.txt, what should happen if you're writing to a case insensitive file system?

    $ touch HI
    $ touch hi
    $ ls
    HI
So that's disturbing. Another problem is that every piece of software you can think of will be comparing two files case-insensitively. Almost weekly I get burned by this.

    $ touch HI
    $ test -f hi && echo ok
    ok


That's not convincing me that I have it backwards. I was responding to this point in the parent comment:

> many case insensitive hard drives would be copied into new machines and there would be millions of conflicts

I still don't see where you get a conflict copying the contents of a case-insensitive file system to a case-sensitive one.


> I still don't see where you get a conflict copying the contents of a case-insensitive file system to a case-sensitive one.

Because some apps create MyFile.txt and expect to be able to access it later by myfile.txt. Adobe's applications, for example.


I had no idea. That's horrifying.


Open up the various folders of Adobe's software (on Windows). The DLLs are a mish-mash of all lowercase and upper-lower mixes. Heck, open up System32; the DLLs there are definitely not case-sensitive capable (`kbd*.dll` being one example). In fact, I bet you there's at least one program on your computer that accesses the Program Files using `C:\PROGRAM FILES (X86)` instead of `%programfiles(x86)%`. In fact, even environment variables aren't case-sensitive.


For the end-user they could prevent duplicate different-cased file names in the UI layer (the Finder), instead of the file system. That would be a more appropriate place for it anyway.


And then some code using Unix APIs would create two files whose names differ only in case and the UI layer would choke. This is why spray-on usability is bad.


The UI already has to deal with that anyway because it supports case sensitive volumes. What exactly constitutes case is locale specific, it differs from one user to the next, that logic would be messy to have inside the file system.


That would likely be a hassle because you'd have to be consistent across all programs that ever save or read a file. As a result, it has to be an OS-level thing at least, if not a file-system-level one. I don't have a huge preference (case sensitive or insensitive), and I don't think it's worth a religious war, but whatever the choice is, it should be completely obvious, both as a coder and as a general user, what convention the system is using.


This is a horrible solution.


Steam relies on the filesystem being insensitive.


Steam on Mac does, or at least did last time I tried to use it on a case-sensitive partition. It's not that Steam inherently needs case-insensitivity; it's that the main app mixes up the case of some filenames relative to what's on disk, so without a case-insensitive FS it cannot find some files. Stupid problem, really.


No it doesn't. How would it work on Linux if it did?


Both you and cuddlybacon are right. A long time ago, they worked under the assumption that the FS is case-sensitive, and all the games I installed back then had title-cased folder names. Gradually, Valve stopped caring about this, and my games stopped working. I had to go in and manually change some game folder names to lower-case. It then kept some small files under SteamApps and the downloaded games under steamapps. They have fixed that now. Now I have both CONFIG and config in my Steam folder.

How would it work? By a combination of magic and "we can't be bothered; the users should figure out something".


Really? What filesystem do they use in SteamOS?


SteamOS is Linux, based on Ubuntu. So I assume it is ext4, ext3, or xfs.

Btrfs is not stable enough IMO for something like SteamOS.


SteamOS is based on an older Debian release and uses ext4.


I know Unreal Engine 4 does, but Steam does not.


Does Adobe still?


Yes they do.


The fs should be case sensitive. If they want to enforce insensitivity it should be done with APIs for programs including the Finder.


According to their documented "Current Limitations" (https://developer.apple.com/library/prerelease/content/docum...):

> Case Sensitivity: Filenames are currently case-sensitive only.

First thought: they have seen the light!

A moment later: wait...they consider this a "limitation", and it's only "currently" the case. So maybe they're going to perpetuate the brain-damage anyway.

Sigh.


There's plenty of code in the wild that assumes case-insensitivity since that's been the case since forever.

Backwards compatibility is going to end up trumping whatever ideological purity case sensitivity represents.


Just as with HFS+ and ZFS, case-insensitivity will be an option.


Ok, what is the argument against case preserving but insensitive file systems?


It pushes a localization and UI problem down into the filesystem layer. Case-insensitivity is pretty easy for US-ASCII, but in release 2 of your filesystem you realize you didn't properly handle LATIN WIDE characters, the Cyrillic alphabet, etc. In release 7 of your FS, you get case sensitivity correct for Klingon, but some popular video game relied on everything except Klingon being case-insensitive on your FS, and now all of its users are complaining.

How do you handle the case where the only difference between two file names is that one uses Latin wide characters and the other uses ordinary Latin characters? This one bit me when writing a CAPTCHA system back in 2004. (Long story, but existing systems wouldn't work between a credit card processing server that had to validate in Perl and a web form that had to be written in PHP, where the two systems couldn't share a file system. It's simple enough to do using HMAC and a shared key between the two servers, but for some reason, none of the available solutions did it.)

I noticed that Japanese users had a disturbingly high CAPTCHA failure rate. It turns out that many East Asian languages have characters that are roughly square, while most Latin characters are roughly half as wide as they are tall, so mixing the two looks odd. So Unicode has a whole set of Latin wide characters that are the same as the Latin characters we use in English, except they're roughly square, so they look better when mixed with Unified Han and other characters. Apparently most Japanese web browsers (or maybe it's an OS-level keyboard layout setting) will by default emit Latin wide Unicode code points when the user types Latin characters.

Whether or not to normalize wide Latin characters to Latin characters is a highly context-dependent choice. In my case, it was definitely necessary, but in other cases it will throw out necessary information and make documents look ugly/odd. Good arguments can be made both ways about how a case-insensitive filesystem should handle Latin wide characters, and that's a relatively simple case.

Most users don't type names of existing files, exclusively accessing files through menus, file pickers, and the OS's graphical command shell (Finder/Explorer). So, if you want to avoid users getting confused over similar file names, that can be handled at file creation time (as well as more subtle issues that are actually more likely to confuse users, such as file names that have two consecutive spaces, etc., etc.) via UI improvements.
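The Latin wide issue described in this comment is easy to demonstrate in Python: plain case folding does not cross the width boundary, while NFKC compatibility normalization maps fullwidth forms to their ASCII equivalents.

```python
# Fullwidth Latin vs. ASCII Latin: casefold() alone treats them as
# different names; NFKC normalization erases the width difference.
import unicodedata

wide = "ＲＥＡＤＭＥ"          # fullwidth Latin capitals (U+FF32 etc.)
ascii_name = "readme"

# Case folding keeps the fullwidth code points, so the names differ.
assert wide.casefold() != ascii_name

# NFKC maps fullwidth forms to ASCII, then folding makes them equal.
normalized = unicodedata.normalize("NFKC", wide).casefold()
assert normalized == ascii_name
```

Whether a filesystem should apply NFKC before comparing names is exactly the context-dependent judgment call the parent describes: it would merge names users consider distinct.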



