Hacker News

I was excited until I read the limitations section. Can't be used as the startup disk and doesn't work with Time Machine.

Hopefully this will change in the next macOS release.



Sure, this is a developer preview. Final release in 2017 will surely make this FS the default.


That's what people were hoping for with ReFS from Microsoft, which was released years ago, but that still hasn't happened.


btrfs was available in the Linux kernel in 2009, but didn't see production release in the tinkerer-friendly distros until 2012 and in an enterprise distro until 2015. These things take time: filesystems need to be absolutely bulletproof, especially in the consumer space where (unlike Linux) most users will have no idea what to do if something goes wrong. I'd say Microsoft is still on schedule.


Speaking of which, is there any other good FS to use for desktop Linux (on an SSD on ArchLinux) or is Ext4 still the recommended standard?


Just yesterday, I did my first Linux installation with ZFS as the root/boot filesystem (Ubuntu 16.04). This is after using it as the default filesystem on my FreeBSD systems for several years, and being very happy with it.

I've used Btrfs many times since the start, and been burned by dataloss-causing bugs each and every time, so I'm quite cautious about using or recommending it. I still have concerns about its stability and production-readiness. If in doubt, I'd stick with ext4.


Depends on your needs.

Stability, reliability: ext4/XFS

CoW, snapshots, multi-drive FS: ZFS/btrfs

SSD speed, longevity: F2FS


I've had F2FS on an Android tablet for many years; it resurrected the device. However, I'm running Debian on my laptop and I'm scared to try F2FS on / because I get warnings about it not being fully supported "yet". I would love to have an SSD-optimized FS on Linux. Since AAPL will open source the release version, is it conceivable that APFS could replace ext4 as the default Linux FS?


Do you think Apple will release it with GPL-compatible license?


Most Apple OSS stuff is released under Apache (Swift 2.2 is Apache 2), so probably?


I think it could be mentioned that most of the features regarding CoW and snapshots could be provided by LVM these days.
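For illustration, an LVM copy-on-write snapshot looks roughly like this. This is a sketch, not a recipe: the volume group `vg0`, logical volume `root`, and all paths are hypothetical, and it assumes an existing LVM setup and root privileges.

```shell
# Hypothetical: a volume group "vg0" containing a logical volume "root".
# Create a copy-on-write snapshot with 1 GiB reserved for changed blocks:
lvcreate --size 1G --snapshot --name root-snap /dev/vg0/root

# The snapshot is a frozen, mountable view of the volume at that instant:
mkdir -p /mnt/snap
mount -o ro /dev/vg0/root-snap /mnt/snap
tar -C /mnt/snap -cf /backup/root.tar .

# Tear it down once the backup is done; only blocks that changed in the
# meantime ever consumed space in the snapshot.
umount /mnt/snap
lvremove -y /dev/vg0/root-snap
```

Note that classic LVM snapshots reserve fixed space up front and degrade if it fills, which is one reason people still reach for ZFS/btrfs when snapshots are central to the workflow.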


Without getting stability and reliability correct, don't bother with the other features. What good is it if the filesystem handles oodles of drives but none of them retain any of the data you put on them?


I use btrfs as my daily driver.


XFS, ZFS, btrfs


Apple is far more willing than Microsoft to switch to newer technologies.


... and Microsoft is far more willing than Apple to put effort into backwards compatibility.


I don't know about that. Apple switched processor architectures twice, and both times software written for the old arch ran on the new one. And when they replaced their entire operating system, they not only made it so you could still program against the old API—just recompile and go—they also made it possible to run the old OS inside the new one so you could still run apps that hadn't yet been recompiled.


And before that, when Apple made the 68k -> PPC transition in the mid-90s, they ran the whole system under a crazy, bizarre emulator that allowed for function-level interworking - 68k code could directly call PowerPC code, and vice versa. Early PowerPC systems were in fact running an OS that was mostly composed of 68k code; it wasn't until 1998 or 1999 (around the release of the iMac) that most of the Toolbox was ported to PowerPC.


In the past, nobody did a better job of backwards compatibility than Microsoft.

Lately, Microsoft is showing that they aren't afraid to break things in the name of progress. If W10 is indeed the last version of Windows, maybe that's okay.


But wasn't that at the expense of clarity for new developers? I remember a horrible graduation exam where I had to code in Visual Studio.

The harshest part was not the coding or the UI; it was determining which versions of the different Windows APIs had a remote chance of working together smoothly. (It involved DB drivers and data grids.)


Perhaps. But I suspect there's a lot more extending and maintaining existing software than writing new software. For the former, backwards compatibility makes a huge difference.


I'm not saying Microsoft's approach is bad, just pointing out that there's a much higher chance of rapid adoption for this new filesystem.


Also, NTFS being a much better filesystem than HFS+, there was a lot less incentive to switch.


Too bad we still can't write to NTFS flash drives on Macs.

Although ExFAT is at least somewhat promising.


One of the benefits of being vertically integrated.


OTOH, Microsoft has a terrible track record of overpromising and underdelivering on their next-gen file system. I give Apple the benefit of the doubt here: the wording suggests that, for the better part, the limitations are related to this being a preview release.


Apple has (relatively; you can replace some hard drives) the most control over its hardware, so at least from that perspective it's easier for them.


Control over hardware doesn't really buy you anything here. Just about any hardware can use any filesystem with, in the worst case, the requirement that you have a small boot partition using the legacy filesystem.


Interestingly, with SSD storage devices, control of the hardware can help a lot more, as it becomes possible to categorize, fully explore, and, if needed, ensure a particular behavior of commands like TRIM. Other filesystems have the unenviable task of running on any random piece of storage you throw at them, including devices where the firmware straight up lies, or where the hardware delays non-volatility past the point the filesystem assumes (potentially producing data loss in a crash), or similar problems.

Anyway. Overall, I think it's safe to say hardware control doesn't make most of filesystem development much simpler or easier. But there's a few interesting places it arguably does!


That doesn't really change anything about the filesystem design. A storage device can fail to write data it claims to have because of damage as well as design defects. When that happens, a reliable filesystem will detect it and a less reliable filesystem will catch on fire.

It also doesn't help to control 100% of the built-in storage if anybody can still plug in $GENERIC_USB_MASS_STORAGE_DEVICE and expect to use the same filesystem.


Many filesystems exist that do not run on a "plain" read/write block device, because storage based on flash is more complicated than the old random-sector-access magnetic hard drives. See for example UBIFS and JFFS2 on Linux.

Having full and direct low-level control of the on-board SSDs could very well be advantageous for the performance and longevity of the flash on modern MacBooks: things like combining TRIM with low-level wear leveling, etc.


Taking advantage of the differences between flash and spinning rust only requires that you know which one you're running on.

Moving the wear leveling code into the OS where the filesystem can see it is an interesting idea but why aren't we doing that for all SSDs and operating systems then?


(Raw) flash and spinning rust are fundamentally different: spinning rust drives provide READ SECTOR and WRITE SECTOR primitives, while raw flash provides READ SECTOR, ERASE (large) BLOCK, and WRITE (small) SECTOR primitives. Filesystems like UBIFS do try to move the wear-leveling code into the OS.

But the big players, like Windows' NTFS and the Mac's HFS, were originally designed for the spinning rust primitives, so I guess vendors of flash storage (SSD drives, USB sticks, etc.) had to provide a translation layer that emulates the spinning rust primitives on top of the NAND flash primitives. I'm sure various NAND flash vendors have different characteristics / spare blocks / secret sauce / defects that are masked by proprietary firmware, and they probably see a significant business advantage in keeping those secret.

Even building smarts into the firmware of USB sticks (where FAT is a likely fs) about how a FAT filesystem tends to rewrite the file allocation table far more heavily than file contents could prove an advantage. So being the single vendor behind the entire stack, from the raw NAND flash memory to the motherboard it's soldered onto to the OS, is likely very advantageous.


They have their secret sauce so that legacy software can pretend the SSD is spinning rust. Let them.

Why shouldn't we also demand standard low level primitives so that every OS can do the thing you're describing?


Of course a standard would be nice, but good luck getting everyone to agree on one before the end of the century :)


Already implemented in faster DSP from what I gather... http://arstechnica.com/apple/2011/12/apple-lays-down-half-a-...


Apple's EFI firmware has an HFS driver built into it. The way today's macOS boots is that the firmware reads the bootloader off the boot partition (which, these days, is created on Core Storage installations), and the bootloader (more correctly, OSLoader) is what enables the firmware pre-boot environment to read Core Storage (encrypted or not) and thus find and load the kernel and kext cache and then boot.


How can it be the "default" when you can't use it on a Fusion Drive or any system volume?


This is a developer release. It's hardly likely that Apple is sinking dev resources into evolving a new OS filesystem without planning on the bootloader and backup functionality also being in place for the final release.


Copy-on-write and snapshots are excellent building blocks for Time Machine; so much so, in fact, that when it was announced I suspected it was because of ZFS (which, at the time, was being considered for OS X). It's very likely TM will be adapted to work on it (with about 20 lines of code).
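To illustrate how little machinery a snapshot-based backup needs, here's a hedged sketch using ZFS. The pool/dataset names `tank/home` and `backup/home` are made up, and it assumes ZFS is installed with those pools already created:

```shell
# Take an atomic, near-instant snapshot of the user's data:
zfs snapshot tank/home@2016-06-13

# First backup: replicate the full snapshot to a backup pool:
zfs send tank/home@2016-06-13 | zfs receive backup/home

# Later backups ship only the blocks changed since the previous snapshot:
zfs snapshot tank/home@2016-06-14
zfs send -i tank/home@2016-06-13 tank/home@2016-06-14 | zfs receive backup/home
```

Because the snapshot is taken atomically, the backup is always internally consistent, which is something TM's hard-link approach can't guarantee on a live volume.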


It's long overdue, but having spent months developing an implementation of bootable snapshots on OS X that works with HFS+ (http://macdaddy.io/mac-backup-software/), this kind of stings.


I'm not sure about snapshots, TM is made to work by just copying the directory structure over, and each backup is a fully functioning hierarchy. But cloning is definitely a big deal.


Time Machine uses hard links to create a duplicate directory structure without duplicating all the files themselves; only changed files need to be copied.

As I recall, HFS+ was explicitly modified to support directory hard links, which are uncommon in the Unix world, in order to support this feature.

TM also maintains a folder called /.MobileBackups to store temporary backups while your backup drive isn't connected. OS X also maintains /.fseventsd, a log of file system operations that TM can use to perform the next incremental backup, instead of having to compare each file for modifications.


It doesn't "just" copy the directory structure over each time; it creates the structure for each backup, any files that have changed get copied in, and ones that haven't changed are just hard-linked to the existing copies.
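The hard-link scheme described above can be sketched with plain GNU coreutils. The paths here are made up; `cp -al` is the trick that most resembles TM's hard-linking (rsync's `--link-dest` does the same thing more efficiently for real backups):

```shell
# Sketch of Time-Machine-style hard-link backups (assumes GNU coreutils).
# Each "backup" is a full directory tree, but unchanged files are hard
# links to the previous backup's copies, so they cost no extra space.
set -e
work=$(mktemp -d)

mkdir "$work/live"
echo "v1" > "$work/live/report.txt"

# First backup: a plain full copy.
cp -a "$work/live" "$work/backup.0"

# Second backup: hard-link everything from the previous backup (cp -al),
# then copy in only the files that actually changed (none here).
cp -al "$work/backup.0" "$work/backup.1"

# Both backups are full, browsable trees, yet each file has one inode:
stat -c %i "$work/backup.0/report.txt" "$work/backup.1/report.txt"
```

The two `stat` lines print the same inode number, showing that the "two copies" of the unchanged file share a single on-disk file.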


A snapshot is a copy of the file (and its blocks) at a given point in time. Subsequent writes to it will happen to new blocks, leaving the ones connected to the snapshot undisturbed.


Well, consistency is important in backups too. So TM will probably take a snapshot and do the backup from there, avoiding issues like files being moved from not-yet-processed directories into already-processed ones, etc.


"Can't be used as the startup disk" is not necessarily a strong limitation; with FileVault enabled, you start up from your Recovery partition anyway. Even if they lose FileVault, I would guess they'll keep the Recovery partition setup (since they've invested in it a bit as a pseudo-BIOS, for changing things like SIP.) So that image can stay HFS and hold your boot kernel, while "Macintosh HD" can just be APFS. Sort of like Linux with /boot on FAT32 and / in LVM.


Linux /boot tends to be on ext3 or ext4 on most distributions. Recently it's XFS on the server flavor of Fedora, CentOS, and RHEL. For openSUSE the default is Btrfs, /boot is just a directory.

The bootloader/bootmanager is what determines what fs choices you have for /boot. GRUB2 reads anything, including ZFS, Btrfs, LUKS, even md/mdadm raid5/6 and even if it's degraded, and conventional LVM (not thinp stuff or the md raid support).


/boot on FAT32 is mostly an artifact of UEFI these days. When I set up BIOS-based systems, I usually had /boot on ext2.


Or even ext4. But I think the parent's point was that you'd keep your boot partition out of LVM.


Those limitations are obviously because it's in development.


The irony, a case-insensitive fs on case-sensitive "macOS"


> Filenames are currently case-sensitive only.



