Hacker News

A few details on the "standard linux support" part.

To remove the hard dependency on the AUFS patches, we moved it to an optional storage driver, and shipped a second driver which uses thin LVM snapshots (via libdevmapper) for copy-on-write. The big advantage of devicemapper/lvm, of course, is that it's part of the mainline kernel.

If your system supports AUFS, Docker will continue to use the AUFS driver. Otherwise it will pick lvm. Either way, the image format is preserved and all images on the docker index (http://index.docker.io) or any instance of the open-source registry will continue to work on all drivers.

It's pretty easy to develop new drivers, and there is a btrfs one on the way: https://github.com/shykes/docker/pull/65

If you want to hack your own driver, there are basically 4 methods you need to implement: Create, Get, Remove and Cleanup. Take a look at the graphdriver/ package: https://github.com/dotcloud/docker/tree/master/graphdriver

As usual don't hesitate to come ask questions on IRC! #docker/freenode for users, #docker-dev/freenode for aspiring contributors.



When will it be "production ready"? It seems that the schedule in the "Getting to Docker 1.0" post is outdated.


Does the LVM part mean that the partition/disk used to store/run Docker images has to use LVM?


No, docker sets up its own partition in a loop-mounted sparse file. There is a slight performance tradeoff, but in production you should assume the copy-on-write layer is too slow anyway and use volumes for your performance-critical directories: http://docs.docker.io/en/master/use/working_with_volumes/

In the future the drivers should support using actual lvm-enabled devices if you have them.


Any drawbacks to using lvm compared to AUFS? I'd like to switch to Debian from Ubuntu - will I notice any performance differences or other restrictions?


There is a slight performance overhead with lvm, but nothing dramatic. The other advantage of the AUFS driver is that it is more proven. If you have a way to get aufs on your system (and I believe debian does), my pragmatic ops advice would be to use that and give the other drivers some time to get hardened. Of course my advice as a maintainer is that all of them are equally awesome :)


How would you compare aufs/lvm performance with the upcoming btrfs support? That is the one I presume will have the most long-term continuity, since btrfs is becoming the default sooner rather than later (isn't it already for openSUSE/Fedora?)


btrfs will probably compare favorably to lvm, but it's just an educated guess at this point.


LVM would use more disk space, wouldn't it?


Is there a document on the relative costs and benefits of the two drivers?


Does this mean that docker will run on 32-bit systems?


Not yet, but that's coming very soon. We've been artificially limiting the number of architectures supported to limit the headache of managing cross-arch container images. We're reaching the point where that will no longer be a problem - meanwhile the Docker on Raspberry Pi community is growing restless and we want to make them happy :)


So the software in a container runs on the host OS, and there is no extra OS installed in the container?


There's a sleight of hand going on here.

The boundary between "kernel" and "libraries like libc" is very stable and doesn't change often. That means that often, the kernel distributed by Arch can work reasonably well in an Ubuntu system, and vice versa.

With that in mind: The "ubuntu" image ships the "ubuntu-glibc" and "ubuntu-bash" and "ubuntu-coreutils" and so on, but they continue to work on your Arch host because the system calls don't ever change.

You can't link (say) ubuntu-glibc into arch-bash though, which is why containers are built off of a "base ubuntu image" in the first place.


Ah, so only the host kernel is used, and I have to add (distribution-specific) libraries to the container?


Pretty much.

Containers come with their libraries though; you don't have to "add" anything. You'd just apt-get it within the container and it would pull down its dependencies.


This is correct. It uses features of the kernel and modern filesystems for efficient isolation.


For your sake, you better hope there won't be another ARMv7 and also ARMv8 version of Raspberry Pi, too :)


Well, these things already exist, just not as Raspberry Pis. ARM architectures are a pain, like x86 ones were in the old days (i386, i486). MIPS is even worse.


Docker already runs fine on 32 bit systems if you compile it yourself and remove the check for 64-bit-ness.


Didn't know that - I'll try it!

What happens if you put a 64-bit executable in a container, then try to run the container on a 32-bit machine? Or a Raspberry Pi/ARM device?


When you try to run an executable on an architecture it wasn't designed for, it does not run. But this has nothing to do with docker or Linux.


This, by the way, is the reason we artificially prevent docker from running on multiple archs. If half of the containers you download end up not running on your arch, and there's no elegant way to manage that - no filtering of search results by arch, no producing the same build for multiple archs (and what does it even mean to do that?) - then all of a sudden using docker becomes much more frustrating.


This sounds like a clothing manufacturer who only makes clothes in black because trying to match colors would be too frustrating for customers. At the end of the day, lots of people have multiarch networks, and don't want to have to choose between using a tool or supporting just one platform. Removing functionality does not make their lives easier.



