Show HN: Docker in the browser using x86-to-WASM recompilation (copy.sh)
119 points by g3 on Nov 1, 2022 | 30 comments


If you're bored (or your code is still compiling), you can also try:

SerenityOS: http://copy.sh/v86/?profile=serenity (one of their developers contributed PAE support to v86, which is extra cool. I believe it contains their own browser.)

ReactOS: http://copy.sh/v86/?profile=reactos

Haiku: https://copy.sh/v86/?profile=haiku

9front: http://copy.sh/v86/?profile=9front

Android: http://copy.sh/v86/?profile=android

KolibriOS: http://copy.sh/v86/?profile=kolibrios

HelenOS: http://copy.sh/v86/?profile=helenos

Oberon: http://copy.sh/v86/?profile=oberon

QNX: http://copy.sh/v86/?profile=qnx

Windows 95 with IE 3: http://copy.sh/v86/?profile=windows95-boot

Windows 98 with IE 5: http://copy.sh/v86/?profile=windows98 (run networking.bat)

Windows 2000 with IE 6: http://copy.sh/v86/?profile=windows2000 (run networking.bat)

v86 running in v86 (the inner one is running in node): https://copy.sh/v86/?profile=archlinux&c=./v86-in-v86.js

As well as most BSDs and Linuxes, as long as they still have i686 support.


Can you explain in more detail how this recompilation works? When is it triggered? Because your emulator is still very slow, and recompiling doesn't seem to help. A wiki or blog post would be helpful.


Yes, I should write about v86's internals some day. Meanwhile, the code is right here: https://github.com/copy/v86/tree/master/src/rust

It's much faster with recompilation than without, but I agree that it's slower than expected (compared to, for example, qemu-tcg).

There is still room for improvement (e.g. eflags updates, 16-bit instructions, call/ret optimisations, the main loop), but part of the problem is limitations of WebAssembly (no mmap, only structured control flow) and of browser engines (memory blow-up on large generated wasm modules, related to control flow).

The webvm folks explain the control flow problem quite well, and seem to be doing a better job than v86: https://medium.com/leaningtech/extreme-webassembly-1-pushing...


And the whole story seems to be here (available via the Exit button from any of those pages): https://copy.sh/v86/ and https://github.com/copy/v86


Gary Bernhardt's talk [1] is becoming more and more of a premonition.

[1] https://www.destroyallsoftware.com/talks/the-birth-and-death...


If you already have x86 to WASM, what do you need docker for? Seems like you already have all the encapsulation you could want at that point.


Docker is not just a container runner; it's also an API for interacting with containers, a container-creation language (the Dockerfile), and more.

Being able to run Docker gives you access to all of those tools and lets you build the container images from other projects.
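As a sketch of the "container creation language" part — the base image, binary name, and port below are hypothetical, not from any project mentioned here:

```dockerfile
# Hypothetical example: packaging a small service as an image.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get -y install --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY myservice /usr/local/bin/myservice
EXPOSE 8080
CMD ["myservice", "--port", "8080"]
```

The same file drives the API side too: `docker build` submits the build context to the daemon over its HTTP API, and the resulting image can be tagged, run, and pushed with the same tooling.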


It makes for a cool demo (and it's a reference to the recent "wasm in docker" announcement).



There's a generation of people who appear to treat Docker images as a combined packaging/distribution mechanism. Instead of learning to create a .deb or .rpm, they write a Dockerfile.

I tend not to use that sort of thing, but not just because of docker - there's a bundle of related practices that tend to turn my eye elsewhere. YMMV.


It's very true. However, what is intrinsically better about a .deb or .rpm, apart from having been there first? Are they the epitome of something? This is an honest question; I am really not sure. I vaguely remember that making an rpm was unpleasant and not very well documented, but it's been about ten years now...


An rpm just needs to add the missing scripts and binaries to your existing operating system; it doesn't need a full mini-OS image to run. So an rpm will be ~2 MB while a Docker image of the same thing might be 150 MB. Rpm and deb packages also usually contain startup scripts, so any services can be set to start when your computer starts. They can put log files and database files in your regular filesystem without any special nonsense. They run with (generally) user-level access permissions, which are much more battle-tested than root permissions through lxd. And they have full access to the computer's networking devices.

Rpm/deb packages can also install man files, command line tools, gui apps, and so on that can run directly in your operating system.
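For a rough idea of how that host integration is declared, here is a hedged fragment of an rpm spec file — the package name, paths, and service name are all made up, and the build sections are elided:

```spec
Name:           hello
Version:        1.0
Release:        1
Summary:        Example command-line tool
License:        MIT

%description
Illustrative spec fragment only.

%files
/usr/bin/hello
%{_mandir}/man1/hello.1*
/usr/lib/systemd/system/hello.service

%post
# Tell the host's init system about the newly installed unit
systemctl daemon-reload || :
```

The %files list is what lets man pages, CLI tools, and service units land directly in the host filesystem, and %post scriptlets are where the startup-script wiring happens.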


As a counterpoint, a disadvantage of deb/rpm is that the installed code can scribble crap anywhere it likes in your system. Did you just install a setuid program? Did it just install a service? Who knows!

As much as I’m not a fan of Docker, it does isolate the container’s file system and other resources from the OS, something that deb/rpm/brew etc don’t do.


I’ve been meaning to check out Nix for a while for this exact reason. I want the state of my operating system to be a simple state machine managed by a package manager. Right now the filesystem on Ubuntu and friends feels like a horrible mess of random stuff. You end up in a different OS state if apt fails halfway through installing, or if you install an old package, configure it, and then upgrade it, versus installing a new package. The classic “make install” just puts files wherever it wants. And then there’s the layered complexity of language-specific package managers, which are needed because apt isn’t up to the task.

This is one big advantage of docker. Applications are actually portable and isolated. We need more of that in our Linux distributions.


You can unpack those formats with `bsdtar xf PACKAGE` and that's something that is much harder to do with Docker.


I'm not sure if I understand the advantage here. Aren't container images trivial to export? I've done this many times as part of build pipelines
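They are. `docker save` (an image with its layers and manifest) and `docker export` (a container's flattened filesystem) both emit plain tar archives. The image/container names below are hypothetical, and since the daemon commands can't run here, the runnable part of this sketch just fakes a rootfs tarball to show the result is an ordinary archive:

```shell
# With a Docker daemon available, these are the usual exports (not run here):
#   docker save myapp:latest -o image.tar       # layers + manifest.json
#   docker export myapp-container > rootfs.tar  # flattened root filesystem
# Either way you get a plain tar; fake one to show it is inspectable:
mkdir -p demo/usr/bin
echo 'hello' > demo/usr/bin/hello
tar cf rootfs.tar -C demo .
tar tf rootfs.tar | grep 'usr/bin/hello'   # listable like any package payload
```

So the gap with `bsdtar xf PACKAGE` is mostly that you need a running daemon (or a tool like skopeo) between you and the tarball, not that the format is opaque.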


I wouldn't say "intrinsically better". I have no idea what the One True Distribution Format looks like. I'd say at this point OS-level packaging provides more options and integrates better with the wider software ecosystem.

But let me ask you this: how often do you see:

FROM ubuntu:latest

RUN apt-get -y update && apt-get -y install ...

or the moral equivalent from a different distro?

And do you ever consider the idea of your stuff being incorporated into some other container?


Resistance is futile.


The next step is to replace the kernel with a type 1 hypervisor, doing container orchestration.

Thanks to them we already have a way out of POSIX monoculture.


For driving development costs to zero.


Did I say zero? Overcomplicating development will drive costs into the negative because your payroll will decrease when people quit and no one wants to maintain that jenga tower.


v86 was also featured by Supabase, running Postgres in the browser: https://supabase.com/blog/postgres-wasm


This is awesome, we have come full circle.


Not until we can have a browser within this docker container


JSLinux [0] can run a whole Alpine Linux distribution with an X window server and a built-in browser. We are already past the days of browsers within operating systems within browsers.

[0]: https://bellard.org/jslinux/


This is how we killed Moore's law. It stopped but we didn't.


I don't see a network connection on the Docker host, so we have a little way to go yet.

(Unless someone knows how to access the network?)


Then a stack overflow error is to be expected.


Could this be used for GPU computation?


Depends on the browser. There are standards here, but not everyone is conforming.



