I wish containers hadn't created the abstractions of a registry and an image instead of just exposing it all as tar files (which is roughly what they are under the covers) served from a glorified file server. The abstraction leads people to assume there's some kind of magic happening and that the entire process is very arcane, when in reality it's just unpacking tar files.
A .tar file on a file server wouldn't suffice, because you have to store a bunch of metadata for that .tar file. But now you could argue that the metadata could just be a file next to the tar file. True, but container registries are kind of like that + optimization + some useful extra features. If you were to start from a simple file server and think about the stuff you need, then after some iterations you would actually end up with a container registry.
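The "file server plus metadata" framing is pretty close to how OCI registries actually work: blobs are stored content-addressed by digest, and a small JSON manifest ties them together. A minimal sketch (the layer bytes here are invented for illustration, not a real image):

```python
import hashlib
import json

def oci_digest(blob: bytes) -> str:
    """Content-address a blob the way a registry does: sha256 of the raw bytes."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# A pretend layer (in reality this would be a gzipped tar of filesystem changes).
layer = b"fake layer bytes"

# The manifest is just more JSON, referencing each blob by digest and size.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": oci_digest(layer),
            "size": len(layer),
        }
    ],
}
print(json.dumps(manifest, indent=2))
```

Pulling an image is then "fetch manifest, fetch each blob it names, verify the digests", which is exactly the file-server-plus-metadata loop described above.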
AFAIK this is how it worked with classical OS packages (think rpm/yum/apt/etc.)
Packages and metadata were all prepared for distribution over nothing more than static file hosting and http(s), to leverage as much existing infrastructure as possible.
The TUF spec (and the PyPI TUF PEPs) explains why a tar over HTTPS (with optional DNSSEC, a CA cert bundle, CRLs, and OCSP) isn't sufficient for secure software distribution. "#ZeroTrust DevOps"; #DevSecOps
What's the favorite package format with content signatures, key distribution, a keyring of trusted (authorized) keys, and a cryptographically-signed manifest of per-file hashes, permissions, and extended file attributes? FWIW, ZIP at least does a CRC32.
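A toy version of the signed-manifest part is easy to sketch: walk a tree, record each file's sha256 and permission bits, and you have something a detached signature could cover. This is an illustrative sketch only (the signing step and extended attributes are omitted):

```python
import hashlib
import json
import os
import stat
import tempfile

def file_manifest(root: str) -> dict:
    """Map each file's relative path to its sha256 hash and permission bits."""
    entries = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            mode = stat.S_IMODE(os.stat(path).st_mode)
            entries[rel] = {"sha256": digest, "mode": oct(mode)}
    return entries

# Demo against a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "hello.txt"), "w") as f:
        f.write("hi\n")
    print(json.dumps(file_manifest(d), indent=2))
```

Signing the JSON output (rather than each file individually) is roughly what rpm and deb metadata do at the repository level.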
We now have the Linux Foundation CNCF sigstore for any artifact, including OCI container images.
Because ld-proofs is RDF, it works in JSON-LD and you could merge the entire SBOM [1] and e.g. CodeMeta [2] Linked Data metadata for all of the standardized-metadata-documented components in a stack.
But most Linux distros are not that great either: they don't implement TUF and are not protected against some of the attack vectors it prevents. It's OK to start anew when container distribution poses a different problem anyway.
The linked page doesn't explain anything at all. It says some incredibly obvious things (remotes can be compromised and you should be prepared for that) and then goes on to say the subject fixes all that, but there's actually no explanation of how it does that or what it does to get there. It's a marketing piece, essentially.
>> The TUF Overview explains some of the risks of asset signature systems, e.g. key compromise: a single signing key that everyone shares, whose revocation can't be logged in a CT (Certificate Transparency) log distributed like a DLT. https://theupdateframework.io/overview/
>>> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.
>> One W3C Linked Data way to handle https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) cryptographic signatures of a JSON-LD manifest with per-file and whole package hashes would be with e.g. W3C ld-signatures/ld-proofs and W3C DID (Decentralized Identifiers) or x.509 certs in a CT log.
> FWIU, the Fuchsia team is building package signing on top of TUF.
>> HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.
How do these newer potential solutions compare to distributing packages and GPG-signed hash manifests over HTTP, with GPG public keys retrieved over HTTPS (HKP)? (Maybe with keys pinned in the GPG source codes for common key servers? Or why not?)
Are DNS (DNSSEC, DoH, DoT downgrade attacks), CA compromise (SPOF), and x.509 cert forgery still the significant potential points of failure? What of that can e.g. Web3 solve for?
I don't want to be dismissive, but I hate deploying my applications for this reason. I'm an application developer. I'm not averse to infrastructure as code, and containerization, and I'm happy to do ops for my preferred stack. But I can't learn all this stuff too.
I hope the image registry abstraction opens up the possibility to improve the storage and transfer of images without breaking backwards compatibility. See https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar for why tar is not optimal.
Running containers on AWS requires uploading an image to a registry/repository, but there isn't any API to actually upload the .json manifest files and .tar.gz layer files they need, even if I have them sat in a folder in front of me (compare this to Lambda, where we can upload a .zip to S3).
The worst part is that the "official" way to upload artifacts to a repository actually requires installing the 'docker' command, running some sort of "login" command in it, piping around secret tokens generated by the AWS CLI, etc.
The only other approach I could find is their "low level" chunk-based upload API. I wrote a Python script around that (pretty much just a 'while' loop), so I can avoid this docker silliness.
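For reference, the ECR flow described here is InitiateLayerUpload, then repeated UploadLayerPart calls with inclusive byte ranges, then CompleteLayerUpload. The actual calls need AWS credentials, so this sketch only shows the "while loop" part, the range arithmetic; the function name is mine, not part of any API:

```python
def part_ranges(total_size: int, part_size: int):
    """Yield inclusive (first_byte, last_byte) ranges for a chunked upload,
    the shape ECR's UploadLayerPart expects (partFirstByte/partLastByte)."""
    first = 0
    while first < total_size:
        last = min(first + part_size, total_size) - 1
        yield (first, last)
        first = last + 1

# e.g. a 25-byte layer uploaded in 10-byte parts:
print(list(part_ranges(25, 10)))  # [(0, 9), (10, 19), (20, 24)]
```

Each yielded range would correspond to one chunk of the layer file handed to the upload call, with the layer's sha256 digest passed at completion time.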
Yeah, but "fake login dance" is mostly just a consequence of the Docker CLI's design. As an API client you just get an API credential and use it to authenticate against the API.
I don't mind that registries are a thing but I hate that they're so baked into the tools, which I think is what you're getting at.
If I'm starting a project, I want everything small, simple, and self-contained within my project directory. If, right out of the gate, I'm supposed to already have a daemon running (as root it seemed for a while?) which will do the building and execution, and a "registry" set up somewhere, I've already got pieces I likely don't understand and magic commands and high level abstractions for interacting with them, and introductory material telling me "this magic command is all you have to do! look how much you don't have to worry about!"
But I am worrying about it. And that's why I love when people write these "hard way" guides. So thank you for that link.
Actually that's the process I use to (DIY) deploy several static sites from my local machine. I build the Docker image (hugo sites with nginx, exposing a single port for HTTP traffic) and save it as a tar. Ironically the base images do come from a registry, but I can't push to a public registry and don't want to host my own.
On the server where I need to run the site, I just transfer the tar, load the image, and run it. It's so straightforward that I much prefer it to being dependent on external registry sites for deployments.
Why wouldn't you just run a vanilla nginx image and mount in the folder of site content? Then a deploy would be as simple as rsyncing the built folder from your laptop to the server, no need for sending a whole image every time
`docker save` archives the entire (often huge) image. `docker-pushmi-pullyu` uses an ephemeral registry as an intermediary, so it only needs to transfer layers that have changed. It saves me a lot of time.
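The win being described is layer-level dedup: both sides identify layers by digest, so only the set difference has to travel. In sketch form (the digests below are invented placeholders, not real ones):

```python
def layers_to_send(local_layers, remote_layers):
    """Return the local layer digests the remote side doesn't already have,
    preserving the image's layer order."""
    remote = set(remote_layers)
    return [d for d in local_layers if d not in remote]

local = ["sha256:base", "sha256:deps", "sha256:app-v2"]   # new app layer on top
remote = {"sha256:base", "sha256:deps", "sha256:app-v1"}  # what the last push left behind

print(layers_to_send(local, remote))  # only the changed layer travels
```

`docker save`, by contrast, has no notion of what the other side already has, so it always writes out every layer.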
Heh, the last time I saw something called pushmi/pullyu it was clkao's svn repo sync stuff.
Is that where you got the names from or somewhere else? (the blast-from-the-past aspect of seeing those names again has got me wondering about the etymology in general :)
Heh - in college I took a network security class that required us to set up our own attack/defend environments, and one of the important parts of the report was to explain to the TAs how to set up our environment. We were able to save ourselves a ton of writing by just exporting docker images like this and giving instructions on how to load those onto the VMs :)
Someone else I knew made .deb packages, which might have been smarter, since there were some hosts we didn't containerize (mainly ones that handled routing and such; I know now we might have been able to get away with doing it, but at the time I didn't and I thought it might be too much hassle for an already complicated project)
It's interesting that skopeo [1] hasn't popped up in this discussion, partly because it's part of Red Hat's container tools alongside podman, and partly because, although it started out as a tool for examining remote images, it also supports migrating containers, and not just between registries.
From the linked website: "Skopeo is a tool for moving container images between different types of container storages. It allows you to copy container images between container registries like docker.io, quay.io, and your internal container registry or different types of storage on your local system". Perhaps Red Hat plans to roll skopeo's functionality into Podman at some point?
Anybody know of a simple/lightweight registry for local usage? Quay bills itself as a super duper enterprisey all-in-one solution. I'm looking for something more like a 'simple HTTP server with basic ACLs'.
I don't know if this is lightweight enough, but I have some experience with [Harbor](https://goharbor.io/) at our company. The ACL model it presents is simple enough. Maybe it was just our setup, but it ended up running a lot of components on our cluster, so I can't vouch for local use.
I ended up replacing it with AWS ECR. We only have a couple of container repos so ECR only ends up costing a few dollars per month. Not local, but very easy and almost free.
I use this trick to push to servers in an unnecessarily tight network I have to deploy to sometimes that can't see my source control / container registry.
But I do it for Docker. My overall sense is that Podman is trying to reach feature parity with Docker but isn't there yet. Feedback on this formulation?
There are a few places where Podman probably has to catch up with Docker, but conversely I think Docker is still trying to catch up with Podman when it comes to running rootless (i.e. running containers from a user account without root privileges).
Does Podman have enough parity that I can run our Compose stack locally? Last I tried (months ago), enough things failed that I gave up, but it would be great if developers could easily develop locally without root (which causes a bunch of permissions problems).
Thanks for the update. Last I tried podman-compose was probably a year ago and it was clearly not ready.
So according to your update it's still not ready. Podman-compose is just a shell wrapper that translates compose yaml into podman shell commands. Which is why it will never really work imo because it's a hack.
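For context, the kind of translation being described looks roughly like this. This is a simplified, illustrative sketch with made-up function names, not podman-compose's actual code; the `-p`/`-e` flags are the standard podman/docker run options:

```python
def podman_run_args(name, service):
    """Translate a (very simplified) compose service mapping into a
    `podman run` argument list."""
    args = ["podman", "run", "-d", "--name", name]
    for port in service.get("ports", []):          # e.g. "8080:80"
        args += ["-p", port]
    for key, val in service.get("environment", {}).items():
        args += ["-e", f"{key}={val}"]
    args.append(service["image"])
    return args

svc = {"image": "nginx:alpine", "ports": ["8080:80"], "environment": {"FOO": "bar"}}
print(" ".join(podman_run_args("web", svc)))
```

The criticism above is that this one-way translation loses compose semantics (networks, dependencies, lifecycle), which a real API client wouldn't.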
We really need some OCI way of declaring containers. Recently apko[1] was announced but it's for apk systems so I'm not sure how it helps me on Fedora.
I think the idea was that you should be able to use the Kubernetes format spec to specify podman Pods. But I have yet to see that work. Until then podman is just something I use locally, but I still need docker to help my devs. There is no way I can force podman on them.
> Podman-compose is just a shell wrapper that translates compose yaml into podman shell commands. Which is why it will never really work imo because it's a hack.
Well, sort of. I mean, it's some sort of Python script, so maybe it uses the docker py client instead of running docker commands via system(). I dunno.
But my experience with podman-compose has been that it's not ready, mainly for more complex compose setups with different network access. I think that is what Pods are perfect for.
Yes, Podman supports the Docker API and supports docker-compose directly, as opposed to podman-compose, which reads the compose format and generates podman commands directly.
docker-compose2 has a bug with Podman right now, but a fix landed upstream this week.
Yes, having to set up the docker group is quite painful, and especially the membership requirement can cause issues, since the account has to log in to the machine again before the new membership takes effect.
If you want to DIY a container with unix tools, this should help: https://containers.gitbook.io/build-containers-the-hard-way/