Asdf – language tool version manager (asdf-vm.com)
253 points by timhigins on Oct 24, 2022 | hide | past | favorite | 157 comments


Used this at my last company. Super painless and makes setup a breeze. Current company uses Docker and I hate it. Every company I've been at loves to throw everything into a container. Even if you set up your dev configs to maintain hot reloading, it's always slower. Node package management is also a pain no matter how you implement it with Docker, it seems.

I also hate the Docker tagline that it "eliminates 'works on my machine' issues". I believe a tool like asdf would also achieve this (correct me if I'm wrong). Docker itself can go haywire depending on the machine and you're basically in hell fighting with it just to get your dev environment working. You essentially eliminate one problem in exchange for a variety of equally frustrating challenges.


I personally love docker. Almost all of it makes sense to me. It's not perfect, but very pragmatic, almost in a blunt way.

On a workstation level, people who really know their way around stuff tend to dislike docker in my experience, and for people who don't, it's still too difficult. One problem I think it has is that docker - still talking about workstations here - mostly solves problems at the team level, not the individual level. But individuals tend to regard those problems as 'other people's problems'.

Another problem is that you can work with docker just fine while barely knowing how it works, until there's a problem. In my experience, developers tend to immediately blame docker for the problem, even when it's a misconfiguration of npm, for example, which they would also have in a native installation.

Still, docker solves a lot of problems that aren't solved by asdf, apt or rpm, which is why it continues to be relevant and used also for workstations.

EDIT: docker on windows before wsl2 was hell too, a lot of discontent about docker that people have comes from those installs. On wsl2 it's mostly fine, if you don't attempt to mount from an ntfs file system.


In my experience as web (mostly) backend developer on Linux targeting Linux servers:

- docker is good when the deployed software runs on docker too, because dev and prod are the same and on Linux the performance hit is negligible if any. If you're on Mac or Windows it seems that it can be felt.

- docker is a good way to distribute the development environment, anecdotally much better than Vagrant or bare VMs.

- However, having to always prefix commands with `docker exec container` is a little painful. Working inside `docker exec container bash` is worse.

- Sharing only volumes between the host file system and docker means that sometimes I have to copy files to the shared volume instead of addressing them where they are.

- asdf doesn't have all those problems but I must be careful to pick different ports for different postgresql servers in different projects. Docker would shield those ports inside the network of the project.

All considered, I use docker when a customer uses docker, VMs when a customer uses VMs, asdf for all the other cases including my own software which I never deployed to docker so far. If I did I'd probably work in docker because I don't want surprises at deployment time.
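The `docker exec` prefix pain mentioned above can be blunted with a tiny wrapper. A sketch, assuming a Compose service named `api` and using the v2 `docker compose` syntax (the service name, the function name `dcx`, and the `COMPOSE_SERVICE` variable are all made up for illustration):

```shell
# Hypothetical helper: run a command inside a compose service without
# retyping the `docker compose exec <service>` prefix each time.
dcx() {
    docker compose exec "${COMPOSE_SERVICE:-api}" "$@"
}
```

Usage would be e.g. `dcx bundle exec rspec` or `dcx psql -U postgres`.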


In reality, when using Docker, Dev and Prod are almost never the same. We usually install extra packages in the Dev image (e.g. debuggers or linters), we try to optimize the heck out of the Prod image, thus using a different build of the base image, etc. And all this when you are only developing on Linux for Linux.

I’m not saying this is the right way but this is very common. I also agree with the rest of your points.


Yes, I agree. I'm working on a Rails project today and the container in development has some gems we only use for development and testing: byebug, pry, rspec, timecop, simplecov, factory_bot. They won't go into the production images. The instructions for developers are

    docker-compose build
    docker-compose run --rm api bundle
    docker-compose up
api is the name of the service in docker-compose.yml

One developer has an M1 Mac and he's using something called mutagen instead of Docker's default file sharing. His docker-compose.yml is a little different and he's paying a performance penalty: his M1 is only 50% faster than my Intel from 2014 at running the test suite (50 seconds vs 75). The difference is not noticeable when running only a few tests at a time while working on a single feature.


You probably want to use

    docker compose
not

    docker-compose
The latter was the name of the Compose v1 binary, so using it may cause your command to be handled with a v1-to-v2 compatibility mode, depending on how your installation is set up. Compose v2 is implemented as a Docker CLI plugin.
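One quick way to tell which Compose a machine actually has is to probe for both forms. A sketch using standard Docker CLI invocations; which branch fires obviously depends on the installation:

```shell
# Detect Compose v2 (CLI plugin) vs the standalone v1 binary.
compose_flavor() {
    if docker compose version >/dev/null 2>&1; then
        echo "v2 (CLI plugin)"
    elif command -v docker-compose >/dev/null 2>&1; then
        echo "standalone docker-compose (possibly v1)"
    else
        echo "no Compose found"
    fi
}
compose_flavor
```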


When I invoke `docker compose up` on my project with Docker version 20.10.18 it says:

    The new 'docker compose' command is currently experimental.


I'm not sure what that means, but I don't think compose v2 is considered experimental at this point. It is included in Docker Desktop, for example. You might be running an older version. You can check `docker compose version`. Version 2.12.2 was released a few days ago.


Does he run ARM docker images or x86? I can imagine the latter would slow things down


I found docker to be pleasant to work with for support services (if your app needs postgres, redis, mail service, pdf converter, object storage, etc) in development and you need an easy command (docker-compose down; docker-compose up) to tell designers and product people to easily reset their environment to a sane state.

It is NOT great for having a live-reloading dev webservice running. People seem to swing for an all-in-one solution, but consideration is needed about which services you want in docker and which not. Docker is great for preconfigured database environments that are easy to tear down and start from scratch, and it works across multiple platforms. Docker is bad for a local dev server.
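The "easy reset" command for designers and product people can be wrapped so there is only one thing to type. A minimal sketch using the v2 `docker compose` syntax; the function name is made up, and `-v` additionally wipes named volumes (i.e. the database data), which is usually what a full reset wants:

```shell
# Tear everything down, including volumes, then bring it back up detached.
reset_env() {
    docker compose down -v && docker compose up -d
}
```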


> It is NOT great for having a live-reloading dev webservice running.

Oh my god this.

I was running a fastapi server on my m1 Mac in an arm Linux docker container. Everything else works great in this, but the live-reloading server was running at 200% cpu and ate 50% of my battery life!


I've found a lot of docker wins come from installing modules which have native bindings. I still hate running my dev server in docker, though.


If you genuinely think you have a better way to do it, propose it. Don't sit back with the terrible attitude of "my company does it this way which sucks" - why not try to make things better?

There are two possibilities - a) you are right, you know something your company doesn't, and by proposing your new plan in a constructive manner you might make life better for everyone b) you are wrong, you don't fully comprehend the surface area of the software you are working on, and your suggestion is short sighted and doesn't account for all of the requirements.

Either way it is helpful to put it out there - so long as you do it from the right place, with the right intention. Either you'll humble yourself, or you will eliminate an unnecessary dependency and make life easier for your team.


Huh? The whole comment is comparing asdf to docker and saying asdf is better.


He's talking about doing something about it in his job, instead of complaining about it on HN.


They led by saying how much they liked asdf vs docker. Ergo: the proposed solution is asdf, no?


Then why does their company still use docker for that? Did they fail to convince anyone else? Did they disagree? Did they not even bother bringing up their frustrations? People love to complain, but rarely try to contribute to actually make things better.


The tool doesn't have to be just better. It needs to be so much better that it's worth: changing defaults / what people use now, changing documentation, verifying it works for all projects, ensuring that nobody has use cases that asdf doesn't address, and actually spending time with the decision maker to present your idea. For a larger company that may mean days of work.


Days?

In the "big ol'" companies I worked at, something like that takes months! If even possible...


Ironic, it would have been far more helpful to just ask OP "what are the reasons your company still uses Docker?" instead of mindlessly complaining.


Counter argument: your version is fine and may have helped to solve one particular issue in one particular company.

But countless developers in countless companies have each their own set of frustrations where they sense that something could be improved but don't know what to do with the frustration.

Frustrations are a useful symptom but learned helplessness is a terrible thing that can lead to anything from being annoyed to leaving your job to depression.

I would argue that the response was in fact a very valuable lesson.


That right there is a textbook ad hominem fallacy!

It is totally valid to point out failures in something without knowing how to put them right, or being in a position to do so.


Exactly, let's first acknowledge there is a problem, then we can find a solution.

Complaining for complaining's sake is not helpful, but don't dismiss problems just because no one can think of a better solution at the same time.


Sometimes the problem is more that the grass looks greener on the other side. When you realize there are a bunch of day 2 problems for the "better" solution you didn't uncover beforehand, many will enter sunk cost territory


> I believe a tool like asdf would also achieve this (correct me if I'm wrong).

This is true for very simple applications, where you’re just running `npm ci` or `pip install`, but breaks down as soon as you have sophisticated dependencies with system / library requirements or dependencies (e.g. databases, caches, message brokers, queues) that are beyond the scope of asdf.

Also, since you’re probably deploying containers to production, it’s useful to have a similar environment for local development so that you know what will actually happen in production.


Best dev setup I've seen has been to use Docker compose for all of those dependencies, but just use asdf for the main application development. You get all the dependency wins of Docker for the peripherals but with the faster native local dev experience.


> since you’re probably deploying containers to production

Citation needed I feel - I’d wager the number of containerized deployments pales in comparison to the number of binaries scp’d into a production host.


I’d also be interested to know the answer!

I said “probably” because the vast majority of deployments I’ve done over the past decade have been containerized, but I have no trouble believing that my experience isn’t indicative of the average.


Absolutely.

Docker is NOT about local dev, it's about deployment.

We forgot this...


It can be anything you want it to be, especially inside the confines of your own setup.

Do you want to quickly debug some software that relies on an older version of Postgres 9 but you don't want to clutter your host machine? Just run the following command to fetch and start an instance. With --rm flag it's going to be removed as soon as you terminate it:

    docker run --rm -p 5432:5432 postgres:9


It can be.

But that was never the primary purpose of Docker. Like I can use Mac Minis as door stops, etc.

Docker runs best on top of Linux. As in, actually run a Linux host; everything else, in my experience, is a hack. As long as you have internet access, spinning up an EC2 instance to test stuff shouldn't be too hard.


> Docker is NOT about local dev, it's about deployment.

I mean... if Vagrant can be about local development, I don't see why Docker can't be about local development too.


> Also, since you’re probably deploying containers to production, it’s useful to have a similar environment for local development so that you know what will actually happen in production.

This is such a complete and utter lie and I'm surprised people in 2022 still believe it.

You do know what's happening when you run Docker on your Macbook, right? Right?


Ignoring your tone, here’s a concrete example: I was debugging a HTML-to-PDF service that was crashing for some payloads, but I couldn’t reproduce the problem locally.

I used Docker to run the application using the exact same image that was deployed to production, and was able to replicate the issue.

Why? Because the PDF renderer depended on loads of finicky dependencies (e.g. linked libraries), and it turns out that the default fonts in our base image didn’t support the Unicode code points in the example payload. I couldn’t replicate this outside of Docker because I have a completely different set of linked libraries and a different font stack.

Happy to answer any questions you have.


You always have that option though, and I assumed it was a fairly common debugging thing to have to do if you use docker enough - not just for the times it works (i.e. you find a bug between the layers) but for all the times it isn't the culprit.

The other post mentioned setup, so I thought he meant they imposed pre-set-up containers for developers to use?


> You do know what's happening when you run Docker on your Macbook, right?

A Linux vm spawns a process with new namespaces - which is often exactly what happens in production as well.


The only problems I’ve ever seen a coworker or myself encounter with docker is not knowing it well enough. Admittedly it has a strong learning curve. There are things that will not work unless you know certain underpinnings of docker, dns/networking, etc. But to say it doesn’t work and goes haywire doesn’t make sense. It’s an extremely stable piece of software.


> I also hate the Docker tagline that it "eliminates 'works on my machine' issues".

Docker basically just takes a snapshot, which hides the problem rather than fixing it: when we want to change something, like updating a dependency, we're back to the same dependency hell.

Even worse, those snapshots are often not reproducible (e.g. running things like `apt-get install -y foo`, which depends on the latest contents of third-party servers). Again, Docker tries to hide the problem by putting snapshots into a cache.

To avoid these problems, we need the discipline to do sensible things (e.g. using specific .deb packages rather than apt-getting whatever's latest, or using something more brute-force like Nix). Yet once we do that, there's usually no point doing it with Docker at all, since those commands work perfectly well outside of a container (if we want a container to deploy, we can tar up the resulting directory).


I'm not sure what you mean by "snapshot" when most docker images are instructions to (more or less) reproducibly create an image.

I hope you don't COPY your entire system into the container at least.


> most docker images are instructions to (more or less) reproducibly create an image

I can't recall ever encountering such a thing myself. On the other hand, I've seen LOTS of this sort of thing:

  RUN apt-get update && \
    apt-get -y install apache2
The above is taken from official AWS documentation[1] but it's pretty rampant.

As I mentioned above, when the instructions are reproducible, there's no point using Docker to run them; a shell script would suffice.

[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...


It depends a lot on your base image. If you use Debian locked to a release, you'll likely get the same Apache version plus small fixes / security fixes (whatever the exact Debian policy is).

That's not reproducible in the sense of "reproducible software", but it is usually good enough to build other things on top of.

If you want reproducible in the sense of unchanging/same digest, you should make a custom base image with the things you never want to change and pull it from somewhere.


"Base image", AKA a cached snapshot, like I said ;)


I use asdf, nix-shell, docker, docker-compose, and kubernetes... just use what makes sense.


What use case are you addressing here? Docker can be used to maintain development tools in various ways, but I see it as mostly operating on a different level from asdf.

For example, asdf can install tools on a developer's system. asdf can route invocations to the appropriate version of an installed tool.

Docker can be used to install tools in isolation by way of an image build. That image can be shipped off, so that developers can pull the image, and do not have to run an installation procedure. Docker can pull and run the appropriate version on demand upon invocation.

It's also not a binary choice. You can use asdf in a Docker container, and you could use Docker in an asdf plugin.


I've written a rust version of asdf over the last few months, both as an exercise and to fix a few issues I've been having with it - like noticeably slower tool startup times due to the shims resolving the version (my version basically works like direnv under the hood and hooks into the shell directly) and me just wanting a single "use" command which installs a plugin, downloads the requested version and sets it as active. It works very well and I've been daily driving it for a while.

Having the ecosystem of asdf plugins which are basically just shell scripts has been a huge boon. It's been a breeze to work with, and most of the plugins are well written.

Now, I've been contemplating switching to NixOS, but most version managers don't work at all with it due to dynamic linking. I absolutely love the idea of NixOS, but this has really bummed me out. I feel like the nix language is still a little clunky for general use, so as long as there is not a straightforward solution like having a tool-versions file I'm really hesitant to make the full switch.


There was a recent post about asdf performance from the maintainer that is quite illuminating:

http://stratus3d.com/blog/2022/08/11/asdf-performance/


I feel like they're just hitting the limits of a bash+shims based approach. I understand why they've done it, but in the end an implementation in a compiled language will beat that approach most of the time. There are issues since a shell hook will only work as long as you're, well, using a (supported) shell.

However, it's surprising to hear that they've been focusing on UX. The fact that you have to manually install a plugin, then install a version, and then finally set it, without the tool suggesting any of those steps to you when you forgot one, really makes the tool annoying to use when switching versions a lot.


I don’t agree.

As a proof of concept I wrote a Fish-native version of the function that figures out which local version of a tool to use. I actually use it to show the currently selected version in my shell prompt; the asdf version was unusable in a shell prompt.

It goes through the parent directory structure up until the top level root directory and finds the first .tool-versions file and parses out the version of a specific tool.

The asdf native shim takes ~100 ms in Bash (no idea if it can be optimized, but I assume so). My Fish version takes ~1 ms.
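For comparison, the resolution logic described above is only a few lines of POSIX sh as well. A sketch with no asdf internals involved, just the `.tool-versions` convention (walk up the directory tree, take the first match):

```shell
# Walk up from the current directory to /, find the first .tool-versions,
# and print the version pinned there for the requested tool.
tool_version() {
    tool=$1
    dir=$PWD
    while [ "$dir" != "/" ]; do
        if [ -f "$dir/.tool-versions" ]; then
            awk -v t="$tool" '$1 == t { print $2; exit }' "$dir/.tool-versions"
            return
        fi
        dir=$(dirname "$dir")
    done
}
```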


I'm not familiar with the performance characteristics of fish, so I can't really speak to that. I only remember that the developers of asdf themselves had problems with resolver/shim performance and even temporarily implemented only that specific part as a go executable, and projects like asdf-direnv also try to work around the problem.


I don't use NixOS myself, but have Nix installed on my Mac, and it seems to provide all functionality of package or version managers I needed.

I think, though, it is more complex because it is a programming language that provides this functionality instead of a purpose-built tool like asdf.

For my needs I created a framework for development: https://github.com/takeda/nix-cde to avoid cruft of including the same things over and over in my projects.


Interesting, thanks for sharing! I'll have a look over the weekend.


I first started using nix thinking of it like a version manager also, but that’s not what it is, and thinking of it that way will only bring pain.

Nix is a package manager (like apt or whatever), and like package managers, it works with a set of vetted, decently verified package versions (NixOS releases). Mostly this is fine. It’s easy to use nix stable for most stuff and nix-unstable for anything where you need newer stuff. You can then configure “overlays” to the set of packages where you specify how to build particular versions where you need it.

It took me a long time to make the mental shift from nix being something like pip or npm or asdf or pyenv or whatever to thinking of it like a true linux distribution’s package manager, but it helped everything to click once I did


Thanks, that's a helpful perspective. I saw that there's also specific packages for things like node versions, so that might be an alternative to writing overlays in some cases. I'm just concerned about packages without any support being extremely cumbersome to install, since it's hard to run regular binaries.


Thank you for sharing - I've written my own "version" of asdf in Bash too, and I'm getting to the point where I would like to write some portions in Rust.

Does qwer support per-directory, local versions of plugins? I'm finding that that's difficult to do without shims...


Yes it does! It uses the pre-cmd hook of shells and sets the environment, similar to how direnv works.


Cool I see - I override cd, but now I'm pondering if I should change approach. But I guess what I was really asking is how you do it in non-interactive mode?


> noticeably slower tool startup times due to the shims resolving the version

I migrated to asdf from nvm because of nvm's very slow startup time. I haven't noticed excessive startup time on asdf yet. I wonder what's the issue with your asdf installation.


Would you mind sharing your tool?


You can find it at https://github.com/happenslol/qwer . Be warned though, it's heavily lacking in documentation and polish, although all core features are working. I've just added a bit of info to the Readme, hit me up if you have any questions.


This looks really promising. I love seeing these tools written in languages other than the thing they're targeting.

As an example I use fnm (https://github.com/Schniz/fnm) for managing JavaScript versions. It runs so fast (compared to nvm) I'm inclined to think something went wrong and it silently failed, but it never does!


This is really cool! You're pretty much solving all the issues I've had with asdf! Keep up the great work!


Appreciated. Will try next weekend.


"ASDF (Another System Definition Facility) is an extensible build facility for Common Lisp software. ASDF comes bundled with all recent releases of active Common Lisp implementations as well as with quicklisp, and it is the most widely used system definition facility for Free Software Lisp libraries."

Does asdf come with a plugin for asdf & quicklisp? Would've preferred for the new guy on the block to pay respect to the old guy on the block and use a different name..


Yeah really not a great name choice hey.

asdf for Common Lisp has been around for years, used by the clergy of programming basically, and is still widely used today


One tip that I didn't know about and didn't seem to be documented at the time I was looking: if you want to add a plugin for some new project but just defer to what you were already using for everything else, you can set your `global` option to `system`. e.g.:

  asdf global python system


The one thing I really miss from rvm is the ability to do this only in the current session rather than globally.


I might be missing something, but I think you can use `asdf shell` for this: https://asdf-vm.com/manage/commands.html


Oh, that's exactly what I wanted, yes.


I have been using asdf for quite a few years, and I've always been impressed. It's honestly a breath of fresh air to only have a single set of commands to remember for node, go, ruby, python, even crystal. For node, it even respects existing .nvmrc files.


Yep. I love being able to do `asdf install`, and it goes through the .tool-versions and installs everything needed.
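For anyone who hasn't seen one, `.tool-versions` is just a plain text file at the project root, one tool per line. A hypothetical example written into a scratch directory (versions are illustrative; the tool names match asdf plugin names):

```shell
# Write an example .tool-versions into a scratch directory.
proj=$(mktemp -d)
cat > "$proj/.tool-versions" <<'EOF'
nodejs 18.12.0
ruby 3.1.2
elixir 1.14.1-otp-25
EOF
```

Running `asdf install` in that directory would then fetch all three pinned versions in one go.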


After the pain of pyenv and its terrible compatibility with poetry, asdf has been a dream come true. It just works, every time. No fragile config. No mess.


What sort of compatibility issues exist with pyenv and poetry? Curious as I've never mixed the two...


One of the things pyenv lets you do is use a specific version of Python within a folder. Poetry will sometimes fail to detect it and use whatever version is set globally instead.


Thanks, that is quite unfortunate


Rarely have I seen a tool adopted so swiftly and at such scale. It eats the role of hundreds of other language- and tool-specific version managers, and whatever the secret sauce is, it's nicely fast and relatively low-pain. Hats off to asdf. Major, major delta in devs' lives from this recent-ish, more-meta contender.


> whatever the secret sauce is, it's nicely fast

Mostly by reusing existing version managers, which have been developed and optimised over the years. Asdf itself is a relatively thin wrapper really.


I don't think I've even seen an asdf plugin that directly uses a version manager. They generally seem to either A) download prebuilt binaries or B) fallback to building from source. Either way the installs go straight to ~/.asdf/installs.

The only exception I can think of is any dependencies on libraries that may or may not be installed by the system package manager. Even then, those dependencies are often optional.


Yeah, I should've been more precise. The asdf plugins use parts of existing systems. For example (python/ruby/node)-build which mostly come from other version managers. Asdf does provide extras, but the plugin-specific code is tiny: https://github.com/asdf-community/asdf-python/blob/master/bi...


The Erlang plug-in does this. It uses the older and well-tested Kerl under the hood, but you get the nice UI of asdf on top instead.


The secret sauce is a low barrier to entry. An asdf tool is just a shell script that does the work to list packages, download and install them, etc. You can make a simple tool script in an afternoon.

The low barrier to entry is also a bit of a curse too. The quality of asdf tool scripts varies wildly. Anything not maintained by their core org is a crapshoot in my experience. Many of those smaller tools don't support ARM or have lots of undocumented dependencies or quirks.
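As a concrete illustration of that low barrier, here is roughly what a trivial plugin skeleton looks like - a couple of executable scripts under `bin/`. Everything here (plugin name, version list, download URL) is made up; a real `list-all` would typically query a releases page, and asdf does pass the install target through `ASDF_INSTALL_VERSION` / `ASDF_INSTALL_PATH` environment variables:

```shell
# Scaffold a minimal asdf-style plugin in a scratch directory.
plugdir=$(mktemp -d)/asdf-mytool
mkdir -p "$plugdir/bin"

cat > "$plugdir/bin/list-all" <<'EOF'
#!/usr/bin/env bash
# Print all installable versions on one line, oldest first.
echo "1.0.0 1.1.0 2.0.0"
EOF

cat > "$plugdir/bin/install" <<'EOF'
#!/usr/bin/env bash
# asdf passes the target through env vars.
mkdir -p "$ASDF_INSTALL_PATH/bin"
# e.g. curl -Lo "$ASDF_INSTALL_PATH/bin/mytool" \
#   "https://example.com/mytool-$ASDF_INSTALL_VERSION"   # hypothetical URL
EOF

chmod +x "$plugdir/bin/list-all" "$plugdir/bin/install"
```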


`asdf` is a great tool but I'd like to remind people that it's probably a smart idea to be agnostic to it, for the very same reasons they should've been agnostic to their previous technologies of choice that `asdf` had replaced.


I have an issue out to make it agnostic of keyboard layouts if that helps! https://github.com/asdf-vm/asdf/issues/930


Hmm, yeah I saw that one. Still don't think the motivation is super strong


  alias aoeu='asdf'
Seems like a reasonable shortcut compared to trying to do keyboard layout detection.


I've been using this at work, sharing with my colleagues, and integrating this into our default CI/CD environment. It's awesome and has taken away the thought of "how do I install version XYZ of tool ABC here?" entirely. Just asdf what I need, done. Huge productivity boost.


I wonder how many millions of lines of shell configuration have died by people switching to asdf. Fantastic tool


Yeah, what is more... it is really helpful and easy to use. Given that I work with Ruby, JS/node & Elixir, it's been such a pleasure to have one tool managing all packages.


Ironically, it’s written as a shell script


While asdf solves lots of pains, I still feel it solves the wrong problem. What I've instead started to do is build docker images for various needs. Like, multiple of our projects need terraform, but all different versions. A project should then include that in its dev dockerfile. No setup, just git clone a project and ready to go. No messing around with "to install and run X you need to have Y already installed, which half of you don't, so for some reason it won't work in your environment".


That’s all well and good if you’re on Linux, but docker is super slow on Mac and Windows. I’ve often worked at places that develop iOS apps, so we were stuck on Macs, and thus alternatives to docker are very welcome.

Asdf can automatically install tool chain versions based on a plain text file stored in your repository, so it’s pretty plug and play too.


VirtioFS[0] is making a _huge_ difference on the macOS side. I hated using Docker on macOS for a long time and started testing it as soon as it came out. Had some bad bugs initially and apparently does still have kinks being worked out but I’ve been using it consistently for a few months now and actually forgot all about how terrible the experience used to be. It is such a great improvement.

[0] https://www.docker.com/blog/speed-boost-achievement-unlocked...


I feel docker is super fast on Windows, as long as the files you mount live on the WSL side. For instance, most of my npm stuff is even faster there than it is with lots of small files on the Windows side.


I get that using docker is more painless, but I still prefer running things outside of a container when the setup isn't too complex. It is mostly because I have a heavily customized dev environment, including zsh and all my other tools, so developing in docker always makes me miss a lot of things.


In that case, if you can get it running in nix then that's an even better option.


Asdf is a great boon when upgrading language versions in a git branch and needing to switch back and forth between that branch and another feature branch without the upgraded language version.

Switching back and forth between branches with .tool-versions seamlessly switches to the correct version of the language and it “just works”. Trying to accomplish the same purely via docker is slow and painful.


> No setup, just git clone a project and ready to go.

Okay, a worthy target.

> No messing around

I will list the layers that I guess you are using:

- docker client binary

- a command to run the thing <---

- user-specific docker configuration

- docker daemon running as root

- system-specific dockerd configuration

- Dockerfile

- embedded in it, a small shell script to download the Terraform binary <---

- files scattered on /var/lib/docker/* which never require cleanup

- DNS implementation

- virtual bridge and other kernel-ish entities

- TCP proxy

- a running Terraform binary <---

I marked the layers that, in my opinion, would be enough to achieve the same goal. The explanation is that Terraform is a self-contained binary that is not trying to scan system-wide or homedir-wide for files that can and will break it into a dozen pieces. It is confined to the repo.


tfswitch might help with this particular issue of terraform versioning:

https://tfswitch.warrensbox.com/

Even then some versions of terraform providers are not compatible with M1 macs. Docker would help with that probably, but so can: https://github.com/kreuzwerker/m1-terraform-provider-helper

Perhaps these sorts of issues support the benefits of per-module docker images?


Agreed, I tend to treat my raw system as read-only and I much prefer it this way. Not only do I get the reproducibility, I also get the ability to more easily prune unused software.


Works well without docker installed, does it? ;)


I'm a big fan of asdf, I switched to it from nvm back when I found it and was using node. To be honest I don't use it much anymore because I've mostly converged on vscode dev containers. But anything I need to do directly on my local machine or for running the odd random tool it's a life saver. Dealing with things like nvm, n, rbenv, and kerl were such a pain before.


I also use it to run different versions of PostgreSQL without having to use docker.


Oh sheet, I never realized you could do that! Thank you.


Probably the only sane way to install Erlang / elixir.

Sadly, reuses the name of a completely unrelated common lisp tool.


Been using this for years, one of my favourite tools. Ruby, nodejs, python, golang, even direnv - all managed seamlessly. Never had problems with it when things like rvm or nvm would just break one day for no reason, never had it happen with Asdf.


In a very quick & dirty read I couldn't understand what it's for. Could someone give me a practical example? I think a practical example is lacking in the introduction.


I think what you're missing is that language runtimes typically install themselves globally, and major release versions typically have issues with backwards compatibility. If you install Ruby 3 globally, your Ruby on Rails project programmed on Ruby 2 may not work. Thus, there is a need to be able to install multiple versions of runtimes, and the ability to cause a particular version of a runtime to be used depending on the project you are in. This allows the development team to decide if and when to make runtime version changes.

My team maintains multiple projects that are over 10 years old with a mix of Ruby on Rails and JavaScript. The code base is large, so there is pretty much a guarantee that going to the next version of Ruby or Ruby on Rails or Node will introduce problems. We lock down all the versions for everything so that we don't have to worry that deploying to a new environment (new server, developer, etc.) will pull in different versions and force us to change priorities to fix things. Instead, we periodically review our projects for upgrades, find upgrade issues and fix them, lock new versions, and deploy. Some of the code base is in legacy status. It does what it needs to do, and we have no need to change it unless some security issue is found.
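In practice, onboarding a new developer or server looks something like this (sketch; the repo URL is made up):

```
git clone https://example.com/our-app.git
cd our-app
asdf install    # reads .tool-versions and installs every pinned runtime
```

One command, and everyone is on exactly the versions we locked.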


I feel your pain, I always upvote people who summarise links on HN because so often the linked info leaves me in the dark!

There was a discussion about asdf before[1] with a blog post[2] about someone's shift to using asdf which may help, it's pretty straight to the point.

[1] https://news.ycombinator.com/item?id=30917354

[2] https://jinyuz.dev/2020/07/switching-from-pyenv-rbenv-goenv-...


The most common need for this kind of thing is when you write software that targets a very specific version of a given binary (e.g. a ruby/python/node interpreter) and that version differs from what you have installed globally.

This is often the case when the software you are writing is a web application that has one intended production deployment target, rather than a library or even a shrink-wrapped product that might need to work in diverse installations.

If you just have one of these projects, you can just install the dependency in user-space and update your user's startup files to point to it and be happy. But if you have two or more, or the version changes often (hopefully it does, so you can scoop up security and other updates), then a version manager helps with the toil of swapping back and forth.

Like many of its predecessors, asdf not only gives you a set of commands to swap these versions on demand, but it also creates "shims" to automate this swapping behavior when you enter and exit directories with an appropriate configuration file.

The "killer feature" of asdf (versus rbenv, pyenv, phpenv etc.) is that it is an extendable toolkit that gives you the same tools for dozens, maybe hundreds of different plugins contributed by the community. It is a polyglot web programmer's friend.


My problem is that I know what it's for, but I don't understand how it works. The referenced page contains a "How It Works" section, but it gives me no useful information:

   > Once asdf core is set up with your Shell configuration, plugins are installed to manage particular tools. When a tool is installed by a plugin, the executables that are installed have shims created for each of them. When you try and run one of these executables, the shim is run instead, allowing asdf to identify which version of the tool is set in .tool-versions and execute that version.
There is a hyperlink on the word "shims" to the Wikipedia article https://en.wikipedia.org/wiki/Shim_(computing), which explains the generic concept of shims. Not helpful.


Most likely $PATH and variables like e.g. $JAVA_HOME get modified so that asdf captures any invocation of commands like "python" in the shell, and runs its own code instead of /usr/bin/python. Then it can decide which version of Python to defer to.

Edit: here it is: https://github.com/asdf-vm/asdf/blob/v0.10.2/asdf.sh#L30-L31
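Right, that's basically it. Here's a toy version of the shim idea, with a fake install tree in a temp dir, so you can see the mechanism end to end (this is an illustration, not asdf's actual shim code):

```shell
#!/bin/sh
# Toy shim demo: a fake "installs" tree plus a shim that walks up
# from $PWD looking for .tool-versions and execs the pinned binary.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/installs/nodejs/16.18.0/bin" "$sandbox/project"

# A stand-in for a real node install that just reports its version.
cat > "$sandbox/installs/nodejs/16.18.0/bin/node" <<'EOF'
#!/bin/sh
echo "v16.18.0"
EOF
chmod +x "$sandbox/installs/nodejs/16.18.0/bin/node"

echo "nodejs 16.18.0" > "$sandbox/project/.tool-versions"

# The shim: resolve the pinned version, then exec the real binary.
cat > "$sandbox/node-shim" <<EOF
#!/bin/sh
dir=\$PWD
while [ "\$dir" != "/" ]; do
  if [ -f "\$dir/.tool-versions" ]; then
    version=\$(awk '\$1 == "nodejs" { print \$2 }' "\$dir/.tool-versions")
    exec "$sandbox/installs/nodejs/\$version/bin/node" "\$@"
  fi
  dir=\$(dirname "\$dir")
done
echo "no version pinned" >&2
exit 1
EOF
chmod +x "$sandbox/node-shim"

cd "$sandbox/project" && "$sandbox/node-shim" --version   # prints v16.18.0
```

asdf does the same thing generically: one shim per installed executable, all pointing back into the asdf resolver.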


Love this. Can never remember the syntax for installing plugins vs. specific versions without double-checking though.


Nothing wrong with not remembering syntax for a rarely-used functionality. It's not like you install a plugin every day to warrant a muscle memory. I use tar once or twice a month and still need to double check the parameters.


`tldr-pages` has a page for asdf, as a quick and easy reference for these common operations.

Web interface: https://tldr.ostera.io/asdf

Other clients (ways to access it): https://github.com/tldr-pages/tldr/wiki/tldr-pages-clients


I tried asdf and direnv in the past but they did not cut it when working with Java and Java related tooling. direnv was slow and asdf does not have all the tooling I needed and use. I started using sdkman.io for Java development and it is very complete, up to date and speedy.


What tooling were you missing working with asdf and java?


Good to see it's still kicking; longevity is hard in the CLI space for tools that aren't OS built-ins.

Works great in itself; however, PostgreSQL version upgrades are quite a hassle. That's the plugin's territory, though some protocol with the core could make it seamless. I haven't upgraded my postgres for a while (on 12.4 right now).

Not sure if it still requires you to do:

`POSTGRES_EXTRA_CONFIGURE_OPTIONS=--with-openssl LDFLAGS="-L/usr/local/opt/openssl/lib" CPPFLAGS="-I/usr/local/opt/openssl/include" asdf install postgres x.x`

And run `pg_upgrade` yourself moving data from previous version to the new one's directory.
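If it helps anyone, the upgrade dance is roughly this (a sketch; the data-dir layout is what I remember the plugin using, so double-check paths on your machine):

```
# build the new version (the OpenSSL env vars above may still be needed)
asdf install postgres 14.5

# migrate the old cluster into the new one
pg_upgrade \
  --old-datadir ~/.asdf/installs/postgres/12.4/data \
  --new-datadir ~/.asdf/installs/postgres/14.5/data \
  --old-bindir  ~/.asdf/installs/postgres/12.4/bin \
  --new-bindir  ~/.asdf/installs/postgres/14.5/bin

asdf local postgres 14.5
```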


Downside: it's all shell scripts. Upside: it's all shell scripts.

Seriously though, it's pretty easy to create an asdf plugin, and it works great. But it would be great if there were a static executable to handle it all.

A couple projects out there come close, but need people to contribute code to finish the most useful functionality. One example is https://github.com/marcosnils/bin - the developer is fully in favor of improvements and added features, but needs someone with the free time to add them.


I love asdf, switched from nvm, jenv, pyenv to one tool

Sadly it's not available on Windows, so I had to use WSL to run everything. Development on Windows is so wonky once you've switched from a Unix-based system.


Just use nix and write shell.nix file.
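For anyone who hasn't seen one, a minimal shell.nix looks like this (package attribute names vary by nixpkgs version, so treat these as illustrative):

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [
    pkgs.nodejs-16_x   # pick the packages your project needs
    pkgs.ruby
  ];
}
```

Running `nix-shell` in that directory drops you into an environment with those tools on PATH.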


Not sure if /s was implied here, but there's an issue with this. Specific versions of most runtimes require hunting for repos that replicate the xyz-build tools but for nix. There isn't a nix-native version of "give me ruby 2.3.4" (unless you want to dig through history and hope it was included).

So yeah, there's https://github.com/bobvanderlinden/nixpkgs-ruby and others - but you have to find them yourself. (until someone makes a flake to integrate them all, I guess?)


There are lots of ways nix can be used. Its framework and shell.nix let you build any sort of environment you want. E.g. if you don't want to use the nixpkgs ruby, you could install rvm in the env and have the shellHook fetch the correct ruby. Or use nixpkgs-ruby, as you found out. It's relatively easy to pull any sort of nix libraries into your own .nix file (and even easier with flakes). The most powerful thing about nix is that every derivation is cached, so you can have your own binary cache shared somewhere, and suddenly everyone's build times go down, which is especially useful for complicated dev environments.

For example, I cross-compile rust code to arm64, build websites, and deploy the whole infrastructure to AWS, including the said rust code as AWS lambdas, from a single nix derivation. The cross-compiling toolchains, etc. take time to build, but then they are cached, and the only stuff that gets rebuilt is the stuff I'm actually developing / changing. This is the backbone of nix really. You may not think this is that amazing, but nix is built from the ground up so that it's very difficult to write anything impure, anything that can't be cached or does not cache properly.

For my work, I've written shell.nix for every project I'm developing. This means my own personal environment stays clean, but for projects I always have the environment I need up in seconds. And if I start running out of space, I can just run nix-collect-garbage and forget about it. This is especially useful for me, as I do contract work and don't use the work PC just for one company.

I haven't actually used NixOS yet, but if I ever have to spin up new servers, I'm considering it. I have a Ryzen machine that I use for gaming and sometimes building larger stuff, and I'm feeling like I want to migrate that to NixOS.. maybe someday .. :)

Also, if you are a Mac user, you really should do yourself a favor and switch away from homebrew


I was highlighting the difference in ease of use. While you can do anything in nix and provide versions you want, it's not as nicely wrapped as asdf. For some things supported by asdf, like elasticsearch, nix requires a lot more work. It's a bit like the old dropbox-via-rsync comment - it's true you can do these things through shell.nix, but that's not why people use asdf. There's space for a flake wrapper which will make things much easier.


Yeah I wish you could just start with a `.tool-versions` and nix would generate a development flake. Someday!

That being said, the task of debugging a deprecated tool version in asdf just one time makes the investment in the nix awkwardness worth it.

the pitch for nix isn't that it's a nicer tool than, e.g. `asdf`, but rather, that it will probably work in five years. frightful flashbacks of trying to get nokogiri to build.


Some people are doing these kind of wrappers, for example: https://github.com/luispedro/nixml

Machine learning people usually use anaconda, which is all sorts of mess... But honestly yeah, I think it's worthwhile to invest in learning nix simply to guarantee your environment still works for years to come and isn't affected by side-effects (something outside of your environment description actually making the whole thing work, or making it not work).


we're working on something like this :) Check out https://floxdev.com


Slightly off topic, but that page lags like crazy on ff mobile. I was trying to read/scroll it and it wasn't a great experience.


There is always the option of writing your own derivation for specific versions too, however.

I don't think mentioning Nix is meant to be sarcastic - this is a function the package manager has built-in by default, so it is worth mentioning.


This is horrible advice.

Use a flake.nix


I haven't ever needed flakes myself, but I guess this was sarcasm as it's nix either way :)


Oh yes sorry, there was an implied /s there.


Love asdf. Been using it for python, java and ruby for some time and it does away with the pain of having multiple version managers. Highly recommend.


Being a bit of a tool myself I'd like to define a .toolfile for the whole internet:

Convert all data / markup to s-expressions

Make all code lambda calculus in some paren form

No more "xml being manipulated by a scheme designed to look like imperative language" (javascript manipulating the DOM)

Thank you have a nice day.


One version manager to rule them all


I've been curious about asdf for a while, but looking at the plugins I don't find anything about gcc or clang for instance (but curiously, cmake).

Though asdf itself is language-agnostic, is it in practice used more for web-development or something?


The plugins are community based; they get created based on need. It looks like most people don't want to mess with multiple gcc/libc version installations


Or, those that do don't use asdf for whatever reason.


asdf is very nice, with the exception of its interface. You have `asdf <context> <command>` for some things and `asdf <command> <package>` for others. It's actively hostile to memorisation.
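For example (from memory, so double-check the exact forms):

```
asdf plugin add nodejs        # noun first for plugin management
asdf install nodejs 16.18.0   # verb first for version management
asdf local nodejs 16.18.0
```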


I discovered this tool a couple years ago since I’d type asdf into Google to test my internet and it started topping the results list. Great tool!


Is ASDF supposed to work with fish and macOS?

My boss loves ASDF and I liked the idea but I tried to use it three times and failed


Yes it does! Although I recommend not installing asdf via homebrew and installing it via git instead. It makes updating more seamless.
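The git route is just this (branch shown is the release tag current at time of writing; check for newer):

```
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2
# then for fish, in ~/.config/fish/config.fish:
#   source ~/.asdf/asdf.fish
```

With the git install, `asdf update` pulls the newest release later, which is what makes updating more seamless than brew.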


I just joined a company in the past few months that uses Macs and pretty much all dev environment documentation insists on using ASDF. I had a lot of problems, and am still having one or two, in a few of the codebases I need to work in.

I think the main thing is just having consistent documentation, especially for global settings.

But other than that it definitely "works" on MacOS; can't comment on fish, I use zsh.


This is my setup, and it works great! Install was flawless as well, no trickery needed on my behalf. What problem did you run into?


I gave it another try and it works

   asdf which node
   /Users/jmfayard/.asdf/installs/nodejs/19.0.0/bin/node
   
   which node
   /Users/jmfayard/.asdf/shims/node
   
    node --version
   v19.0.0
   
   asdf local nodejs 16.18.0
   
   node --version
   v16.18.0
I probably had read the documentation in the wrong order. Thanks for your message!


Yep, I use asdf and fish on macOS installed via Homebrew. Works great.


It’s also super easy to integrate with non-POSIX shells, if you’re into that kind of thing.


I've used this for six years on macOS. It's been working nicely.


How does this compare to volta.sh?

EDIT: Besides being only for JS.


Exactly.


Can anyone suggest a comparable tool for dotnet?


I get on better with sdkman, fwiw.


Great tool. I love it. Thanks!


Does it support java?



Yes


Have a project that requires multiple versions of terraform? what? you don't want to...

clone the terraform source repo

  ~  git remote -v
  origin  https://github.com/hashicorp/terraform.git (fetch)
  origin  https://github.com/hashicorp/terraform.git (push)
check out the tag you want

  ~  git tag | tail -1
  v1.1.5
  ~  git checkout v1.1.5
  Previous HEAD position was 516295951 Release v1.1.4
  HEAD is now at fe2ddc22a Release v1.1.5
compile/install the binary in a local bin directory,

  ~  go build
  go: downloading github.com/aws/aws-sdk-go v1.42.35
  ~  install terraform ~/bin/terraform-v1.1.5
  ~  ls ~/bin/terraform*
  /home/matt/bin/terraform          /home/matt/bin/terraform-v0.13.7  /home/matt/bin/terraform-v1.1.4
  /home/matt/bin/terraform-v0.13.4  /home/matt/bin/terraform-v0.14.9  /home/matt/bin/terraform-v1.1.5
then manage a series of brittle aliases to dispatch the proper version?

  ~  which terraform
  terraform () {
    binary=terraform
    if [[ $1 == "v"* ]] && [[ $1 != "validate" ]]
    then
      version=$1
      shift
      binary="$binary-$version"
      [ -f ~/bin/$binary ] || bail 1 "missing binary $binary" || return 1
    else
      binary=/usr/bin/$binary
    fi
    dispatch --name terraform --scope --slice compilers.slice -c 35 -mh 2048M -mm 2048M -s 1M --binary $binary
  "$@"
  }
(dispatch just runs things with memory and cpu limits)

  ~  which dispatch
  dispatch () {
    if [[ $USER == "root" ]]
    then
      command "$binary" "$@"
      return $?
    fi
    declare args=(--user --same-dir -p IOAccounting=yes -p MemoryAccounting=yes -p TasksAccounting=yes)
    while (($#))
    do
      case "$1" in
        (-c) args+="-p"
          args+="CPUWeight=$2"
          shift 2 ;;
        (-mm) args+="-p"
          args+="MemoryMax=$2"
          shift 2 ;;
        (-mh) args+="-p"
          args+="MemoryHigh=$2"
          shift 2 ;;
        (-s) args+="-p"
          args+="MemorySwapMax=$2"
          shift 2 ;;
        (--scope) args+=--scope
          shift ;;
        (--slice) args+="--slice=$2"
          shift 2 ;;
        (--name) name=$1
          shift 2 ;;
        (-P) args+=-P
          shift ;;
        (--binary) [ -z "$name" ] || name=$2
          binary="$2"
          shift ;;
        (*) break ;;
      esac
    done
    systemd-run $args "$@" 2> >(>&2 grep -vE 'Running.*as unit:')
  }
and of course, then you'd need a shitty little script to call your alias when other tools decide they want to call terraform

  ~  cat ~/bin/terraform
  #!/usr/bin/env zsh
  source ~/.zshrc >/dev/null 2>/dev/null || exit 1
  terraform $@
Because that would be silly and dumb, and I totally don't do that for everything.

Don't even get me started on virtualenvs.


Why do you need that dispatch? My anecdata is that for Terraform, between v0.12 and v1.1, on the big three clouds, with quite complex configs, I never noticed a single problem that a systemd unit would solve. Curiosity, not criticism.

Ok there's criticism as well: why do you need an alias and a script that calls that alias? Get rid of the alias, just paste it into a script then?


Do you need different versions of terraform in the same directory? (I suspect not, since that would break lock files.) If not, you can use a .tool-versions file with a directory-specific terraform version.



