Thanks for this Jeff. I had built the airgradient and was slowly chipping away at the Prometheus/Grafana config when this popped up on HN a few weeks ago. It was v satisfying to skip the intervening headbanging and get to the air quality monitoring.
I feel like nix[1] is better suited than Ansible for this.
Just this week, I made an ISO for my RPi config, which has everything set up, from SSH keys to the services I want to use. All I need to do is flash it and it's good to go. Not to mention that I can more easily manage a fleet of these with remote deployment[2].
Do you have a repo/gist where you can throw up a sanitized rpi config? I don't know much about Nix but what you are describing sounds like a nice use case as I do have to periodically rebuild different RPIs.
https://github.com/LegendOfMiracles/dotnix/tree/master/hosts... (bottom lines describe the remote build logic) this is the actual config of the pi 3b... One other file also gets sourced: the defaults-nixos file in the root of the repo. And the distributed build host nix file describes creating a build user on my main machine.
I'm not sure what you mean by sanitized, but here's one example: a very simple configuration on my Pi 4 server. It's not a remote deploy, but it gives you an idea of a simple-ish configuration.
I've been wanting to use Nix(OS) on my Pi for a year now. But from what I understand, there is no binary cache for ARM yet. Building every required package on the Pi is out of the question, and I don't have another persistent system for cross-compiling (the Pi is the only always-on system I own).
That being said, I've been using NixOS on my main system and loving it! I've been through the format-and-resetup dance far too many times; NixOS is the worst system management tool, except for all the others.
> Building every required package on Pi is out of the question
Are you sure? TBH I don't know if there's a binary cache, but I installed without issue. It took longer than, say, a Debian install, but I can't say it took that long:
> ssh pi4
(nelson@pi4) Password:
Last login: Sun Aug 15 10:44:23 2021 from 192.168.86.235
nelson@pi4:~/ > sudo nix-channel --update
[sudo] password for nelson:
unpacking channels...
created 1 symlinks in user environment
nelson@pi4:~/ > sudo nixos-rebuild switch
...
copying path '/nix/store/qpd30425yap9y2mhsa4lg8cfid6703cz-glibc-locales-2.33-49' from 'https://cache.nixos.org'...
copying path '/nix/store/59mpcrm3ndbg32834038v1c6lzjh9yi7-linux-5.10.52-1.20210805' from 'https://cache.nixos.org'...
copying path
Anecdata FWIW, re 64-bit Raspberry Pi OS vs 32-bit: I have a 4-node 64-bit k3s cluster running Folding@home as a testing workload, where nodes stop responding after some indeterminate time (1-3 days). At present I'm just power cycling to recover.
I'm not worried about initial setup time, but consecutive updates.
But it looks like I was wrong and there is a cache, as long as I use aarch64. I'm not sure why the Pi Foundation recommends the 32-bit image, but NixOS should be fine imo.
stardenburden described it very well, but I'll try another go.
If you have ever reinstalled Windows, you probably had to hunt down, download, and reconfigure every single piece of software on your system to get it back to its previous state. That takes hours, and you still miss something.
With Linux distributions, things are easier. All software comes from a single source, and you can back up most config files. But there is still room for error. You can forget to install something, and config files can be scattered across multiple locations (/etc, ~, ~/.config, etc.). If any of this is missed when backing up or restoring, you have to re-learn the whole process yet again.
With Nix(OS), the whole system state (applications and their configuration) resides in a single file (or repo). It is the only thing you ever need to back up or restore. If something changed, configuration.nix is the single source of truth to show it. You can go back in time, see what changed, and restore it. If a package is removed from configuration.nix, it also automatically gets uninstalled from the system (unlike with Ansible). If the nVidia driver doesn't agree with a kernel update, you're not stuck with a black screen until a rescue disk is prepared. Most distros solve this problem with extensive testing, which obviously delays updates to the end user; on NixOS, you can simply select the previous generation from the boot menu and be on your merry way. Because of this, NixOS updates fast and requires practically no maintenance beyond setup. Any time I set up a complex service, I only have to learn it once: the whole process gets automatically scripted when I do it the first time, and I never have to re-learn it.
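For a sense of scale, a minimal configuration.nix along those lines might look like the sketch below (the package and service choices are illustrative, not from any poster's actual config):

```nix
# /etc/nixos/configuration.nix -- minimal illustrative sketch
{ config, pkgs, ... }:

{
  # Packages installed system-wide; removing a line here
  # removes the package on the next `nixos-rebuild switch`.
  environment.systemPackages = with pkgs; [
    git
    htop
  ];

  # Services are enabled declaratively in the same file.
  services.openssh.enable = true;
}
```

Backing up this one file (plus any extra files it imports) is enough to reproduce the system elsewhere.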
It's very liberating when you're not worried about your system becoming unusable. Learning the Nix language is not pleasant, but right now I haven't found anything better.
Your system state in NixOS is described in a config file. When you reinstall, you won't need to remember what packages you installed, what files you touched in /etc, or, worst of all, how you configured your text editor.
All of that state can be described with nix. So if I wanted to reinstall right now, I'd have the exact same system as before.
Naturally you'll have to put in some work to make your system 100% nix, but it's worth it.
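To illustrate the text-editor point: with the community home-manager module, even dotfile-level configuration can live in the same declarative setup. A hypothetical fragment (the specific settings are made up for the example):

```nix
# Hypothetical home-manager fragment: the editor and its
# config are declared alongside everything else, so a
# reinstall brings them back automatically.
{ pkgs, ... }:

{
  programs.vim = {
    enable = true;
    extraConfig = ''
      set number
      set expandtab
    '';
  };
}
```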
That's also why I think it's perfect for projects like this one: people will merely have to copy your config file, change nothing if deploying on the same hardware (except maybe an IP address here and there), and it should all just work™.
> Your system state in nixos is described in a config file. When you reinstall you won't need to remember what packages you installed, what files you touched in /etc and worse of all: how you configured your text editor.
I don't see how this is different from Ansible: you install an OS, then run an Ansible playbook remotely via SSH to get your Pi/whatever to the state you want.
I have an ansible setup that manages 3 Pis this way, all from one command.
Granted, writing YAML is not fun, but I'd like to see a stronger argument for why Nix is worth the effort.
I am not well versed with Ansible, but I understand it to be imperative while Nix is declarative.
Put another way, if you add a package to your playbook and then remove it, the package is still on your system until you remove it the old-fashioned way. With Nix, if something is removed from the config, it also automatically gets removed from the system.
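Sketching the Nix side of that contrast (package names chosen for illustration): deleting a line from the package list is the entire "uninstall" step, because each rebuild computes the system from the config alone.

```nix
# Before: both packages are part of the system.
environment.systemPackages = with pkgs; [ htop tmux ];

# After: delete `tmux` from the list and run
# `nixos-rebuild switch`. There is no separate "remove"
# action to remember -- the next rebuild simply no longer
# includes it, unlike an imperative tool where a dropped
# task leaves the package installed.
environment.systemPackages = with pkgs; [ htop ];
```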
Next, and imo the biggest benefit of NixOS, is generations. Every time you change or update your system, the whole system gets rebuilt as a new generation. If a kernel update broke your display, you can reboot, select the previous entry from the boot menu, and go about as if the update never happened. I'm not aware of Ansible being able to do that yet.
Last, Nix lets you create isolated islands of package configuration, where you can have your project dependencies set up without affecting, or being affected by, system dependencies. Obvious examples are Python and Ruby, but these islands, or 'nix-shell's as they are called, can be used to prepare any combination of languages, unlike single-language tools like pip or cargo.
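A hypothetical shell.nix mixing Python and Ruby in one throwaway environment (the specific packages are just examples):

```nix
# shell.nix -- run `nix-shell` in this directory to drop
# into a shell where these tools exist, without installing
# anything system-wide.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    python3
    python3Packages.requests
    ruby
  ];
}
```

Leaving the shell leaves the rest of the system untouched, which is what makes these "islands" safe to experiment in.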
I recommend giving Nix a go on any system for a few months. It's not the most pleasant learning experience (the documentation is crap, to put it nicely), but if you manage to climb that hill, the view will probably make you stay.
It all seems horribly overengineered. Use apt to install python and pip, then use pip to install Ansible, then use Ansible to install a bunch of stuff that appears to pull in docker as well.
This is cool. I've dabbled with Ansible for managing smallish projects like this but have never quite committed fully to it to get the full benefits.
I think part of the issue may be that because I'm not using Ansible every day, whenever I come back to it there is a bit of context I need to reload into my brain to get back up to speed... I guess that could be a sign that the tool is too heavy for my use case, perhaps.
I have a bunch of provisioning shell scripts that I do seem to find a bit less abstract and easier to manage. In fact I need to use one today to renew my home lab wildcard SSL domain certificate and push it to my various local systems.
I was checking out Ansible for my Raspberry Pis, but settled on bundlewrap.org eventually: much lower barrier to entry. (Also much simpler, but that's ok with me.)
I'm having a hard time visualizing how I could use only one interface. How would the rest of the devices connect? Or are you suggesting using it as a WiFi-only router?
It's the "router on a stick" method. You use a VLAN-capable switch, and connect the upstream connection (cable modem, ONT, etc) to a port on one VLAN, while the rest of your network is on another VLAN.
I’m not too versed here, does using this project to make the RPi4 handle DNS make it a router? I’d rather not sacrifice nearly half of my gigabit bandwidth…
Yes, using your connection will always cause an impact. If it's noticeable or not depends on a bunch of factors that are hard to guess about. I wouldn't run it very often if I didn't have some kind of QoS in place.
Yooo, if anyone wants to let me know on this: I'm already using an off-the-shelf load balancer because I didn't have time to figure it out on my own -- would it be possible to "tag this on" and get stats for all my connections?
When I read the title, I was hoping it would shorten my path. Alas, not so. Slightly on a tangent, but it fits with the "all things"...
I have been trying to extricate my family from the Google & Apple ecosystems. This requires various servers. At first, I was going to do something like a rackmount server with KVM, Docker, or similar virtualization. Turns out, the cost of a handful of RasPi4B8Gs (~$75 x n, where n is one server per service) is less expensive than running a full server (~$1500+).
Now just to find the right, stable software packages that make for a relatively smooth transition. :/
I currently have them set up as my DNS & filtering and DHCP servers, and am working on CalDAV, CardDAV, VPN, and file (& bookmark) sync.
That is excellent, thank you. One of the positive features of both commercial services is the sieve-like behavior on inbound items. Have you set up a sieve? Maybe this is a way to do it, and then have a VPN between my system and my AWS setup; I would have to tunnel from my side only (or the reverse?)... Thank you!
The Zero uses a very constrained CPU; you'd get at most 100 Mbps throughput over wired, much less on WiFi. But if you only used the DNS/ad-blocking capabilities, it wouldn't be too bad.
I have, sort of. At my old job, just as the pandemic started, one of my coworkers didn't have VPN access and the company ran out of Cisco licenses before he managed to get one, so I tunneled him through my Pi Zero. AFAIK it was tolerable for pulling and pushing code, but nothing more than that.
[1] https://www.jeffgeerling.com/blog/2021/monitor-your-internet...
[2] https://www.jeffgeerling.com/blog/2021/airgradient-diy-air-q...