/var/tmp, IMO, shouldn't exist--the idea of temporary files that should persist beyond a reboot is tech debt; those files should exist under a proper /var/{cache,lib} directory. Podman uses this directory and it drives me nuts.
/tmp/ is the ram-backed version for small files, while /var/tmp/ is the disk-backed version for large files and I also think both should be clean at bootup.
In my personal opinion the reaping period is way too long, especially for /tmp/, as only a few people really have an actual uptime that long, but it does make some sense for /var/tmp/ as it currently is "persistent".
Small you say? The block size in that device is the size of the memory page: 4K by default. That's 8 times the very typical 512b for persistent storage.
I've been bitten many times when trying to start in-memory only VMs with a disk image that measured smaller than available memory only to run out of memory on boot :)
Let's say it's not really for small files. It's for few files. ;)
I'm OK with a RAM-backed /tmp getting cleaned every boot simply because that's part of the technical tradeoff, however "on boot" is a bad trigger because its behavior is unreliable and can easily backfire and harm a user.
A classic example would be if the power is interrupted and the computer restarts. Unless the user is present and savvy enough to block it somehow, all of their short-term "throwaway" project/data just got deleted before they actually finished with it.
In other words, the difference to users between "I won't need this after tomorrow" versus "I won't need this by next boot" can be very important.
> Unless the user is present and savvy enough to block it somehow, all of their short-term "throwaway" project/data just got deleted before they actually finished with it.
That's on them.
I use /tmp like this, because I'm lazy and don't want to have to mess with a directory I need to routinely clean up, but I know that if the computer crashes, it's gone. That's on me.
If you don't like this risk, you need to work in persistent storage.
/tmp is for programs' temporary files. Manually using it like this is abusing it. That's fine, it doesn't hurt anyone beside the user doing it, but they need to bear the consequences of doing it.
> If you don't like this risk, you need to work in persistent storage.
Which is literally what /var/tmp is and what the parent is advocating for.
However, clearing /tmp at boot is not the greatest technical choice. There's nothing really special about rebooting the system. The idea that files in /tmp are kept around for anywhere between minutes, hours, and days makes for bad policy which systemd-tmpfiles fixes quite elegantly. You shouldn't lose your temp file after 30 seconds because your machine lost power. You should be able to say "files created here are cleared after $short_time of inactivity."
IMO setting the expectation of 'won't survive a reboot' is more consistent. But even that fails with private tmp files that could be tied to ephemeral units.
The article we’re discussing is exactly about Debian changing that for the next release. (The only thing that will be changing in this respect is the default, and possibly the configuration at the moment of upgrade to the next release. Debian has long allowed users to configure both RAM-backed and disk-backed /tmp, and both will remain possible.)
Actually, no. The XDG specifications are intended for desktop environments, the focus is on end users throughout the specs. System services and their locations are out of scope. The standard for filesystem layout was the Filesystem Hierarchy Standard [0], which predates the XDG by about a decade. Although Debian-before-multiarch was the distribution that most closely adhered to the FHS, it was not a Debian-only document.
That being said, all my systems have /var/tmp as just a symlink to /tmp, regardless of what the FHS may have to say about it -- and it's cleaned weekly, not just on reboot.
I feel like this is the moment where these discussions about old filesystem layouts start to go in circles.
Yes, system users tend to not have a home directory. But you are fully able to create one for a new system user if this is what you want to do.
You are also able to create all kinds of temporary directories and read-only filesystems for a process (e.g. using systemd) if this is what you want to do.
Yes there is a lot of fragmentation and no real standard, which is a little disappointing, but you sure have options to work with. And these options do not have to involve any legacy-ish filesystem components like /var/tmp that nobody knows the purpose of.
> filesystem components like /var/tmp that nobody knows the purpose of
The purpose of /var/tmp is temporary files that should survive a reboot. A potential example where that would make sense is a partial download file. It is only needed temporarily, but if a reboot happens while the download is in progress, you probably don't want to have to start over.
Also if /tmp is or might be a tmpfs, it is probably desirable to use /var/tmp for larger temporary files that don't fit in RAM.
/var was originally for network booting machines. / is read-only fileshare and /var is for user storage. /home -> /var/home too. It's campus wide workstations installation type of thing.
So, /var/tmp is kind of wrong, and so is /var/log/messages on a laptop, and besides the entire /var is useless on most systems. But it's not a huge waste of resources either, so it's left as it is along the rest of directories under / for years.
/var wasn't originally intended to be a system directory, it's just a variables directory. Almost by definition it's writable.
They could merge /var/tmp and /var/cache, with the intention to discourage future use of /var/tmp. With users, programs and admins already using /var/tmp as a persistent cache, a name change seems like the best solution.
Nah. The XDG base directory spec was supposed to solve a ton of things 18 years ago, except none of them are hard-hitting user problems most folks really care about most of the time, unless you are really anal about hidden folders polluting your home directory (most people aren't), so it ended up with only partial support, thanks to apathy from some package devs on Linux about implementing it.
Honestly, we should probably abandon the hope that we will ever get full universal adoption. It stinks even worse with the half-adoption state we are in now, with some apps writing to those directories and other apps not. If it wasn't for Arch Linux folks trying to drive apps to adopt it more, I don't think it would be adopted much at all. Hell, default Debian/Ubuntu still doesn't set XDG_* directory ENVs by default (some flavors do), and some apps ignore the prescribed "if not set, use this default" nonsense in the spec and do what they have always done because it doesn't break compatibility for users.
Part of the spec stinks, too, like the split between the cache/config/data directories, where the spec has no written rules about when anything is expected to be cleaned up, or about what can or should be backed up by the user.
Let's move to containerizing most apps, putting everything in little jails so we don't have to deal with where they want to write to. They can have their own consistent and expected behaviors on their own island. Snapcraft, Flatpak, or any of a bunch of other solutions are already available to solve this problem. You don't have to worry about what some app left behind or wrote to your system when it's all contained.
> Hell, default Debian/Ubuntu still doesn't set XDG_* directory ENVs by default (some flavors do)
You are supposed to leave them unset if they match the defaults.
Programs ignoring the spec is another matter and should be fixed in those programs not by needlessly bloating the environment of every process with meaningless data.
> It stinks even worse with the half-adoption state we are in now, with some apps writing to those directories and other apps not.
It stinks about half as much as before.
> Part of the spec stinks, too, like the split between the cache/config/data directories, where the spec has no written rules about when anything is expected to be cleaned up, or about what can or should be backed up by the user.
Caches don't have to be backed up and can be deleted if you need space. Most people will want to back up configs. What other data should be backed up is a question different people will answer differently. And containers don't solve this.
I actually back up my Gradle cache in particular because things have a tendency to disappear from the internet sometimes, and permanently break builds of older stuff.
The build dependencies are the problem. The build process downloads them from hardcoded internet URLs and they are cached for 30 days. By backing up the cache, I can (probably) rummage around it later to find anything that's been deleted from the internet.
That's bad too, but at least it's not right in my home folder! When I have to use Windows, I basically just avoid using Documents for files. (But my daily driver is an ancient version of OS X, which has ~/Library for these things.)
Oh, mac has the same issue and for the same reason: the same programs will create the same garbage.
~/Library also isn't used the way it was supposed to be; it was a good idea, but mostly ignored. Basically the same idea as XDG_(CONFIG|CACHE|DATA)_DIR.
What's worse is that many projects ignore any effort to move them to the XDG schema. They take the approach that they've always done it this way, it works, so why change it.
My home directory is much cleaner because I've made a concerted effort to keep it that way, in a couple of cases even recompiling software. I found this much more difficult on Linux.
Or even go take a look at Nix/NixOS and how they pull it off in another way. They have hermetic isolation down to a science.
Or heck, just look at what Android does, running each app under its own uid/gid, sandboxing 3rd party code, and keeping each app from reading and writing outside their little jails. Can't pollute a user directory or even write to /tmp if your user can't even enumerate it.
Hell, we even built a whole sandboxing, capability-based security model inside of Fuchsia at Google, which I worked on for 5+ years.
I've been building OSs for 20+ years, between Fuchsia and Android at Google and mobile/embedded products at Texas Instruments, so I hope I know what I'm talking about.
Nah. Your objections are rooted in the very limited definition of what an OS is and what a user application model is that fits it. There is no reason why each process/app can't be sandboxed. In fact, it should be for security (we did it in fuchsia). It's actually the way things work with apps from the app store on MacOS in a lot of ways where you can't escape your jail except through what is explicitly entitled.
General purpose just means that it's generally useful for a wide range of applications. Android is that. Hell, you can find Android running small appliance server infrastructure and powering IoT devices. Even in 2024, iOS/iPadOS is general purpose at this point, and they have VERY different application models from legacy app models you find on Windows and Linux. You wouldn't not call NixOS a general-purpose OS, and it's like flatpak for literally every process down to things like bash, grep, and vim.
Snaps are fine. The aversion only comes from how it was introduced to people in Ubuntu, but conceptually it's great. cgroups wrapping user processes to box them is not only good for security but also for stability and solving dependency versioning issues. It's brilliant. It's similar to what we did in Fuchsia at Google (we took it to another level because we had no legacy to deal with).
And sure, maybe I have bad judgment on some things. I contributed a ton to GNOME in the early 2000s both code and ideas that were horrible in hindsight, but I'm not still stuck in an outmoded mental model for thinking about my user environment.
AppImages are not containers at all. They bundle up the program data into a single archive but do not do any sandboxing, and leave programs to write their user/config/cache files to wherever they would be written without AppImages, i.e. in xdg-basedir locations. As it should be.
> Or even go take a look at Nix/NixOS and how they pull it off in another way. They have hermetic isolation down to a science.
NixOS's packaging is also completely orthogonal to the xdg-basedir spec.
> I've been building OSs for 20+ years, between Fuchsia and Android at Google and mobile/embedded products at Texas Instruments, so I hope I know what I'm talking about.
None of those are desktop operating systems. Please stay away from those with your anti-user opinions.
Or Qubes, which goes further and features per-app VMs. Snaps was foisted on the community which was then unwelcoming of it, but per-app isolation isn't the worst idea.
Yeah, I had upvoted the comment, and retracted the upvote when I got to the last sentence. Nobody who has anything kind to say about snaps has any business anywhere near my desktop.
The essence of the problem is that there's no standard pathname for a personal directory that's guaranteed to be on local disk, even if $HOME isn't. Consequently, people have relied on /var/tmp/$USER for this. There are realistically affected users who can't change the new defaults.
Cleaning up /var/tmp on a timer is relevant to this academic environment (desktop-based research computing):
1. Each Debian machine is used by only one graduate student, but students do not have root access.
2. Today, /var/tmp is the only persistent local directory where the student has write access ($HOME is on a network filesystem backed up by the university).
3. Within the student population, there is strong institutional memory that /var/tmp isn't backed up by the university and isn't extremely robust (e.g., RAID), but also that nothing there is automatically deleted.
4. Students use /var/tmp for hundreds of GB of data from simulations that take days or weeks. $HOME is too small and too slow for this.
5. In practice, less than 1% of students lose data through disk failure, accidents, etc.
6. A much larger fraction of students will lose data when sysadmins, who didn't get the memo about the /var/tmp change and thus haven't addressed the ingrained institutional memory, deploy new Debian machines.
7. Some of the students who lose data won't graduate on time.
> The essence of the problem is that there's no standard pathname for a personal directory that's guaranteed to be on local disk, even if $HOME isn't. Consequently, people have relied on /var/tmp/$USER for this. There are realistically affected users who can't change the new defaults.
How is that not a site specific problem they introduced? There's no standard pathname because there are valid reasons why you want to have nothing on the local system, such as when it doesn't have a local disk.
If you (read as "the admins") have configured the system where there's very few acceptable locations to store local data, and that's a specific need, then it's up to the admins to provide a solution to the problem, not the Debian distro, which is flexible enough to handle this fine.
It's not that hard to create your own location for data if you need to, and symlink to it from other locations if required. Need /tmp/$USER to be a local disk location while /tmp is a ramdisk? Create a script that sets up the correct local location and creates a symlink to it in /tmp, and make sure it's run on boot and maybe once daily. Both cron and systemd solutions could work for this. Worried that user tmp files will be cleaned out when they shouldn't be? Put in the correct exceptions for tmpwatch or whatever Debian uses, or disable that service.
Not only is this not rocket science, it's literally the job of the admins that design the OS deployment so that it works for the intended use cases.
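For concreteness, a sketch of the kind of boot-time script described above; the /srv/scratch location and the loop over /home are placeholders, not anything Debian ships:

    #!/bin/sh
    # Recreate per-user scratch dirs on local disk and symlink them into /tmp.
    # Run from a cron @reboot entry or a oneshot unit ordered after local-fs.target.
    for user in $(ls /home); do
        dir="/srv/scratch/$user"        # assumed local-disk location
        mkdir -p "$dir"
        chown "$user": "$dir"
        chmod 700 "$dir"
        ln -sfn "$dir" "/tmp/$user"     # re-create the link after /tmp is wiped
    done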
The distribution can't be held to guarantee a particular path is on a local disk because it's not up to the distribution. If it were about the distribution's defaults being a reliable option then /home would already be on local disk.
Why shouldn’t the admins provide an actual solution to this problem? This specific scenario represents a tiny percentage of Debian users. Why should the distribution take pains to work around such a specific issue, when the issue is easily avoided entirely to any admin who reads the upgrade docs?
> Couldn't students address this with their local admins?
They could try to, sure, but normally local admins in the unis don't cave in to any demands from the students. Sometimes they don't agree even to the demands from the faculty!
> They could try to, sure, but normally local admins in the unis don't cave in to any demands from the students. Sometimes they don't agree even to the demands from the faculty!
Yep can confirm; My uni's IT department recently began blocking all inbound ssh traffic for the entire university network (including the data center ranges), and shot down any requests from students, faculty, clubs, and enterprises that asked to have an IP or two whitelisted so they could access their infrastructure from off-campus (except ITs own services were whitelisted; can't be inconveniencing them now)
A subset of us that got refused formed a mini anti-IT 'cabal' of sorts, eventually found an oversight in how they implemented the block (it just pattern-matched the initial ssh handshake version string; you can change it by compiling openssh from source), and have since been on our merry way, with IT none the wiser.
But hey, at least the security guys can sleep soundly at night thinking we're still being inconvenienced by their arbitrary decision. Clearly they must think everyone is as incompetent at locking down a network's security as they are.
Reminds me of when my uni blocked outgoing connections to unsupported protocols (including Minecraft servers). It did DPI on most ports, so we couldn't just put it on port 80 as it would recognise it as not http traffic, but the HTTPS port seemed to get excluded and just assumed as encrypted stuff, so we used to just run our Minecraft servers on port 443
We are in a sort of similar situation, but our solution was to put a Raspberry Pi in a windowsill that runs a reverse SSH tunnel (through a server in a VPS somewhere).
Why should that <1% of users be at risk of not graduating because of local data loss?
Teach students about backups and make network storage available for the code and recoverable checkpoints to restart failed simulations/computations.
If reliable, available storage isn't available for all students to graduate then allocate more money to technology resources. Speed is much less of an issue if it's only used for periodic checkpoints.
I wish Linux (and other Unix-like systems) had an API to let you create named temporary files which are automatically deleted - by the operating system - when a process exits.
If you create an anonymous temporary file, then it will be deleted when the last file descriptor to it is closed, but anonymous temporary files are difficult to use (even given access via /dev/fd aka /proc/self/fd). You can delete it using atexit() but that doesn't get run if the process terminates abnormally (core dump, kernel panic, kill -9, power outage, etc.)
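For what it's worth, the closest you can get today on Linux is open(2) with O_TMPFILE, or the old open-then-unlink trick, which gives you delete-on-last-close but no stable name. A shell sketch of the latter:

    # Create a file, keep fd 3 on it, unlink it immediately.  The data lives
    # until fd 3 is closed (or the process dies, however that happens); the
    # only handle left is /dev/fd/3 aka /proc/self/fd/3.
    tmp=$(mktemp)
    exec 3<> "$tmp"
    rm -- "$tmp"
    echo "scratch data" >&3
    cat /dev/fd/3       # still readable via the fd (Linux/procfs behaviour)
    exec 3>&-           # close: storage reclaimed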
Maybe something like a pidtmpfs where the top-level directories are pid numbers, you can create any files or directories you want under them, but it all disappears as soon as the process with that pid exits. Actually, pids are a bad idea due to pid reuse, you want something like a process serial number or a process UUID which doesn’t get reused.
Another somewhat more flexible idea would be that you can create arbitrary top-level directories, but they - and all their content - are automatically deleted as soon as the last open fd to them is closed. That way you could have multiple processes sharing a temporary workspace, and the workspace could survive the failure of any one of them, but if they all fail it gets deleted
With mount namespaces, you can create an anonymous tmpfs mount that gets detached and deleted once nothing refers to it any more, and add whatever files you want to that. It can be accessed using the *at() syscalls, or alternatively using an initial fchdir() to make it the current directory.
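Roughly, with util-linux's unshare (needs unprivileged user namespaces when run as a normal user; the size is arbitrary):

    # A private tmpfs that vanishes when the command exits.
    # -m = new mount namespace, -r = map the current user to root inside it.
    unshare -m -r sh -c '
        mount -t tmpfs -o size=256m tmpfs /mnt   # visible only in this namespace
        echo hello > /mnt/scratch
        ls -l /mnt
    '
    # once the last process in the namespace exits, the tmpfs and its contents are gone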
Too bad that unprivileged namespaces have been buried by the big-name distros, though.
Well, perhaps I'm being unfair talking about distros plural: I'm mainly thinking of how the latest Ubuntu has disabled unprivileged user namespaces by default [0], seeing them as too big an attack surface to let everything use them.
It looks like Debian used to have them disabled, but then re-enabled them for the sake of web browsers [1]; I'm sure they'd re-disable them if they found some solution similar to Ubuntu's AppArmor one. For other distros, it's difficult to find up-to-date information on whether they enable or disable unprivileged user namespaces, since many have flipped back and forth over the last decade.
All that is to say that unless your program is given privileges itself (e.g., Docker), or can wheedle user-namespace permissions out of the packagers, there's no chance you'll be able to distribute namespace-using code and have it work consistently for most users.
It seems that until 2020, Debian used to block them as well [0]. So perhaps some of my perceptions are out of date. In any case, it means namespaces are unreliable to use for the typical Linux program.
Unfortunately it only works for systemd units, as far as I am aware.
Consider this use case: I have a test program which executes some unit tests, it creates some temporary files. I want the temporary files it creates to all be removed when it exits, even if it core dumps. But while it runs, I want those files to easily be accessible to subprocesses it spawns. I don't think systemd addresses this, because my unit test runner isn't going to be a systemd unit.
So really I was talking about a generic facility an arbitrary program could use, not just something limited to systemd units only.
> Consider this use case: I have a test program which executes some unit tests, it creates some temporary files. I want the temporary files it creates to all be removed when it exits, even if it core dumps. But while it runs, I want those files to easily be accessible to subprocesses it spawns.
For this precise use case, what if you deleted the last run's temporary files at the start of the test program?
It even works when you write a temp script and execute it in /dev/fd/...
However since you can't control the file name in that case you cannot control argv[0] for such temp executables which may be a problem for BusyBox-style tools.
Have you encountered other issues with using that approach for named but auto-cleaned files?
Example: suppose I am testing a program which reads in a directory tree. I have my test driver program create a temporary directory, with multiple files and subdirectories in it. I can’t use /dev/fd for that, since it doesn’t support directories
Related example: testing a program which expects a file name to match a certain pattern; can’t do that because /dev/fd file names are just numbers
Well if you can call a syscall you don't need a named file. You can call fexecve().
One case where I need an executable named temp file is when I need to pass that named file to another process which in turn will call that file (so I don't have control on how that other process will call exec)
> Well if you can call a syscall you don't need a named file. You can call fexecve().
Sadly you can’t exec an fd on macOS.
I don’t think Apple is going to change that, because I think it causes difficulties for their security/codesigning/sandboxing infrastructure, and I think they don’t see the difficulty and risk of making it work with that as being worth the rather limited benefit
I have a magic tmp dir I set up using a one-line find command that runs every minute from cron, that deletes anything in that dir with an atime older than some small value like 30 seconds or 2 minutes. That sounds expensive but since it's always running, the find work is always small, and this is all in ram tmpfs usually too, because ideally you want atime enabled in this dir, and ideally you do not want atime enabled most anywhere else.
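Something along these lines; the path and TTL are placeholders, and since -amin only has minute resolution the 30-second variant would need a different test:

    # crontab sketch: every minute, reap files in the fire-and-forget dir
    # that haven't been accessed for ~2 minutes.
    * * * * *  find /srv/fftmp -mindepth 1 -type f -amin +2 -delete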
It is used by all the application software for "fire & forget", so the app code can just generate the file and not have to hang around (and also, hang) while waiting to delete the file some unpredictable time later. The file may or may not ever be used in some cases, and if used, it's used by some other process, and the writing process doesn't know how long that will take unless you rig up some way to signal.
Every user of the special directory knows this special property of it, as this is its entire purpose in the first place. It's not /tmp so no surprises. It eventually got used for all kinds of other things too because it's just so handy.
It works best with atime, which means mounting the filesystem with atime enabled, which I don't like to do in most cases, so ideally you make this its own tmpfs in RAM, and then the atime doesn't hurt much.
But it also works well enough with just ctime & mtime and a slightly longer TTL.
Even if a file is large and the user is on a slow link and the file is still downloading when the TTL expires, and the filesystem does not have atime, and so the reaper does actually delete the file while still in use, it's still ok most of the time even then, because the actual process that has the open handle still has it and can continue downloading the file until they close it, even while it has disappeared from view for the rest of the system.
But with atime, it's like magic, the file just naturally lives exactly as long as anyone is interested in it, and gets reaped 2 minutes after that, with no application process needing to keep track of it. The reaping happens regardless of crashed processes or reboots, graceful or ungraceful, etc.
It's something I did almost the first week over 20 years ago at a company I worked at from '99 to about '22, and got used for practically all temp files, and the normal tmp became the special case you only used in special cases. (actually the application software really never used the global /tmp, there were various other customer-specific and app-specific dirs)
Basically it was like an OS feature (for us) in that every system (of ours) always had this magic tmp dir.
This worked fine on old SCO systems without even GNU find with its handy -delete, let alone cgroups or even a RAM fs. In fact it started there.
/var/tmp being cleaned on boot is probably fine despite contrary standards in the past.
But automatically deleting files - in either of those - while the system is running is going to break all sorts of things.
As a random example, `chromium --temp-profile` will have half the profile deleted (since some particular files aren't accessed frequently) while the browser is running. This looks designed to cause system-level UB.
Note that locks on directories are pretty rare; locks on a file within the directory are the de facto standard, on the assumption that only cooperating programs will access them.
Hanging my comment here. Sorry to slap it into your universe.
My knowledge says... is it fine?
My RAM is for applications, and I buy RAM accordingly.
Most in this discussion seem to miss something quite vital and important. Linux is excellent at caching most-needed, most-used data in RAM, eg buff/cache in 'top', and that should be given higher priority than some temporary file a user might slap in /tmp/.
50% of my RAM, randomly flushing out buffers and cache! Buffers and cache, which stores commands I use frequently, libraries often used by applications, and so on! Any server that is reasonably loaded, will show RAM fully used by buffers/cache, excluding active RAM requirements. Now I'm to give 50% of this up, having those buffers/cache flushed for... someone unarchiving a tarball?!
And my swap file, something for emergency application RAM usage, now being stolen by /tmp. We're basically taking the most expensive storage space on a system, RAM, and relegating it to... a few log files, someone untarring software, and also...
Hard limiting its size to RAM requirements?!
This is sheer madness. It smacks of people with desktops, with enormous amounts of free RAM, making decisions for servers.
This change is bad.
Thoughts:
* It is not relevant what systemd does, endless things in Debian and other distros diverge from what systemd does
* Debian isn't aligning with 'other distros' when doing this, as there isn't a consensus here. Debian isn't some hold out, in fact there are far, far fewer installs elsewhere doing this.
> Now I'm to give 50% of this up, having those buffers/cache flushed for... someone unarchiving a tarball?!
Nope, unarchiving tarballs doesn't write the tarball content to /tmp.
> It smacks of people with desktops, with enormous amounts of free RAM, making decisions for servers.
* Servers usually have way more RAM than desktop devices? Unless you mean VMs, in which case the VM can't decide what device /tmp lies in anyway.
* in any case it will be trivial to just not use it for your servers, if you prefer /tmp to be on disk. Just like I already used tmpfs for 2 decades in Debian, it is a single line in /etc/fstab either way.
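The line in question looks roughly like this; the size cap is whatever you want:

    # /etc/fstab -- RAM-backed /tmp, capped at 2 GiB
    tmpfs  /tmp  tmpfs  defaults,size=2G,mode=1777  0  0

Going the other way under the new default is the `systemctl mask tmp.mount` the article mentions.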
Eh? Yes, if I untar a file in /tmp, it will go there.
My whole post talks about putting files in tmp!
And this change is a performance killer for that reason. Taking all that juicy, super fast RAM away from system caches and buffers, which yes is a performance killer.
You want sensible defaults. Telling people they can change a dumb default to something smart, isn't a viable response.
I mean my god, you say you've been using linux for 20+ years, so you surely know, doing this for a performance gain is absurd! We have mega fast SSDs, and an elegant filesystem layer, writing to disk is immensely fast compared to spinning disk days.
We gain almost nothing here, and lose RAM caching of libraries, data, buffers, which the kernel is very, very good at these days too.
It's a dumb default. Immensely so! And statement like "you can just" aren't the point.
This slows almost every single system down.
You know, this could be done right. Instead of stomping on everyone, just create a new tmpfs for just this purpose. Let people use it if they want.
And why on earth suddenly start forcing file cleaning. Debian has not had that default in decades, changing it now because someone else called redhat does it, isn't a valid excuse.
Not to mention, if you did have a large tmp file that you actually want to persist to a different directory that is on disk, you now have to wait for the damn file to be copied from memory to disk instead of a single atomic rename operation; this can have major implications depending on the context.
I was really impressed how Boccassi "project managed" this change, gathering feedback, addressing feedback where necessary, and continued to push the ball forward even under lots of objection. Many people would have just given up and the status quo would have remained for another couple years.
According to the article Boccassi wanted concrete scenarios where things might break, but his justification for making the change was "upstream and other dists".
Systemd now essentially controls Debian: both the OS itself and, through Boccassi, the development.
People were more interested in bitching about edge cases where maintainers / projects / authors were doing things they should not have been doing in the first place than they were about addressing those issues, so he addressed them himself.
The man literally put up when others would not shut up.
The fact that YOU don't have a particular use case doesn't necessarily mean much. Billions of people don't use linux and this all doesn't matter to them.
But perhaps 30% could be in what you call an "edge case", since your statistics are based entirely on yourself.
Yeah so much naysaying. Especially the people storing important files in `/var/tmp`! It's like complaining to the council for collecting bins because you like to keep your keys in them. Wtf.
It's my bin. I paid a lot of money for it. I can keep my keys in it if I want and if it's convenient for me. I don't work for the council and I don't care what their agenda is.
Why not just have the installer _ask_ me what configuration I prefer? Is there some reason you have to force the change, announce a victory, then make me to go mucking with "defaults" after the fact?
You can already change the configuration in the installer, it's called manual partition setup. The installer doesn't need a clicky screen for every setting that may have changed over the last 30 years.
So if I do the manual partition setup it's not going to run the daemon that automatically deletes files off those partitions during runtime for me? It sounded from the article like this change is more comprehensive than just the partition.
I doubt it'll change the auto-cleanup. It probably is configurable with dpkg-reconfigure, or maybe it's just a service that needs to be disabled. Post-installation configuration is still a thing, and it changes every version. That's what versions are for.
> Why not just have the installer _ask_ me what configuration I prefer?
Because nobody bothered to actually put in some work to implement that. As I've said on the ML, if somebody does the work, I'll review it. But it's one of hundreds of different settings, and it's obviously not worth anybody's time to do this work, as it's largely inconsequential and trivial to configure via the supported config files. Complaining on social media is of course cheap enough that we get plenty of it.
Here the council* provides the bins that they collect on rubbish day. If you want to provide your own bin they'll leave it alone, if you leave the council bin there on rubbish day they'll take the contents away.
Likewise if you make /akirastmp, the OS will leave it alone, but using directories the OS clearly treats as temporary for things that aren't temporary should be the configuration that requires manual setup, since it's the unusual one.
> Why not just have the installer _ask_ me what configuration I prefer? Is there some reason you have to force the change, announce a victory, then make me to go mucking with "defaults" after the fact?
It's the Poettering way. And GNOME, for some reason.
It's a recent part of our culture to look down on the "uninitiated" and treat them with extreme infantilizing pity almost bordering on actual bullying. I call it the "temporarily embarrassed future nobel prize winner mindset."
I put it down to the level of monopolization of industry and to a certain extent our culture. These people mean well and they're mostly just reacting to a really baffling labor market in the face of a failed and entirely captured internet revolution.
With the right pair of eyes you can look across the dunes of github into the early 2000s and see where open source culture peaked, crashed, and rolled back.
I had the chance to speak with a GNOME developer about this. Apparently GNOME is trying to be the little old lady desktop that's intuitive for grandma. Which is fine - if it's marketed as that, and not put on the same level as KDE or even tiling WMs, and not trying to influence the entire ecosystem.
Eh I've lost files because I was working in a ramdisked /tmp when the computer lost power. That's what one gets when working in a dir called "tmp".
Some people will see their stuff vanish from /var/tmp and that will be their day of learning. I guess they won't have a backup of files in there either. They were one misfire away from loss anyway.
Sure it feels different when the distro just ran over your misconceptions. On the other hand, when I encounter a stray tmp dir, I assume everything in it can be deleted with negligible consequences. So I tend to rm them without even a glance inside.
Because that increases complexity, and introduces new ways the installer can do something unexpected or fail. There's overhead to code; it has maintenance costs.
If half a dozen people in your town of 10,000 use bins for storing soup, but one year the council procures bins with holes in the bottom to keep rain from collecting in them (making them hard to move, and also providing stagnant water for mosquitoes, ie something that benefits nearly everyone)...
...what would be your opinion of a blitzkrieg of online and media criticism of the council for not considering the needs of soup binners...by people who do not store soup in their bins but are screeching about the needs of the people who do and they say will be affected by the change?
And then someone on the council has said "alright, fine, look...I doubt there's that many of these people. I'll go talk to them, who are they?" and the screechers admit they don't actually know any soup binners, so the councilmember looks them up and goes to the house of every soup binner and either plugs the holes in their bin for them, gives them proper soup pots, or prints out a list of soup pots they can find on amazon...or finds out that they say "well gosh I had a soup pot in the attic but I never got around to using it, I'll just switch. Thank you for the heads up that the new bins will have holes in the bottom."
I mean, there was that period of time where amazon and other delivery companies would consider a bin a 'secure' place to put parcels while the recipient was out, even on bin days...
I've had cases where /tmp was full, usually by some runaway program that didn't clean up properly. The first indicator is usually that Bash auto-complete fails, followed by no one being able to SSH into the machine. I suppose mounting /tmp in RAM means this will happen more often (as RAM is usually smaller than hard drive space), but I don't see this becoming a giant issue.
Now, auto-cleaning /var/tmp... that's going to cause SO MUCH data loss across people I know. I see a parallel with keeping files in your Desktop: yes, that's not what it's there for and ideally you'd store things properly, but you don't go around deleting people's documents just because you don't like it!
I quickly ran a backup just now, but I would've been one of the people slightly affected.
I symlink some of the desktop cache files to /var/tmp, but since that never got deleted for me (on Debian and Manjaro) I also started linking things like my browser profile there. I kind of knew that /var/tmp may not be the correct folder for it, but if it works, it works you know.
A bit of background on my setup, for why I even want to create such symlinks. My home folder is on an automatically mounted USB stick that I transfer between systems (laptop and desktop). On shutdown, I rsync the home folder to a local backup folder, which gives me a small distributed backup system. Using the same home directory between systems with different distributions and CPU architectures (I'm using the pinebook pro arm laptop) works surprisingly well. But I'd rather not stress the USB stick with a bunch of frequently accessed cache files, hence the symlink. I end up also symlinking the browser profile, because that caused version incompatibilities between systems, and I'm ok with having local browser settings/history on each device.
I sort of do the same thing. See, I like my homedir on an NFS share (my philosophy is: build one really good drive, RAID, backups etc., then everything can use that, rather than trying to put a good drive into everything). However, there is stuff I don't really care about but would like to be fast and don't want eating my bandwidth, caches mainly. Now unlike you I tried to stay out of tmp, so I made a /var/home/ on a fast local drive that vaguely mirrors /home and symlink the caches into it. That way it makes a good dedicated space for when I do want a fast local filesystem.
As an anecdote, I had a hard time for a few years after I stopped using IRIX; I wanted (or at least my fingers wanted) the home directories to be under /usr/people.
On the original Macintosh, the idea was that you'd move files you're currently working with onto the desktop, then return them to their original folders when finished.
The original Mac Finder even had a "Put Away" command to support this workflow.
I believe this idea partially arose by analogy with physical workflows, and partially as a way to more easily work with related files spread across multiple floppy disks: desktop items remained visible when a floppy was ejected, and the system would prompt for the floppy by name if you attempted to access a file on an ejected floppy (ejected floppies themselves remained visible on the desktop until you dragged their icons to the trash).
[Pedantically, the original (System 1.0) Mac Finder had a more general, slightly buggy "Put Back" command, removed in System 2.0, and finally replaced with the "Put Away" command in System 2.1. Source: past experience verified through testing at [1]]
> The /var/tmp directory is made available for programs that require temporary files or directories that are preserved between system reboots. Therefore, data stored in /var/tmp is more persistent than data in /tmp.
> Files and directories located in /var/tmp must not be deleted when the system is booted. Although data stored in /var/tmp is typically deleted in a site-specific manner, it is recommended that deletions occur at a less frequent interval than /tmp.
> The directories /usr/tmp.O, /var/tmp, and /var/spool/uucppublic are public directories; people often use them to store temporary copies of files they are transferring to and from other systems and sites. Unlike /tmp, they are not cleaned out when the system is rebooted. The site administrator should be even more conscientious about monitoring disk use in these directories.
This is also true for Solaris, which I used to admin many moons ago. So I'm not sure where the idea that /var/tmp gets cleaned on reboot came from since I have always understood it to be fairly static.
Every unix does its own thing, you just have to know your system.
In OpenBSD, /var/tmp -> ../tmp and /tmp is cleaned periodically and not preserved across reboots, but there are some specific exceptions and you have to dig into /etc/daily and /etc/rc scripts to know what they are.
I don’t like it, I tend to use /tmp for large files or directories that are just that - temporary.
Now they will either fill up my RAM or require a large swap partition (which I usually don’t have as it’s otherwise wasted space).
I really like making /tmp the default destination for downloads, so either I need the file and move it elsewhere, but usually it’s an archive and I just want a file from it. Saves me from an ever growing Download folder.
Those files are most dispensable and should not consume precious RAM.
Some file systems are even adding specific tmpdir support where fsync() is turned into a noop.
So they have all the advantages of tmpfs without eating into precious RAM/Swap.
So… don’t keep large files you want to control the lifecycle of in directories you don’t own? If you care about it, stop putting it in /tmp. You were always setting yourself up for a bad surprise by doing this, and now you’re surprised?
I might download an ISO or check out a repo I want to build just once. Or copy some (large) files there because I run a script on them and preserve the original in case something goes wrong.
I don’t care about those files once I’m done with them - so far they would only be gone with a reboot, which aligns well with my definition of done.
If you’re downloading large files (so large you’re concerned about an in-memory tmpfs) to boxes that you don’t own, stop just leaving them around in the first place.
Sure, /tmp is cleaned on boot. But if the host is rarely rebooted it’s pretty inconsiderate to just leave enormous quantities of unneeded crap lying around in shared directories.
/tmp isn’t supposed to be a garbage dump. It’s supposed to be for data that’s actually in use but transient or relevant to running processes. You’re still expected to be a good citizen on shared hosts and clean up after yourself when you’re done.
It isn't cleaned up at boot IIRC. Unless you leave your computer off for 30 days and then come back [1] :).
But it shouldn't be too hard to write a relatively simple systemd unit file that does that at boot. After all, the main part would be `Requires/After=local-fs.target` and something like `ExecStart=bash -c 'rm -rf /var/tmp/*'` I think (you'd need to double check what exactly to do if you want to do this).
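As an unchecked sketch of what that could look like (the unit name and the find invocation are mine):

    # /etc/systemd/system/empty-var-tmp.service
    [Unit]
    Description=Empty /var/tmp once at boot
    Requires=local-fs.target
    After=local-fs.target

    [Service]
    Type=oneshot
    # find -delete handles dotfiles and odd names better than rm -rf /var/tmp/*
    ExecStart=/bin/sh -c 'find /var/tmp -mindepth 1 -delete'

    [Install]
    WantedBy=multi-user.target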
There's also systemd-tmpfiles for this exact purpose (well, it's a more general purpose create-a-directory-structure config system, but this is one of its usecases).
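For example, a drop-in like this (a sketch; see tmpfiles.d(5), and the 30-day age mirrors the /var/tmp behaviour mentioned above):

    # /etc/tmpfiles.d/var-tmp.conf
    # 'D' empties the directory at boot; the trailing age lets the periodic
    # systemd-tmpfiles-clean run also prune entries older than 30 days.
    D /var/tmp 1777 root root 30d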
Tmpfs is great. If you're a compiled language person and the build dir isn't in a tmpfs yet, give it a try. Faster than nvme and less prone to burning out from lots of writes.
Defaulting to /tmp in tmpfs means I can remove a line from my post install setup script. That doesn't matter much. But it also means less divergence between my system and the default, so there should be a reliability improvement from other people stumbling over problems with the ramdisk before me. So a win all around, happy to see it.
What's especially fun is using tmpfs as root on NixOS. That way your root partition gets cleared out on any reboot, so cleaning up your mess is never really more than a reboot away.
I've never thought about mounting a tmpfs folder for compiling stuff, but that's actually a pretty good idea.
> I've never thought about mounting a tmpfs folder for compiling stuff, but that's actually a pretty good idea.
Using tmpfs (or a RAM disk) for compiling and discovering that it's not actually faster is virtually a rite of passage for every developer. :)
If you do try it, at least benchmark it. You’ll probably discover that you spent more time setting it up and copying files around to volatile storage than you’d ever gain back via minuscule compile time speedups. Compiling is not IO bound and your files are already being cached in RAM by the OS after the first read.
> If you do try it, at least benchmark it. You’ll probably discover that you spent more time setting it up and copying files around to volatile storage than you’d ever gain back via minuscule compile time speedups. Compiling is not IO bound and your files are already being cached in RAM by the OS after the first read.
You shouldn't need to copy files to volatile storage, just storing the outputs there. Maybe only storing the intermediate outputs there. Some build processes just dump intermediate outputs all over the place, but a lot of them have a directory for those, and it shouldn't take much to make that directory be on tmpfs or similar.
It’s actually not really about speed in my particular case; I mostly just don’t want lots of tiny intermediate files being written and rewritten on my SSD all the time. I would sleep better if I knew that they wouldn’t have any negative effects on the lifetime of my drive.
I have a tmpfs folder mounted for my downloads folder for my browser for similar reasons; not for speed but to reduce drive wear and automatic cleanup so that it doesn’t pile up for forever.
It's faster. I don't move the source code around though, the intermediate build directory cmake writes binaries into is under a tmpfs mount and the install dir is back on disk. Currently has 10gb in it with 30m of uptime. The gain would be less if I was more willing to trust ccache, or if the cmake dependency graph of the project was trustworthy - my default build deletes all the files from the last one.
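The setup is roughly this (sizes and the project path are examples):

    # scratch tmpfs for build output; size it to what your builds need
    mkdir -p ~/build
    sudo mount -t tmpfs -o size=16G,uid=$(id -u),gid=$(id -g) tmpfs ~/build
    # out-of-tree build in RAM, install back onto disk
    cmake -S . -B ~/build/myproj -DCMAKE_INSTALL_PREFIX="$HOME/.local"
    cmake --build ~/build/myproj
    cmake --install ~/build/myproj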
> If you're a compiled language person and the build dir isn't in a tmpfs yet, give it a try. Faster than nvme
What are you compiling that’s actually faster in tmpfs?
A decade ago, people were surprised to discover that going from HDDs to SSDs didn’t impact compile times very much. The files are quickly cached in RAM anyway and the bulk of build time is CPU, not I/O.
Going from NVMe to tmpfs feels like an even smaller difference.
> But it also means less divergence between my system and the default
This is one reason I like Arch. It is almost as vanilla as can be (some exceptions may apply) while still being a "build it yourself Lego set" (build not in the compile sense).
Wouldn't that be the opposite? I like the base userspace to be very boring, where broken things have been stumbled over and fixed before I run into them. Everyone having their own bespoke set of pieces makes the divergence between what you're running and what other people are running greater.
Yeah, it's kind of hard to go back once you've embraced the "build your own distro" distros.
Arch, Gentoo, and NixOS Minimal are all pretty wonderful in that they give you the tools to build whatever system you want, without a bunch of extra crap that you'll never use.
I really dislike people using the "I have an inflexible scientific code that needs /var/tmp to be disk and persisted forever" argument to keep things the way they are. Debian's user base is far larger than a few inflexible codes, and it's straightforward to change the default.
Code and scripts should be well behaved and clean up after themselves. There’s zero reason to periodically go in there and wipe stuff out. Especially when some file systems don’t have a creation time attribute in the on disk inode and some archiving tools preserve the mtime when unpacking. A newly placed file can already be five years old.
Also the tone of some of the included quotes is so offputting. Their statements and arguments for this change don’t even acknowledge the fact that there are different views on this. This change comes across as progress for progress’ sake. Makes me want to make a 500TB /var/tmp and never delete anything.
> There’s zero reason to periodically go in there and wipe stuff out.
I swear I remember back when I first switched to linux (Ubuntu) back in the late 2000s, this was how it worked: Instead of a tmpfs or automated wiping on boot, a system crontab entry that would use "find" to delete files last accessed more than some time ago (a day? a week?).
Except for the risk of filling the disk if files are created too quickly, this would seem to be the best of all worlds to me - stuff can't stick around forever unless it's constantly being used. A download you access once would stick around for a day, while a heavy project that's using /tmp/ would stick around for the week or month you're using it, even across reboots, then get deleted later.
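Roughly this kind of thing, if memory serves (the schedule and age are guesses):

    # /etc/crontab sketch: daily, drop anything in /tmp not accessed for a week
    25 6 * * *  root  find /tmp -mindepth 1 -atime +7 -delete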
How do I disable the automatic file reaping? How do I configure the maximum age of temp files? Neither are mentioned. The bulk of the page is just a long list of passive-aggressive suggestions about how users are doing things wrong.
> Users who want /tmp to remain on disk can override the upstream default with `systemctl mask tmp.mount`. To stop periodic cleanups of /tmp and /var/tmp users can run `touch /etc/tmpfiles.d/tmp.conf`.
The discussions about swap in the LWN comments are interesting. I wasn't aware that the "swap should be 2x RAM" tip was actually a hard requirement at one point (allegedly around Linux 2.4). I also wasn't aware that the common recommendation is to run with swap again.
I have some systems (usually VMs) where RAM is much bigger than the root disk. For example, 500+ GiB of RAM and only 100 GiB of disk. I have other VMs where the disk is orders of magnitude slower than the RAM (e.g. EBS on AWS EC2).
I wonder what I should be doing for swap and vm.swappiness. Usually I just run with either no swap or a tiny 4 GiB swap file on /.
I think it makes sense to have swap be the smaller of 2x ram and 512 MB, unless you have specific needs.
On small memory systems, it really is useful to have 2x ram, because it can be pretty useful sometimes. On large memory systems, it's highly likely that your system is too far gone once it's used a significant amount of swap, so 512 MB seems like as good a place as any to stop. Swap used %, and swap in / out rates are a very good measure of system health, and if you have no swap, you lose out on those metrics. Ideally, for a small leak, you'll have enough time to get an alert about high swap use, and come in and inspect the situation before you start getting OOMs, but for a large leak, you want the system to fault quickly --- it doesn't do anybody any good to be up but deathly slow due to thrashing for a long time.
I remember the swap = 2x RAM guideline back in the day. When RAM got into the gigabytes it stopped making sense. If you're out of memory on a 64GB machine, a swap file is only going to (slowly) prolong the inevitable.
These days I set up servers without swap, and I have not run into any issues doing that.
A small swap space with zswap is where it's at. You get all the benefits of swap, plus reduced I/O and high-enough speed fetches that it just feels like extra memory. Well worth configuring, IMO.
I typically enable zram instead; on modern systems disk-based swap is going to be extraordinarily sluggish, and unlikely to be the make-or-break of recovering a system if your 64GB+ of RAM was already chewed through.
zram != zswap, which is unfortunately confusing. Unlike zram, which is a fixed-size in-memory block device that can be used for swap, zswap makes its own pool for compressed pages that you can resize at will, and sends compressed pages to your real on-disk swap. This significantly reduces I/O and gives you the best of both worlds between in-memory and on-disk swap. I'll be happy to share my config if you like.
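For anyone curious, the knobs are roughly these; values are examples, zstd needs kernel support, and zswap still wants a real swap device behind it:

    # runtime toggles (persist via kernel cmdline: zswap.enabled=1 zswap.compressor=zstd ...)
    echo 1    | sudo tee /sys/module/zswap/parameters/enabled
    echo zstd | sudo tee /sys/module/zswap/parameters/compressor
    echo 20   | sudo tee /sys/module/zswap/parameters/max_pool_percent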
I think they must be referring something in an unstable-series kernel. I don't remember anything like that, but I only ever used the stable kernels. (Back then, there were separate stable (2.2.x, 2.4.x) and unstable (2.3.x, 2.5.x) branches. "Linus forced a swap rewrite" sounds like an unstable branch thing.)
I always heard that the "2x RAM" thing came from some primordial BSD release.
On real machines I run with a super low swappiness of 1-5 or so, which effectively means it will always use RAM, basically until it's too late. That way it's difficult to accidentally get OOM killed, instead everything just slows way down when swap gets involved when RAM is full.
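i.e. something like:

    # persist a low swappiness (kernel default is 60)
    echo 'vm.swappiness=5' | sudo tee /etc/sysctl.d/99-swappiness.conf
    sudo sysctl --system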
> instead everything just slows way down when swap gets involved when RAM is full
I have literally never wanted this and so I'm curious why you do. This is why earlyoom is a thing precisely because grinding your system to a halt is often worse than killing and restarting some processes.
High swappiness so you run slower but never lock up or low swappiness with an aggressive oom killer so you don't lock up are to me the only sane options. Why do you want your servers to be up but thrashing?
Yeah the trouble is in my experience it slows down to the point that the system is unresponsive and you have to reboot anyway. I don't think Linux has a good answer to RAM management other than "buy lots and lots of RAM" (I already have 32GB and it's nowhere near enough).
I wish it worked like Windows - I've never had to reboot due to lack of RAM there, and even when the system is unresponsive ctrl-alt-del is very reliable.
I've had Windows hang due (apparently) to OOM before now, most recently last week: already low on free memory due to other things, a runaway chrome tab chewed through a huge pile of memory overnight. Come morning everything was at a crawl (task manager responded quickly, but everything else took an age to register any event like a click or keypress). Bringing up chrome's task manager, sorting by RAM use, and navigating to the top of the list to find the problem took several minutes. Switching to that tab hung everything, even task manager. This was running in a VM so I could see from the host that the OS had hung: practically no CPU or IO use, and what little there was was likely just "background noise" from hyper-v; not even the simplest service accepted TCP connections, nor would it respond to ping. Left it like that for tens of minutes and saw no change, so resorted to a hard stop & start.
EarlyOOM, swap, and zswap is the answer combo you're looking for. With those in place and properly tuned, you'd have to have something go extraordinarily sideways for an actual issue to appear.
I use a swap partition 1/2 the size of my RAM, a zram swap partition 1/4 the size of my RAM, and 160 swappiness. Well, I say 1/2 and 1/4 size; it's actually 8GB and 4GB against a RAM size of 15.3GB, but it's close enough.
Why not a single swap space with zswap? I find it much easier to configure, and the performance difference can largely be eliminated with a little tuning.
I had zram set up by itself in the past, but on my most recent install I got paranoid and I decided to set up a swap partition just in case (though I dunno what happens when zram runs out of space). I suppose I may as well try zswap.
On an unrelated note: if you're the DrMcCoy from Twenty Sided and GOL it's nice to see you.
In the 2000s a common recommendation was for it to be at least 1x your RAM (so 2x was a common/easy selection) so that laptops would have enough space to hibernate (rather than just suspend).
While I generally appreciate Debian's careful approach (it is my go-to distro if I want things to be predictable with minimal upgrade frequency), it always blows my mind to watch open-source communities bury topics like this in committee for twelve years while commercial OS's just tend to go "It's better for the end user. Here's some money. Make it work. If it breaks some key software, make it work."
> Red Hat Enterprise Linux (RHEL) and its clones, as well as SUSE Linux Enterprise Server (SLES) and openSUSE Leap, still default to /tmp on disk.
So at least some commercial OSes, if you will agree that we could refer to the first two of these distros as such, are also staying with old decisions for a long time.
And even though RHEL in particular is known for its insane dedication to keeping API compatibility via backports of software, they are still in a position to change things around between major versions. And I believe they do do that quite a bit, which is why they also allow such a long time before EOLing old versions, so that enterprises have plenty of time to adapt to the changes when they do a major version upgrade.
Flip the question: most storage these days is some kind of solid-state. Why is it better to commit /tmp to a solid-state device (decreasing its operational life) when those files are, by definition, transient and must not be required for future correct program operation?
It's something the average non-developer does less than once per day, at most.
(And the average developer has enough savvy to change settings).
... also, most of the compilation tools I use end up dropping their intermediary files in a peer directory to the source tree or a build directory at the root of the project, not in /tmp. I'm not sure what tools people are using that are leaving compilation intermediates in /tmp.
Writing data to a disk-based file means it will inevitably be written to disk shortly, unless you delete it so quickly the system doesn't get a chance to write it before it's gone. Writing data to a tmpfs file means it will never be written to disk unless you write more data than you have RAM. So if your files in /tmp fit into RAM, tmpfs will be faster. That is the only benefit.
Unfortunately tmpfs is not so good when the data does not fit into RAM. The first thing that happens is it writes the overflow to swap, which is much slower than writing to disk in the normal fashion so you lose the speed advantage. If you store so much you run out of swap the second thing that happens is your system dies.
This proposal comes with a hairier problem. The issue is that you can't trust all apps to clean up after themselves, meaning they will exit leaving crud in /tmp. That means that if the system isn't rebooted, /tmp tends to grow slowly as this crud accumulates, which means even in the best case the system will become unstable after enough time. Their solution to that is to just delete old files on the assumption they are crud. But that's a kludge, as there is no way to be certain whether a file is crud or not, and deleting files that will be re-used makes the system unstable.
Which in the end means it depends on use cases. A typical desktop user with lots of RAM (at least 8GB, preferably 16GB) will probably see a benefit, as their files in /tmp will fit into RAM. Typical means they don't do something that eats RAM and disk like editing video files, and they reboot their system occasionally. Other users will lose out, because tmpfs overflows to disk via swap, so their system will at best be slower, and possibly unstable if they didn't know to allocate lots of swap when they installed the system.
Notice that servers don't neatly fit the typical-use category, and also notice that variants of OSes targeting servers don't use tmpfs for /tmp. Debian doesn't have a variant that targets servers, but whether /tmp uses tmpfs is easily configurable for sysadmins; editing text files is their day job, after all. In fact, that ease of changing the default was the main argument for the change on the Debian lists. The end result is that the change won't overly affect sysadmins of Debian servers either - it's just one more thing they have to change in what is already a long list.
TL;DR: server admins won't care about the change, and typical users with big laptops doing normal stuff will be happy, but normal users with cheap laptops, or those doing unusual things, will have their world turn to shit if they aren't comfortable with the command line.
That keeps the file in the page cache until it’s evicted, but instead of being written to swap it’s written to the fs.
With the swap partition usually being quite small and the fs today being all the storage there is on a personal machine.
Well, as I read that page it doesn't actually exist (yet), so that rather makes it a hypothetical at this point. Might be interesting in the future, but not today.
You're still going to often wait for the file to be written to the disk since many apps care about running sync() in the right places. With tmpfs you don't have to wait.
This is a fair point, but also an indication (again) that the sync primitives in posix are poorly designed. It should be possible to indicate that a file is a temporary and/or can be recreated from scratch. (O_TMPFILE isn't that)
Sync is not only about persistence, but also actual synchronisation. If you're writing to two files and want to ensure ordering, you have to (f)sync - doesn't matter disk or ram.
I'm pretty sure that's not the case unless you're synchronizing across kernels (eg. NFS close-to-open sync). Although it's true that sync primitives for network filesystems are also a mess.
Block devices and their drivers are free to reorder writes in absence of explicit synchronisation. The filesystem level is supposed to take care of keeping things from breaking. I would suspect that tmpfs itself prevents any reordering... but I wouldn't bet on it. Non-memory devices will happily reorder.
Sure, but this is only important if you're working across kernels (or need correct persistence across a crash+reboot, which is "across kernels in time"). Within a single kernel instance, two processes do not need to think about such synchronisation, or else just about all software written would be broken.
Oh really? I thought my Arch only cleared /tmp upon startup. No wonder I lost files in /tmp.
The upstream defaults for systemd are to mount /tmp as a tmpfs and delete files that have not been read or changed after ten days in /tmp and 30 days for those stored in /var/tmp.
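Concretely, those ageing defaults live in systemd's tmpfiles.d configuration; an admin overrides them by dropping a file with the same name into /etc/tmpfiles.d. A sketch mirroring the upstream values (your distro's stock file may differ slightly):

    # /etc/tmpfiles.d/tmp.conf -- overrides /usr/lib/tmpfiles.d/tmp.conf
    # type  path       mode  user  group  age
    q       /tmp       1777  root  root   10d
    q       /var/tmp   1777  root  root   30d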
Not true if you enable zswap. The pages stay compressed in a RAM pool, and are only sent to the real swap on disk in their compressed states when that gets full. This significantly reduces I/O overall.
You can specify the size of the tmpfs and keep it well below the size of RAM, which will greatly reduce the likelihood of a long-lived tmpfs from causing swapping.
But more importantly, don't write large files to /tmp/. Back in the day we gave /tmp/ its own partition just so those files didn't fill up the root mount. Write big files there and it would fill up. Plus, big files benefit more from a faster drive, and back in the day we used to put the root mount on a slow disk, and have a fast big data disk which is better for large files.
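On the sizing point above: capping the tmpfs is a one-line mount option, e.g. in /etc/fstab (the 2G figure is purely an example):

    # /etc/fstab -- cap /tmp at 2G instead of the default 50% of RAM
    tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0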
I think an option to prevent tmpfs from getting shifted to swap (or disk) at all would be great.
In the article there's some mention of the implications of RAM and swap - this has caught me out before. If you wget an ISO to /tmp, you may be consuming more RAM than expected. And sure, that'll get swapped out to disk, but that might be unexpected if you're intending to use that RAM for virtual machines etc.; you may end up OOM because of this behaviour.
I did this on Fedora, and it left a bad taste in my mouth.
Is there an easy/non-hacky way to configure /tmp to only delete files on a regular shutdown? I sometimes have the occasional crash, and have been bitten by doing some temporary work in /tmp before such a crash 2-3 times.
You could do this pretty easily, I think, with systemd; it has a shutdown target that covers reboots and poweroffs. It would of course mean not having /tmp in RAM, which is pretty common these days.
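A rough sketch of that idea, assuming /tmp stays on disk: a oneshot unit whose ExecStop= only runs during an orderly shutdown, so a crash or power loss leaves the files alone. The unit name and the find invocation are placeholders, and ordering against services still using /tmp is left as an exercise:

    # /etc/systemd/system/clean-tmp-on-shutdown.service
    [Unit]
    Description=Clean /tmp only on orderly shutdown

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    # runs when the unit is stopped, i.e. during a normal shutdown or reboot
    ExecStop=/usr/bin/find /tmp -mindepth 1 -delete

    [Install]
    WantedBy=multi-user.target

Enable it with "systemctl enable clean-tmp-on-shutdown.service"; it then starts (doing nothing) at boot and cleans up on the way down.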
> Sam Hartman noted that ssh-agent created its socket under /tmp, but it would be better if it respected the $XDG_RUNTIME_DIR setting and created its socket under /run/user. Boccassi agreed and said that he had filed a bug for ssh-agent. Richard Lewis pointed out that tmux stores its sockets in /tmp/tmux-$UID, and deleting those files might mean users could not reattach to a tmux session that had been idle a long time. Boccassi suggested that using flock() would be the right solution to stop the deletion, and said he had filed a bug on that as well.
Ah, yet another group of Linux users looking to get ssh to adopt the XDG specification. Moving ~/.ssh to ~/.config/ssh has been repeatedly requested. XDG_RUNTIME_DIR is going to be even harder because it can be nonexistent on BSDs.
I only learned that disk-backed /tmp/ wasn't the default a couple of months ago, after mainly using Ubuntu for years and wondering why I was quickly running out of RAM on a Fedora machine.
I prefer /tmp on disk. Nvme disks are fast enough, wear is not an issue nowadays. I occasionally extract large files there. So either the tmpfs would be too small or the machine would eventually start swapping random things to disk, for which I'd have to grow my swap partition first - it's only 1GB.
I mean I could start using /var/tmp or make a directory in $HOME and clean that up occasionally, but that would mean I have to change old habits. ;) so I guess masking tmp.mount it is.
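For anyone following along, that really is just the following; it takes effect at the next boot, and the current tmpfs stays mounted until then:

    sudo systemctl mask tmp.mount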
Hopefully Ubuntu will align with this decision, so that MATLAB is finally forced to fix their shitty software. (I am aware that you can set the TMPDIR as an env var)
I have always assumed that /tmp means "temporary". I mean, it's right there in the name. So, a move to tmpfs means "automatic cleanup" to me, which seems just fine, and worth doing for that.
If we're doing it because the systemd maintainers say "we have to do it because systemd", though, well...
How long till the systemd guys make package dependencies and installation as a service? You know, sponsored by the IBM/Redhat team, and signed by Microsoft?
systemd is a project which provides building blocks for an operating system. PID 1 is just one building block. Lennart has said this since at least 2013. How the systemd naysayers haven't understood this in at least 11 years, I'll never understand.
When I started in tech I worked for a storage vendor’s tech support. One of my first big cases was troubleshooting why a customer’s files older than a week were disappearing. Given the topic, I’m sure you can guess—they had it mounted under /tmp and the cleanup script was doing its thing.
Hopefully the new Debian version of this script is a bit smarter!
A competent sysadmin not only tracks their garbage, but others' garbage as well.
On the other hand, a sysadmin strives to be lazy, not by postponing problems, but solving them the appropriate way, and preventing it from happening again.
I'm annoyed that it is getting harder and harder to find what physical filesystems are on a system.
My go-to commands "df -h" and "mount" are cluttered with all kinds of ephemeral filesystems, with no easy way to filter (say, a one-character flag). lsblk is close, but then there are LVM volumes and snaps.
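FWIW, a couple of things get most of the way there, though I can't promise they handle LVM and snaps cleanly on every setup:

    df -h -x tmpfs -x devtmpfs -x squashfs -x overlay   # exclude noisy filesystem types
    findmnt --real                                      # util-linux: list only "real" filesystems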
On my main-driver desktop box that I hamfistedly switched over to Void Linux (on a ZFS root, even) a couple of months ago:
The output of df -h looks pretty sane and understandable.
I see some entries for /dev and /run and /, some stuff for EFI, a bunch of mounted ZFS datasets, and it's all very readable with no line noise in any column.
---
Now, that said: I have some docker containers. My own previous goto command of mount reports a complete clusterfuck, and all of the clusterfucked lines are due to docker doing docker stuff.
I don't know how things like 4UYSHHPFBZQNFZ9Z698JDLUZWJ, LK33RZTN5ZNFGPJNDUHM0HTSD1, 2G0A4YYUVCMP9HZSM5UXOJW0BX, ZH29C5YNZHJ22QMY63EWV5RF3P, and 32XOCXBUY41QQJTPPA8HE6EVZ6 add any coherency or clarity[1], but I assume that since the computer quite easily knows what this shit means then it needn't be bothered with dumbing it down so that a human can parse it in any sort of memorable or repeatable or communicable fashion.
(Which is, of course, bad behavior on the part of the computer. A UUID is also awful, but at least it has some punctuation so that it can have some cadence if it must be dealt with as-is.
It's like we learned nothing from DNS[2], or as if translation layers can't exist even within a filesystem[3].)
Systemd has managed mount points for a long time on systems where it runs. It provides mount units as an alternative to fstab mount directives, which offer some advantages (including parallelized mounting at boot, granular mount dependencies for services, and automatic runtime mounting and unmounting) and disadvantages (harder to view at a glance than fstab, more complex).
What’s more, on systemd systems, fstab is dynamically converted into native systemd mount units at boot, so in many cases fstab is purely a facade/compatibility shim over systemd doing all the mounting anyway.
While not terribly prescriptive, “man systemd.mount” recommends /etc/fstab be used to manage “mounts for humans” due to its simplicity and accessibility. So it doesn’t seem like fstab is considered legacy or deprecated by systemd; rather, this seems like more of a porcelain/plumbing distinction.
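For a concrete sense of the two formats: the fstab line and the native unit below describe roughly the same /tmp mount (trimmed and illustrative, not the exact unit systemd ships):

    # /etc/fstab style
    tmpfs  /tmp  tmpfs  mode=1777,strictatime,nosuid,nodev  0  0

    # native mount unit style (tmp.mount)
    [Unit]
    Description=Temporary Directory /tmp

    [Mount]
    What=tmpfs
    Where=/tmp
    Type=tmpfs
    Options=mode=1777,strictatime,nosuid,nodev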
I find that, unless there's an extremely necessary reason, it's almost always bad to break backwards compatibility. If there's any concept whatsoever of continuity between major releases, backwards compatibility should be the default, with changes being options.
Naturally this leads to more difficulty in testing and maintenance over time. But that extra work pays for the benefit of having a very long lived product and compatibility with past integrations.
At the end of the day you have to decide if you're building a product to be easier for the users, or the maintainers. Personally I'm on the side of the users.
In addition to `/tmp`, you can put your `~/.cache` on `tmpfs`.
Laptops have ridiculously large amounts of RAM nowadays. My Linux laptop setup barely makes a dent in the RAM, even when running 3 different Web browsers and other gluttonous desktop programs. `tmpfs` is a great use for excess RAM, reducing wear on SSD. (I also disable swap.)
I've also done things like build an entire large ecosystem of packages in `tmpfs`, when the build server happened to have mirrored 10k rpm drives and I didn't want the tons of intermediate files to eventually be synced to disk. (Even though, with disk, they would also probably hang around in the Linux filesystem buffers, rather than hitting the disk on each read.)
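If anyone wants to try the ~/.cache part, it's a one-liner in fstab; the username, uid/gid, and size below are placeholders for your own setup:

    # /etc/fstab -- "alice"/1000 and 4G are placeholders
    tmpfs  /home/alice/.cache  tmpfs  size=4G,mode=0700,uid=1000,gid=1000,nosuid,nodev  0  0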
If my ~/.cache was on tmpfs, I would need to download tens of gigabytes of stuff every day. I wouldn't be able to work. There are lots of huge files stored in there, mostly required by Python packages for Stable Diffusion and other ML stuff that insists on downloading huge model files and putting them into ~/.cache.
Agreed, if you're storing 25GB+ in `~/.cache`, then probably you won't put it on `tmpfs` on a laptop.
I didn't think of the massive ML models, because I store those in the home directory, where I can see them.
(Incidentally, since you mention Stable Diffusion: for an earlier version of SD, I went to some care to make sure I didn't accidentally lose the checkpoint file, because I didn't know whether it would be pulled from distribution. Then there were regressions in the SD training data.)
>I store those in the home directory, where I can see them.
That works for most things, and that's how it should be. But sometimes the dev just doesn't provide links and you either have to read their code to figure out the model URLs, or simply allow the automatic download that ends up in .cache.
8GiB is still very common, and it's not hard to find cheaper laptops with 4GiB... I don't think your average person's laptop has as ridiculously large amounts of RAM as you seem to think
I run Linux and have 8G and compile things. Works fine. Currently ~/.cache is 2G, but that's because I cleared out ~/.cache/go just yesterday after compiling ~1000 Go modules for some testing, which bloated it to >50G.
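(Aside, in case it saves anyone an rm -rf: the Go toolchain has commands for this, though the exact paths depend on your GOPATH/GOMODCACHE settings:)

    go clean -cache      # build cache (~/.cache/go-build by default)
    go clean -modcache   # downloaded module cache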
Fair. I read the comment as, more or less, "the typical laptop has so much RAM these days that putting ~/.cache on tmpfs would be a reasonable default" or something like that. But I guess it could -- and should -- be read as, "if you spend a bunch extra on RAM, you can get laptops with so much RAM these days that you can consider putting ~/.cache on tmpfs", which is reasonable enough.
This would break a bunch of little things in annoying ways. Like I have shell script tools that store shell scrollback logs in .cache, and I want that past reboot.
FWIW, I've been doing it for many years, and not noticed any problems.
If something is supposed to persist past reboot, I wouldn't put it in `.cache`. Though I don't know offhand what the official documented behavior of `.cache` is, and I can't immediately find that documentation (maybe some open desktop cabal thing?).
> $XDG_CACHE_HOME defines the base directory relative to which user-specific non-essential data files should be stored. If $XDG_CACHE_HOME is either not set or empty, a default equal to $HOME/.cache should be used.
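Which, in practice, just means applications and scripts are expected to do the usual fallback dance, something like:

    cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}"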
It's fine to put ~/.cache on tmpfs, but doing it by default for the general case is going to cause a lot of hurt.
My ~/.cache could be rm -rf'd without too much worry right now, but that doesn't mean that persisting it isn't useful. For example on my system right now:
- Browser cache is useful to persist, especially with some larger sites.
- I put my Go module and build cache in ~/.cache, and while that can be deleted, it's useful to persist because it makes builds shorter and avoids having to (re-)download the same modules over and over again. Note that at the moment my internet is kind of crappy, so this can take quite a while.
- Some other download cache things in there, from luarocks, xlocate, few other things.
- I store psql history per-database in ~/.cache/psql-dbname. It's useful to keep this around.
- Vim backup files, persistent undo files, and swap files are stored in ~/.vim. The swap files especially are important because I want to keep them after an unexpected system crash.
Some of this is solvable by moving stuff to other directories. Others are inherently unsolvable.
I also have just 8G of RAM, which is fine but not fine for storing ~/.cache in RAM.
That's silly. Cache is there for preserving things across many runs of some application. Applications certainly use it in a way that assumes long term storage. It's not for short term temporary things. It's a cache.
It's not going to cause "problems". It's just going to massively slow down many use cases that rely on downloads.
I do have a notebook with plenty of RAM, so I haven't bothered with swap, but I'm new to Linux. Without swap configured, what happens when tmpfs uses all the memory that's been allocated to it? Thanks.
Very OT, but I don't know where else to turn to because the Reddit community has been of no help.
I use i3 and I absolutely love the way it feels. However, I would like to configure it so that I can press a button and every app turns into dark mode. My main apps (Firefox, vscode) have an option to say "use system theme", i.e. if the OS is dark, theme this app dark. Great! The problem is I don't know how to set the system theme. And every time I've asked around, I see people respond with GTK, Qt and what not. I don't have those, I have i3.
If it is a GTK app it will use the GTK mechanism; if it is a Qt app it will use the Qt mechanism; otherwise you're in luck, you get to use the native X11 mechanism.
gtk: hell if I know, the web sez there are css files, and there are probably some gnome tools to set the theme.
qt: no clue. and the web was unhelpful
native X11: now we're talking, I know this one. X11 provides a database that applications can use for configuration and design. The main interface is the xrdb command, and all good applications should include the pertinent points of their call tree to get you started in theming them. http://man.openbsd.org/xrdb
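By way of illustration, the X resources route looks something like this; the resource names are per-application, and XTerm is just an example:

    # ~/.Xresources
    XTerm*background: #1e1e1e
    XTerm*foreground: #d4d4d4

    # merge into the running X server's resource database
    xrdb -merge ~/.Xresources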
I ran "gsettings set org.gnome.desktop.interface gtk-theme 'Adwaita-dark'" and Firefox isn't in dark mode, despite the fact that I have Firefox configured to use the system theme and Firefox is on GTK according to Google.
Oh, Gnome 3.38. I'm not sure that version even supported dark mode, only dark themes. I won't comment on aging of Debian ;).
Btw, what version of Firefox are you using? Debian comes with ESR, so chances are that Firefox's "ui.systemUsesDarkTheme = 1" still works (it doesn't in newer releases).
This will get you 99% of the way there, but there are exceptions. E.g. Firefox is linked against GTK, but paints its own widgets with custom theming anyway. Telegram is linked against Qt, but has its own theming/dark mode too.
The XDG base directory spec (https://specifications.freedesktop.org/basedir-spec/basedir-...) was supposed to solve all of this...