So, that is actually an interesting thought experiment, thanks for that.
However, I'm not sure it is directly applicable here. There are two courses of action that could solve this problem:
A, an action that improves kernel builds
B, an action that improves several workloads
For A and B of similar cost, it makes sense to do action B in preference to action A.
Your argument speaks to A being of positive utility, but given a finite knapsack of effort too small to hold every positive-utility action, a greedy algorithm that places any positive item into the knapsack is not optimal.
I don't follow your reasoning here, but let me expand my observation further. Linus can either: A, improve his build system (low-hanging fruit which the Chrome numbers suggest could yield an order-of-magnitude improvement); or B, search among a variety of difficult, unlikely-to-yield-major improvements (like yelling at Intel engineers 'make it go faster!') which would improve his build system and also other hypothetical loads (unlikely to be large gains, if any at all; what, is Intel too ignorant to try to make the TLB fast?). He is claiming B is better in part because the hypothetical loads make it better in total.
Most people would consider A a more reasonable reaction, especially after hearing that Linus's best idea for doing B is apparently going all the way down to the hardware level in search of some improvement. We can see this by intuitively asking what people's reactions would be to a proposal to induce B if the equilibrium were already at A.
On the other hand, Linus is one of a handful of people in the world who may be in a position to get results by yelling at Intel engineers to 'make it go faster!'. This isn't because Intel is too ignorant to do things on their own, but because practically everything can be optimized further, and Linus may have enough sway to focus the engineers' attention on the problem that he wants solved.
Personally, I don't care much about the speed of the Linux kernel build system, but I do care about the speed with which page faults are handled by the CPU. Even if the chances of success are lower, if he is able to succeed in speeding up every page fault on future Intel processors, I would consider that a much greater good.
The real problem (as I see it) is that I think he's trying to optimize the wrong thing. His worst-case test is based on repeatedly faulting in an uncacheable page: every TLB lookup fails at every level of the translation caches. Likely, Intel has chosen to optimize the realistic situation, where page translations are cached when they are repeatedly accessed.
Intuition is probably not a good guide here. An improvement to the kernel build process is relevant to the thousands of machines that are used for kernel development; an improvement to the page fault speed is relevant to the over 1 billion devices that run the kernel. That's a pretty big multiplier.
(I believe the GP's reasoning is that "improve the build system" and "retard the build system" are not symmetrical, because both directions require positive effort to be expended.)
On the other hand, if build-system improvements truly are low-hanging fruit, then there are a lot of people out there who are capable of helping. A much smaller number of people are capable of productively working on page-fault optimizations, so having someone like Linus address low-hanging build-system fruit instead would be a waste of talent.
We have a different prior expectation of the utility and feasibility of action B. In my view, Linus is uniquely suited to do something useful here, and his chance of success is high. In your view, Intel is already doing the best it can, and his likelihood of success is low. Our positions follow.
A isn't really very low-hanging, though. Rewriting the build system for something as complex as the kernel would be quite the undertaking even just from a technical standpoint. On top of that, you now have to change thousands of people's workflows by having them install and use a new build system (some of those people are probably dedicated to just maintaining the current build system, so their jobs would radically change), as well as figure out a suitable build system in the first place (which one would work well for the kernel? which ones work on all the architectures people want to compile Linux on? etc.).
B, from Linus's perspective, just means "wait a year for things to automatically get better (after throwing some money at it)", which seems like the low-effort solution.