I would suggest that the need for Englishy variable names points to a weakness in programming languages, and possibly in the programming model itself. Why should a computation's set of legitimate values benefit from how you refer to that set? Can that variable take on undesired values? Do you rely on the name and its comprehensibility to distinguish good values from bad ones? I sometimes find it hard to believe we still program this way.
We don't have to program this way: you can write code with very strict types, with machine-checked proofs that it works correctly, and so on. We don't do this very often because that level of rigor turns out to be incredibly time-intensive.
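To make the milder end of that spectrum concrete, here is a minimal sketch (in Rust, purely illustrative; `Percentage` and `apply_discount` are hypothetical names, not from either comment) of encoding the set of legitimate values in a type with a smart constructor, so the compiler, rather than a well-chosen variable name, keeps undesired values out:

```rust
/// A percentage guaranteed, by construction, to lie in 0..=100.
#[derive(Debug, Clone, Copy)]
struct Percentage(u8);

impl Percentage {
    /// The only way to obtain a Percentage; out-of-range values are rejected.
    fn new(value: u8) -> Option<Percentage> {
        if value <= 100 {
            Some(Percentage(value))
        } else {
            None
        }
    }

    fn value(self) -> u8 {
        self.0
    }
}

fn apply_discount(price_cents: u64, discount: Percentage) -> u64 {
    // No re-validation needed here: the type already guarantees 0..=100.
    price_cents - price_cents * u64::from(discount.value()) / 100
}

fn main() {
    let discount = Percentage::new(15).expect("15 is a valid percentage");
    println!("{}", apply_discount(10_000, discount)); // prints 8500

    // Construction of an "undesired value" fails up front, instead of relying
    // on a reader to notice that a variable named `discount` shouldn't be 250.
    assert!(Percentage::new(250).is_none());
}
```

This only rules out one class of bad values at runtime via the constructor; the full "machine-checked proofs" end of the spectrum (dependent types, proof assistants) is where the time cost mentioned above really shows up.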