I've been bugging them about this for a while. Some repos contain multiple sets of model weights in a single repo, so simply adding up the file sizes won't work universally, but I'd still find a "repo size" indicator useful somewhere.
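In the meantime, you can roughly compute this yourself client-side. Here's a minimal sketch that sums file sizes and also breaks them down by top-level directory (useful for multi-model repos); the helper names are mine, and it assumes you fetch the file list via `huggingface_hub`'s `HfApi.model_info(..., files_metadata=True)`:

```python
from collections import defaultdict

def repo_size_bytes(files):
    """Total size of a repo, given (path, size_in_bytes) pairs.
    Size may be None when metadata is missing; treat that as 0."""
    return sum(size or 0 for _, size in files)

def sizes_by_top_dir(files):
    """Group sizes by top-level directory, so repos holding several
    models (e.g. one folder per quantization) can be broken down."""
    totals = defaultdict(int)
    for path, size in files:
        top = path.split("/", 1)[0] if "/" in path else "."
        totals[top] += size or 0
    return dict(totals)

if __name__ == "__main__":
    # Example with a hypothetical file list; in practice you'd build it
    # from HfApi().model_info(repo_id, files_metadata=True).siblings,
    # using each sibling's rfilename and size attributes.
    files = [
        ("q4/model.gguf", 4_000_000_000),
        ("q8/model.gguf", 8_000_000_000),
        ("README.md", 1_200),
    ]
    print(repo_size_bytes(files))
    print(sizes_by_top_dir(files))
```

Files at the repo root land in the `"."` bucket, so the per-directory totals always sum to the repo total.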
HF already does this for GGUFs: it'll show you which quantizations will fit on the GPU(s) you've selected. Hopefully that feature gets expanded to support more model types.