Excellent software does not mean bug-free, of course. Apparently the driver's code base alone is comparable in size to that of a typical OS kernel.
Their hardware is pretty much best in class in both absolute performance and performance per watt, so I would say that it is a pretty objective evaluation.
> Their hardware is pretty much best in class in both absolute performance and performance per watt, so I would say that it is a pretty objective evaluation.
There are many more criteria:
- Openness: Open specification, existence of open source drivers, necessity for a firmware blob vs. open firmware, support of open standards
- How long the producer keeps supporting the hardware with new drivers for new platforms (NVidia loves to cease support for older GPU chips).
- Cost and cost-benefit ratio (the whole Intel vs. AMD vs. ARM flamewar ;-) )
- Existence of intentionally blocked features: Some hardware vendors intentionally block features, sometimes even with the possibility to unlock them afterwards (e.g. by firmware or fuses) if you pay them additional money
- Willingness to go into legal limbo in the interest of the users: For example, when copy protection schemes for CDs were common in the early 2000s, producers of CD/DVD writers tried to outdo each other in their devices' ability to still read out such copy-protected CDs. A modern example might be how other copy protection schemes for, say, audio or video are implemented: in software, written so badly that it will (hopefully) soon be cracked, or locked down deep in a security chip that is part of the hardware.
> (NVidia loves to cease support for older GPU chips).
As someone who deals with old GPU support daily, I challenge this! The most recent drivers still support Fermi chips (GT400 series) that were released over 7 years ago.
I'm not sure how long you expect a chip to be supported?
I have a pile of nvidia 5x0 cards that are now unusable under linux.
The rest of the hardware I bought that year still works fine (even stuff that's more obsolete than the nvidia cards).
AMD has usable open source drivers (= they work with modern Steam games and suspend/resume), so I'm betting on them for my next video card. I guess we'll see how that worked out by 2022 or so.
Are you talking about the geforce 500 series[0]? They are still supported. Assuming the HW is still good, all drivers should still work. You can try with the NVIDIA driver included with the distribution (Ubuntu, Mint, etc), or download the one from nvidia.com
If it doesn't work, you can file a bug (see/use nvidia-bug-report.sh).
They "work", just not with modern software. I found threads where the nvidia devs and the open source devs discussed the issue. Apparently, the nvidia drivers violate the opengl spec and drop textures at unpredictable times (such as during suspend and mode switches).
The "fix" is to constantly re-send the same texture to the card, or use some non-standard OpenGL extension to see when the driver decided to drop the textures, then recopy them from DRAM to the card (why keep one copy in video RAM when you can keep a second in DRAM for twice the cost?).
From what I can tell, people are sick of implementing this workaround, so newer software doesn't bother. This has been the status quo for over a year.
In practice, this means severe screen corruption when switching users or suspending/resuming.
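For what it's worth, the workaround described above can be sketched as a toy model. This is not real OpenGL code; the classes and method names here are invented for illustration, with `reset_status()` standing in for whatever query the non-standard extension exposes. It just shows the shape of the fix: keep a second copy of every texture in host DRAM, poll for a purge, and re-upload when one happened.

```python
class VideoMemory:
    """Toy stand-in for GPU texture storage that the driver may
    purge at any time (e.g. on suspend/resume or a mode switch)."""
    def __init__(self):
        self.textures = {}
        self.purged = False

    def upload(self, tex_id, pixels):
        self.textures[tex_id] = pixels

    def purge(self):
        # Simulates the driver silently dropping all textures.
        self.textures.clear()
        self.purged = True

    def reset_status(self):
        # Stand-in for the vendor extension's status query.
        status = "PURGED" if self.purged else "NO_ERROR"
        self.purged = False
        return status


class App:
    """Keeps a host-side (DRAM) copy of every texture so it can be
    re-uploaded after a purge -- the 'twice the cost' workaround."""
    def __init__(self, gpu):
        self.gpu = gpu
        self.host_copies = {}

    def load_texture(self, tex_id, pixels):
        self.host_copies[tex_id] = pixels   # extra DRAM copy
        self.gpu.upload(tex_id, pixels)

    def begin_frame(self):
        # Each frame: if the driver purged video memory since last
        # time, recopy everything from DRAM back to the card.
        if self.gpu.reset_status() == "PURGED":
            for tex_id, pixels in self.host_copies.items():
                self.gpu.upload(tex_id, pixels)


gpu = VideoMemory()
app = App(gpu)
app.load_texture(1, b"\x00\xff")
gpu.purge()        # e.g. suspend/resume drops the texture
app.begin_frame()  # purge detected, texture restored from DRAM
assert gpu.textures[1] == b"\x00\xff"
```

Note the cost structure the thread complains about: every texture exists twice (once in `host_copies`, once on the "card"), and every application has to carry this bookkeeping itself.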
I've never seen that particular bug before, but the whole thing sounds totally plausible. Since it only manifests on Fermi, it's likely an arch issue. I'm guessing the workaround in the driver was deemed too messy/expensive compared to the one in the application.
I'll take a cursory glance at the issue, but if you've seen NV devs discuss it externally, it's probably well known internally, like closed as WNF.
I totally see where all sides are coming from, though:
* NV: Why spend resources fixing old architectures when it's easy to work around at the application level?
* Devs: Why spend resources supporting some obscure old HW that doesn't work right.
* Users: Why spend money replacing perfectly good HW just because NV/devs are lazy?
The more time passes, the less likely the first two are to do anything about it; and in time the number of users with those cards drops below epsilon. The easiest option is to just get HW that works with the SW you need - like you did.
I hope you are right, but there is not enough transparency to tell. From the discussion it sounded like modern hardware will behave the same, and maybe they'll try to amend the standard.
I didn't feel like betting hundreds of dollars to find out if it hits on new hardware, but you can readily reproduce it on ElementaryOS with the 500s. You have to manually install the binary drivers with the ubuntu/debian proprietary driver tool, since ElementaryOS recommends the open source ones.
This isn't the only distro that hit it, just the one I landed on in the end.
Anyway, for me the easiest option was to switch to open source drivers (where this kind of thing has a better track record of being fixed), and Nvidia is a non-starter there.
> As someone who deals with old GPU support daily, I challenge this! The most recent drivers still support Fermi chips (GT400 series) that were released over 7 years ago.
My opinion on how long a device should be supported: As long as there is no open specification available, one is dependent on the vendor to deliver updates. And graphics drivers are very prone to security bugs.
So as long as a device still has a non-trivial user base, the vendor has to provide security fixes for it. I would even go so far as to say we should raise our standards and demand that a producer has to support their hardware up to the moment they release open specifications for it.
So you're talking about Tesla and earlier chips. Those are indeed only supported for the security fixes. The 34x.xx driver supports those chips, and they are indeed getting security updates.
I have personally fixed a couple of these issues[0], including for those "EOL'd" cards. The most recent posted drivers for these chips I'm seeing are 342.01 from 2016-12-14 for Windows and 340.102 from 2017-02-14 for Linux. That seems like it would check the "provide security fixes" box, no?
>has to support their hardware up to the moment they release open specifications for it.
I'd tend to agree, but as you can imagine this is quite a complex issue for us, so no comments here :)
Nvidia are the people that require you to log in to get driver updates. Their "geforce experience" is mostly horrible, apart from Ansel and maybe the streaming thing.
I'm not a Windows user, so I wouldn't know (the nvidia drivers for my OS are an apt install away), but AFAIR the 'geforce experience' is not required to get the latest version of the drivers, only for non-essential tools.
Regarding your thoughts about the hardware being expensive, I found the following article [1] interesting. I'm mostly referring to the vendor lock-in part with CUDA.
Nvidia's software is bloated, requires a login for features, collects telemetry without notifying the user, and has had security issues in its drivers. I wouldn't really use them as a prime example.
SolarFlare network cards are also pretty good and so are their drivers and user space network stack.
In both cases the quality of the software side is a significant driver of their hardware sales.
In both cases the hardware is very expensive though.