Competent companies tend to put a lot of effort into building data analysis tools. There will often be A/B or QRT frameworks in place that allow two models to be deployed side by side, for example, a new deep learning model against the old rule-based system. Model performance is already tracked through many offline and online metrics, so by combining those metrics with the experiment results one can begin to make statements about the impact of model performance on revenue. People can and do say things like "if this model is x% more accurate, that translates to $y million in monthly revenue" with great confidence.
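To make the kind of claim quoted above concrete, here is a minimal sketch of the arithmetic behind such an A/B readout. Every name and number in it is hypothetical, not taken from any real framework:

```python
# Hypothetical sketch of the A/B arithmetic described above.
# None of these numbers come from a real experiment; they only
# illustrate how a per-user lift becomes a monthly revenue claim.
control_revenue_per_user = 1.00    # arm running the old rule-based system
treatment_revenue_per_user = 1.02  # arm running the new deep learning model
monthly_users = 50_000_000         # traffic the launch would apply to

lift = treatment_revenue_per_user - control_revenue_per_user
relative_lift = lift / control_revenue_per_user
monthly_impact = lift * monthly_users  # extrapolate the per-user delta

print(f"Observed lift: {relative_lift:.1%} per user")
print(f"Estimated monthly impact: ${monthly_impact:,.0f}")
```

In practice the per-user delta comes with confidence intervals from the experiment framework rather than point estimates, but the extrapolation from observed lift to monthly revenue works the same way.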
Let's call someone working at such a company Bob.
A restatement of your claim is that Bob launched a model to production because of hype, rather than because he could justify his promotion by pointing to the millions of dollars in increased revenue his switch produced. Bob of course did not make his decision based on hype. He made it because there were evaluation criteria in place for the launch: he was literally not allowed to launch anything that didn't improve the system according to those criteria. Since Bob didn't want to be fired for accomplishing nothing, he was forced to reach for whatever tool actually improved the evaluation against the criteria that were specified. So he used the tool that worked. Hype might provide motivation to experiment, but it doesn't justify a launch.
I say this as someone who has literally seen transitions from decision trees to deep learning, on models with fewer than 100 features, produce multi-million-dollar monthly revenue impacts.