From the preprint: 100 input dimensions is considered "high", and most of the problems studied have 5 or fewer. This is typical of the physics-inspired settings I've seen in ML. The next step would be demonstrating the method on MNIST, which, at 784 dimensions, is tiny by modern standards.
In actual business processes there are lots of ML problems with fewer than 100 input dimensions. But for most of them, decision trees are still competitive with neural networks, or even outperform them.
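To make that concrete, here's a minimal sketch of the kind of head-to-head comparison I mean, on a low-dimensional public dataset with default-ish hyperparameters (the dataset and settings are illustrative assumptions, not a benchmark):

```python
# Sketch: tree ensemble vs. small neural net on an 8-dimensional tabular problem.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)  # only 8 input dimensions
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

trees = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
).fit(X_train, y_train)

print("trees R^2:", trees.score(X_test, y_test))
print("net   R^2:", net.score(X_test, y_test))
```

On problems like this the gap between the two is usually small, which is the point.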
The aid to explainability seems at least somewhat compelling. Understanding what a random forest did isn't always easy, and if what you want isn't the model itself but a closed-form expression of what the model does, this could be quite useful. That's nice when a hundred input dimensions interact nonlinearly in a million ways. More likely, though, I'd use it when I don't want to find a pencil and derive the closed form of what I'm trying to do myself.
Competent companies tend to put a lot of effort into building data analysis tools. There is often an A/B or QRT framework in place that allows two models to be deployed side by side, for example a new deep learning model against the old rule-based system. Model performance is already tracked through many offline and online metrics, so by combining those with the experiment results one can start to quantify the impact of model performance on revenue. People can and do say things like "if this model is x% more accurate, that translates to $y million in monthly revenue" with great confidence.
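For what it's worth, the arithmetic behind statements like that isn't exotic. Here's a rough sketch of how an A/B result gets turned into a revenue number (every figure below is made up for illustration):

```python
# Sketch: estimate per-user revenue lift from an A/B test and scale it up.
import numpy as np

rng = np.random.default_rng(0)
control = rng.gamma(2.0, 5.0, size=100_000)    # per-user revenue, old model (simulated)
treatment = rng.gamma(2.0, 5.1, size=100_000)  # per-user revenue, new model (simulated)

lift = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / treatment.size +
             control.var(ddof=1) / control.size)
ci = (lift - 1.96 * se, lift + 1.96 * se)      # rough 95% confidence interval

monthly_users = 10_000_000                     # assumed traffic
print(f"per-user lift: {lift:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
print(f"implied monthly revenue impact: ${lift * monthly_users:,.0f}")
```

Real frameworks are fancier (stratification, variance reduction, guardrail metrics), but the chain from experiment to dollar figure looks essentially like this.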
Let's call someone working at such a company Bob.
A restatement of your claim is that Bob pushed a model live because of hype rather than because he could justify his promotion by pointing to the millions of dollars in increased revenue his switch produced. Bob, of course, did not make his decision based on hype. He made it because there were evaluation criteria in place for the launch: he was literally not allowed to launch anything that didn't improve the system according to those criteria. Since Bob didn't want to be fired for doing nothing, he used the tool that actually improved the specified evaluation metrics. Hype might provide the motivation to experiment, but it doesn't justify a launch.
I say this as someone who has literally seen transitions from decision trees to deep learning models with fewer than 100 features produce multi-million dollar monthly revenue impacts.