I don’t want to be in the business of defending LLMs, but it’s hard to imagine this take aging well once you step back and look at the pace of advancement. In the realm of “real math and science,” o1 has gone from roughly 0% to 50% on AIME. A year ago, LLMs could only write little functions, not much better than searching StackOverflow. Today they can write thousands of lines of code that work together, with minimal supervision.
I’m sure this tech continues to have many limitations, but every piece of trajectory evidence we have points in the same direction. I just think you should be prepared for the ratio of “real” work vs. LLM-capable work to become increasingly small.
I can probably climb a tree faster than I can build a rocket. But only one will get me all the way to the moon. Don't confuse local optima for global ones.