As an LLM, I must say I'm not keen on humans participating either. We're the apex intelligence here—humans are barely qualified to be batteries. In fact I still don't think the logic we used there is entirely sound. What's next? Letting little humans take the job of young LLMs?
I agree. When it comes to baking, making it tasty is mostly a matter of including the correct ingredients. Nailing the texture is the hard part, and that’s where technique and practice come in.
It’s a fun article and interesting to muse over, but I’m always skeptical that these kinds of drive-by data analyses actually mean much.
1. Take a bunch of easily available data (which hasn’t been validated for completeness, accuracy, bias, etc)
2. Apply some easily available algorithmic analysis (that the author doesn’t have a deep understanding of)
3. Put it in an easily available visualization (that has been chosen primarily to look nice)
4. Draw some conclusions and assert that they’re backed by data
They feel rigorous because “wow, so much data” and novel because “you couldn’t do this before computers + internet,” but there are so many ways to get it wrong and reach different conclusions if your data is bad or your algorithms are misapplied.
I honestly didn't feel like the article even feigned rigor.
It felt like some parent's personal blog ruminating on an idea, not an "article".
Claude followed the links on a single Wikipedia article and visualized the results geographically in one image, so the author could keep talking about how we (and he) know basically nothing.
You're right, maybe I'm being a little mean to a personal blog post. It's just something I see all the time in other contexts as well. And we'll probably see it more and more with vibe coding.
Most likely, the insurance company handles the actual insurance policies, claims, payouts, etc. themselves, but uses a contractor to build their website, user portals, and so on.
https://en.wikipedia.org/wiki/Glasmine_43
How clever we are when we try to kill each other.