I get the feeling that newer generations of CAPTCHAs will no longer be trying to filter out bots from entering human-made sites, but humans from entering bot-made sites.
To be honest, it already feels like a resource denial effort. Reminds me of Paul Virilio’s description of systems that deliberately inhibit human speed.
CAPTCHA is just accidentally useful punishment for evading surveillance capitalism while casually browsing the web. There are sites where getting through a CAPTCHA is effectively impossible if you are using privacy controls; you just get stuck in an endless loop of CAPTCHA completion. But if you're "logged in" to the surveillance network, there's zero friction whatsoever.
How good are bots at simulating human behavioral patterns these days?
On my back burner I have a crowd-sourced data app, and I keep wondering how I'm going to keep bots out. Ideas like shadow banning, throttling, or an approval queue for everything except known 'real' humans and new users that seem relatively human keep popping up (e.g., two approval queues: one for 'probably a bot' and one for 'probably not a bot').
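The two-queue triage could be sketched something like this. This is a minimal, hypothetical illustration: the `bot_score` heuristic, its thresholds, and the `Submission` fields are all placeholder assumptions, not a real detection method; an actual app would score behavioral signals (timing, navigation patterns, etc.):

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Submission:
    # Hypothetical signals; a real app would collect richer behavioral data.
    user_id: str
    account_age_days: int
    requests_per_minute: float

def bot_score(s: Submission) -> float:
    """Toy heuristic: newer accounts and faster request rates look more bot-like."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.4
    if s.requests_per_minute > 30:
        score += 0.5
    return min(score, 1.0)

@dataclass
class Triage:
    # Two approval queues, as described above: one per suspected class.
    probably_bot: deque = field(default_factory=deque)
    probably_human: deque = field(default_factory=deque)

    def route(self, s: Submission, threshold: float = 0.5) -> str:
        """Place a submission in one of the two queues and report which."""
        name = "probably_bot" if bot_score(s) >= threshold else "probably_human"
        getattr(self, name).append(s)
        return name

triage = Triage()
triage.route(Submission("u1", account_age_days=2, requests_per_minute=60))   # -> probably_bot
triage.route(Submission("u2", account_age_days=365, requests_per_minute=1))  # -> probably_human
```

Both queues would still need human review before anything is published; the score only decides how much scrutiny a submission gets.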
Both of those are prone to being broken by a reasonably good AI, and verifying either would likely be too computationally heavy to be cost-effective (and would probably involve AI itself).