Something that strikes me in reading articles like this is that the dystopian part often seems to come from thinking about this:
p(job_capable | not_interview_capable)
That is, it's crazy that an interview could miss so many people qualified for the job.
However, I wonder if oftentimes companies are aiming for:
p(job_capable | interview_capable)
If p(job_capable | interview_capable) is high, and p(interview_capable) is pretty good also, then the company will probably get what it's looking for.
This means the author can be right that the test did a bad job of measuring their job readiness, and the test can still be a reasonable instrument from the company's side: it doesn't have to measure everyone's job fitness accurately (whether there are nasty side effects is another big issue).
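To make the asymmetry concrete, here's a back-of-the-envelope sketch in Python. The numbers are made up purely for illustration (the 40% base rate and the pass probabilities are assumptions, not anything from the article):

    # Made-up numbers: suppose 40% of applicants are job_capable, and the
    # interview passes 50% of capable applicants but only 3% of incapable ones.
    p_capable = 0.40
    p_pass_given_capable = 0.50
    p_pass_given_incapable = 0.03

    # p(interview_capable): the overall pass rate
    p_pass = (p_capable * p_pass_given_capable
              + (1 - p_capable) * p_pass_given_incapable)

    # p(job_capable | interview_capable): what the company optimizes for
    p_capable_given_pass = p_capable * p_pass_given_capable / p_pass

    # p(job_capable | not_interview_capable): what rejected candidates feel
    p_capable_given_fail = p_capable * (1 - p_pass_given_capable) / (1 - p_pass)

    print(f"p(pass)           = {p_pass:.2f}")               # 0.22
    print(f"p(capable | pass) = {p_capable_given_pass:.2f}")  # 0.92
    print(f"p(capable | fail) = {p_capable_given_fail:.2f}")  # 0.26

With these numbers, 92% of hires work out even though half of all capable applicants were turned away (26% of the rejected pool is capable). The interview really is "missing so many qualified people" and still doing its job from the company's perspective.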
The simple explanation makes sense, but I'm realizing there's a subtle point here that I should have been clearer on.
They might not be filtering out the bad at the expense of the good, but rather filtering out some of the good in the name of saving the money it would cost to develop / administer more general assessments.
That's the p(interview_capable) piece, whereas the trade-off you mention is the conditional probability (also important!).
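To put numbers on that distinction (again, purely hypothetical values): tightening the filter trades p(interview_capable) against p(job_capable | interview_capable):

    # Hypothetical numbers: a stricter interview raises precision,
    # p(capable | pass), but shrinks the pass rate, p(pass), i.e. it
    # rejects more of the good candidates along the way.
    p_capable = 0.40
    for sensitivity, false_pass_rate in [(0.9, 0.20), (0.7, 0.08), (0.5, 0.03)]:
        p_pass = p_capable * sensitivity + (1 - p_capable) * false_pass_rate
        precision = p_capable * sensitivity / p_pass
        print(f"p(pass) = {p_pass:.2f}, p(capable | pass) = {precision:.2f}")
    # p(pass) = 0.48, p(capable | pass) = 0.75
    # p(pass) = 0.33, p(capable | pass) = 0.85
    # p(pass) = 0.22, p(capable | pass) = 0.92

A cheap, blunt filter sits at the strict end of that curve: high precision, low pass rate, and the rejected-but-capable candidates pay for the savings.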
I don't know about the US, but where I live you're hired with a 1-3 month "trial period" during which the company can fire you at any time if you turn out to be unfit for the job. This is exactly your proposal.