The cool thing about it is that the core is really just one page.
There’s a page in there with a list of types of tests and their respective r values: a number between zero and one that describes how well a given type of test predicts job performance, based on the gigantic meta-analysis the researchers ran. Zero means there’s no relationship between the test and job performance; one means the test predicts job performance perfectly.
Generally you want something better than .3 for high-stakes things like jobs. Education and experience sits at… .11 or so. It’s pretty bad. By contrast, skills tests do really well; depending on the type, they can go over .4. That’s a pretty big benefit if you’re hiring lots of people.
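If those r values feel abstract, here’s a minimal sketch of what the gap between .11 and .4 means in practice. Everything here is synthetic and illustrative: the data is made up, and `noisy_predictor` is just a hypothetical helper that manufactures a score with a chosen correlation to latent performance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # hypothetical applicant pool

# Latent "true" job performance, standardized (mean 0, sd 1).
performance = rng.standard_normal(n)

def noisy_predictor(signal, r, rng):
    """Build a predictor whose correlation with `signal` is ~r."""
    noise = rng.standard_normal(len(signal))
    return r * signal + np.sqrt(1 - r**2) * noise

skills_test = noisy_predictor(performance, 0.4, rng)    # ~.4 validity
credentials = noisy_predictor(performance, 0.11, rng)   # ~.11 validity

for name, predictor in [("skills test", skills_test),
                        ("credentials", credentials)]:
    r = np.corrcoef(predictor, performance)[0, 1]
    # Hire the top 10% by predictor score, then check how much better
    # those hires perform than the pool average (0 by construction).
    top = performance[predictor >= np.quantile(predictor, 0.9)]
    print(f"{name}: r = {r:.2f}, "
          f"mean performance of top-decile hires = {top.mean():+.2f}")
```

The exact numbers jitter with the random seed, but the gap is the point: selecting on the higher-r predictor reliably surfaces noticeably stronger hires than selecting on the ~.11 one, and that advantage compounds when you’re hiring at volume.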
That said, it can be very hard to convince people that “just having a conversation with someone” isn’t all that predictive at scale. Industry calls that an “unstructured interview,” and unstructured interviews are terrible vectors for unconscious or conscious bias: “Hey, you went to the same school as me…” and now that person is viewed favorably.
Seriously, this stuff is WELL STUDIED, but for some reason the MBA lizards never care. It’s maddening.
Why would home gardeners optimize for yield and cost effectiveness? They can’t deploy automation or economies of scale.
You garden at home because you enjoy the flavor, freshness, and variety. Those are the perks. Miss me with those mealy, flavorless grocery store tomatoes.