
I've been personally involved in validity testing for graphic designers, and while the validity coefficients were reduced, they were still of practical significance and had incremental validity over cognitive ability testing (which is always the best predictor, but tends to show racial bias). I will see if I can find any published research, as I've not seen any and am now curious myself.


Is this an accurate description of what you mean by a cognitive ability test? http://en.wikipedia.org/wiki/Cognitive_Abilities_Test

If so, under which circumstances are they used? For a graphic designer, the natural "test" would be for fellow graphic designers and potential managers to look at an applicant's work samples, or to ask them to produce one. This method directly tests the applicant's ability to do the type of job, although there is no objective metric. You are relying on people's subjective assessment. How do cognitive ability tests compare?


Yes, in essence. When I refer to CATs I'm talking about measures testing g (http://en.wikipedia.org/wiki/G_intelligence). And I know it's hard to believe, but g-centric tests like cognitive ability tests do a better job than other seemingly more relevant selection measures like work sample tests and assessment centers. The benefit of work sample tests, assessment centers, integrity tests, etc. is that their validity is decent and that a significant portion of it is independent of g-centric measures.
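"Incremental validity" here is just the gain in explained variance (delta R^2) when a second predictor is added on top of g. A minimal sketch with simulated data (the effect sizes below are made up for illustration, not taken from any study):

```python
# Incremental validity sketch: how much a work-sample score adds beyond g.
# All data is simulated for illustration; coefficients are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                              # general cognitive ability
work = 0.4 * g + rng.normal(size=n)                 # work sample, partly independent of g
perf = 0.5 * g + 0.3 * work + rng.normal(size=n)    # job performance

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_g = r_squared([g], perf)            # g alone
r2_both = r_squared([g, work], perf)   # g plus the work sample
print(f"R^2 (g only):     {r2_g:.3f}")
print(f"R^2 (g + sample): {r2_both:.3f}")
print(f"incremental validity (delta R^2): {r2_both - r2_g:.3f}")
```

Because the work-sample score is only partly determined by g, it explains some performance variance that g does not, so delta R^2 comes out positive.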

Here is a good article you can read on the subject: http://www.unc.edu/~nielsen/soci708/cdocs/Schmidt_Hunter_200...


The difficulty I have with that article is that I don't know how the jobs in these studies map to engineering or research jobs. I'm thinking of jobs where one has accumulated close to a decade's worth of knowledge before starting.

Related to this, I think it's important to consider that this correlation between g and job performance is conditioned on the fact that the person applied for the job. That sounds trivially true at first, but it means that the applicant felt like they were competent for the job (in the best case; in the worst case, it meant they felt that they had a chance of appearing competent at the job). In other words, what we're saying is, "Of the people who thought they could do the job, the smartest ones tended to do the best."

But if our candidate pool was everyone, I'm skeptical that g would still hold as a good predictor. I think I'm a bright guy, but I'm pretty sure I'd make a terrible nuclear engineer. And with that in mind, we may need to keep the non-g related selection around to prevent such a situation.


You've made a good comment. I can't specifically point you to a study with research or engineering (though I know there have been some that involved academic research performance as an outcome). The finding tends to be that 1) g is more, not less, important for jobs with higher complexity and 2) job knowledge acts as a mediator of the relationship between g and work performance.

You are right to think that the results would be different if the test were given to the general population. It's an academic consideration that tends to resolve itself in the field.
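The statistical name for this is range restriction: if a correlation is estimated only among people who self-selected (or were selected) on the predictor, the observed correlation understates the population correlation. A simulated sketch (all numbers illustrative):

```python
# Range restriction sketch: correlations estimated only among a selected
# subgroup understate the full-population correlation. Simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
g = rng.normal(size=n)
perf = 0.5 * g + rng.normal(size=n)   # population correlation ~0.45

full_r = np.corrcoef(g, perf)[0, 1]
top = g > np.quantile(g, 0.8)         # suppose only the top 20% on g apply
restricted_r = np.corrcoef(g[top], perf[top])[0, 1]
print(f"full-population r: {full_r:.2f}")
print(f"restricted r:      {restricted_r:.2f}")
```

Selecting on g shrinks its variance in the observed pool, which mechanically shrinks the observed correlation even though the underlying g-performance relationship is unchanged.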

And you'll never hear me, or anyone else, suggest that g should be the only predictor, for just the reason you describe. Biographical data (e.g., years of experience, work history) is a much better first hurdle.


The Wikipedia article talks a lot about correlation between performance in different subjects in school.

I've also read that two of the strongest predictors of performance in school are your parents' performance in school, and (after controlling for that) your parents' income.

Do studies of general intelligence usually control for the impact of parents' education and income?


> cognitive ability testing (which is always the best predictor, but tends to show racial bias).

I'm intrigued: is the "bias" because the test is unfair to one or more racial sub-groups, or because the test is "fair" and that is how they actually perform, or is it a language thing?

I do reasonably well on standardised IQ tests but I suspect if I did a German one I might struggle.


It's a bit of a mystery, truly, and bias can mean different things (e.g., slope vs. intercept predictive bias). Note also that predictive bias is not the same thing as mean subgroup differences (e.g., mean score differences of White vs. Black candidates).
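The slope/intercept distinction can be shown with a common regression picture: intercept bias means that at the same test score, one group's actual performance is systematically under- or over-predicted; slope bias means the score-performance relationship itself differs by group. A simulated sketch (data and effect sizes are made up, and only intercept bias is built in):

```python
# Slope vs. intercept predictive bias, on simulated data. We build in an
# intercept difference only, then fit a separate line per group to see it.
import numpy as np

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
score = rng.normal(size=n)

# Same slope for both groups, but group B's performance is shifted up:
perf = 0.6 * score + 0.3 * group + rng.normal(scale=0.5, size=n)

def fit_line(x, y):
    """Least-squares line; np.polyfit returns (slope, intercept) for degree 1."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

sA = fit_line(score[group == 0], perf[group == 0])
sB = fit_line(score[group == 1], perf[group == 1])
print(f"group A: slope={sA[0]:.2f}, intercept={sA[1]:.2f}")
print(f"group B: slope={sB[0]:.2f}, intercept={sB[1]:.2f}")
# Similar slopes, different intercepts: intercept bias, not slope bias.
```

In this setup a single common regression line would over-predict group A and under-predict group B at every score, which is the intercept form of predictive bias.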

This is a quick overview of the topic: http://www.siop.org/_Principles/pages31to34.pdf

An interesting thing is that predictive bias is reduced for open-response questions vs. multiple-choice tests. It's an indication that there is more at work than just subgroup differences.


When my mother was doing sociology at Uni I read one of her texts (as you do), and it had an example of an IQ test being flawed: they gave the same test to two different groups of children and found that the poorer children (grouped as "working class" in the study) consistently performed less well.

One of the people looking at the results then looked at the breakdown of questions by group and noticed immediately that questions like "The cup goes on the a) saucer b) floor c) table d) shelf" were consistently "wrong" (the correct answer was a) for the poorer group, at which point he realised that working-class children drink tea from mugs, and saucers were a middle-class thing.

The story might be apocryphal, but it's stuck with me since I was 12-13, and it comes to mind whenever I run into any kind of standardised testing/results.


The latter. The races perform differently on "fair" cognitive tests.

How do we know the tests are fair? For a given test score, life outcomes like income and criminal conviction rate are the same.



