> A professor like my Condensed Matter Physics professor would go on and on about 3 fundamental causes and then quiz students about the 5 fundamental causes, giving a zero to anyone who couldn't think of all the base 3, then grading the remaining students who thought creatively of 2 additional causes against each other on a bell curve. :)
To me, this sounds like a plausible argument against such tests.
The issue of creatively grading responses to an ill-defined question often pops up in discussions here about interview practices. In those discussions, someone will typically say “I don’t do a generic whiteboard interview. Instead I do [idiosyncratic thing x]. It really gives me amazing insight into the candidate.”
Then someone else says “Yeah right, a weird test with unclear metrics just gives you a big empty space to fill in with all your biases and pick someone who answers the way you would”.
Of course, both sides are exaggerated here. But it’s not clear to me that “creative” tests are necessarily any better than the standardized ones they replace.