
I do computer science research and publish regularly (in conferences, not journals, since that's how computer science mostly works -- you write a paper, look for the soonest upcoming relevant conference deadline, submit there, and get a response 2-3 months later). I think discussions about peer review often fail to explain all of the things peer review can accomplish:

1) Verifying that work is correct, assuming that the author is honest (e.g., you take their data at face value)

2) Verifying that work is correct, assuming that the author is malicious (e.g., you scrutinize their data to see if it's fabricated)

3) Certifying that the paper is "interesting" (universities, grant-making bodies, and other bureaucratic entities want some evidence that the researcher they're funding is good, and somebody has to hand out the gold stars)

It takes time for even an expert to do 1), and it takes still more time to do 2). There aren't really good incentives to do either, beyond caring about your field or wanting to build on the thing you're reading. 3) can be done more quickly, but it's subjective, and a world where things are only assessed for correctness and not interesting-ness is a world where external funding bodies rely on other weird proxies, like citation metrics, to figure out who's good, and it's not clear to me that that's better.
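For concreteness, one such proxy is the h-index: the largest h such that a researcher has h papers with at least h citations each. A minimal sketch in Python (the citation counts here are invented purely for illustration):

    def h_index(citations):
        # Sort descending; h is the largest 1-based rank i at which
        # the i-th most-cited paper still has at least i citations.
        ranked = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    print(h_index([10, 8, 5, 4, 3]))  # -> 4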

My perception from computer science is that it should be harder to submit papers, because there are too many authors who simply rarely produce good papers and are clogging up the conferences with endless resubmissions until they get reviewers lazy enough to all say "weak accept".



Also, sometimes reviewers point out interesting ideas you didn't think of, because you always have tunnel vision by the time you submit a paper.


> My perception from computer science is that it should be harder to submit papers, because there are too many authors who simply rarely produce good papers and are clogging up the conferences with endless resubmissions until they get reviewers lazy enough to all say "weak accept".

It seems like the root issue here is a pathological incentive to publish for career advancement?


That's certainly a driver for much of the pathology. However, I don't really see how that can be changed: I haven't seen any good proposals for what could reasonably replace the current bibliometric indicators for the various funding bodies and institutions. They do need some 'outsourced' objective metric, because they are neither capable of nor willing to do an in-depth review of each individual's work, and they won't trust the self-evaluation of researchers or their home institutions.



