But isn't that exactly why the submission should be anonymous to the reviewer? It's science, the paper should speak for itself. You don't want a reviewer to be biased by the previous accomplishments of the author. An absolute nobody can make groundbreaking and unexpected discoveries, and a Nobel prize winner can make stupid mistakes.
In subfields of physics, and I suspect math, the submitter is never anonymous. These people talk at conferences, have a list of previous works, etc., and the fields are highly specialized. So the reviewer knows with 50-95% certainty whom they are reviewing.
In many subfields, nobody even attempts to hide the submitter from the reviewers. Usually, the submitters can even guess the reviewers with high accuracy.
Inherent in the editor trying to "get the very best researchers to [review] the paper" is a likely leak of signal. (My spouse was a scientific journal editor for years; reviewers decline to review for any number of reasons, often just being too busy, and the same reviewer is often asked multiple times per year. Taking the extra effort to say "but this specific paper is from a really respected author" would be bad, but so would "but please make time to review this specific paper, for reasons that I can't tell you".)
I didn’t read the comment to mean the editor would explicitly signal anything was noteworthy about the paper, but rather they would select referees from a specific pool of experts. From that standpoint, the referee would have no insight into whether it was anything special (and they couldn’t tell if the other referees were of distinction either).
The editor is already selecting the best-matched reviewers, though, for any paper they send out for review.
They have more flexibility in how hard they push a reviewer to accept a specific review, or a specific timeline, but they still get declines from some reviewers on some papers.
I know that’s the ideal, but my original post ends with some skepticism about this claim. I’ve had more than a few papers come across my desk that were a poor fit. I try to be honest with the editors about why I reject the chance to review them. If I witness it more than a few times, they obviously aren’t being as judicious in their assignments as the ideal assumes.
When submitting papers to high-profile journals, the expectations are very high for all authors. In most cases, the editorial team can determine from the abstract whether the paper is likely to meet their standards for acceptance.
Doesn’t that just move the source of bias from the reviewer to the coordinator? Some ‘nobody’ submitting a paper would get a crapshoot of a reviewer, while a recognisable ‘somebody’ gets a well-regarded, fair reviewer.
Full anonymity may be valuable if the set of a paper's reviewers has to stay fixed throughout the review process.
If peer review worked more like other publication workflows (where documents are handed across multiple teams that review them for different reasons), I think partial anonymity (e.g. rounding authors down to a citation-count number) might actually be useful.
Basically: why can't we treat peer review like the customer service gauntlet?
- Papers must pass all levels from the level they enter up to the final level, to be accepted for publication.
- Papers get triaged to the inbox of a given level based on the citation numbers of the submitter.
- Thus, papers from people with no known previous publications go first to the level-1 reviewers, who exist purely to distinguish and filter off crankery/quackery. They're just there so that everyone else doesn't have to waste time on this. (This level is what non-academic publishing houses call the "slush pile.") However, they should be using criteria that give only false positives [treating bad papers as good] and never false negatives [treating good papers as bad]. The positives pass on to the level-2 ("normal") stream.
- Likewise, papers from pre-eminent authors are assumed not to often contain stupid obvious mistakes, and therefore, to avoid wasting the submitter's time and the time of reviewers in levels 1 through N-1, these papers get routed straight to the final level-N reviewers. This group is mostly made up of pre-eminent authors themselves, who have the highest likelihood of catching the smallest, most esoteric fatal flaws. (However, they're still using criteria that require them to be extremely critical of any obvious flaws as well; they just aren't supposed to go looking for them first, since the assumption is that they won't be there.)
- Papers from people with an average number of citations land on some middle level, getting reviewed for middling-picky stuff by middling-experienced people. From there they either get bounced back for iteration, or get handed repeatedly up the chain with those editing marks pre-picked, so that reviewers on the higher levels don't have to look for those things and can focus on the more technically difficult stuff. It's up to the people on the earlier levels to make the call of whether to bounce the paper back to the author for revision. (A toy sketch of this routing appears below.)
(Note that, under this model, no paper is ever rejected for publication; papers just get trapped in an infinite revision loop, under the premise that in theory, even a paper fatally-flawed in its premise could be ship-of-Theseus-ed during revision into an entirely different, non-flawed paper.)
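To make the routing concrete, here's a minimal Python sketch of the gauntlet as I imagine it. Everything in it is a made-up illustration: the citation thresholds, the `Paper` fields, and the `passes_review_at` placeholder all stand in for editorial judgment, not any real journal's workflow.

```python
"""Toy sketch of the citation-triaged review gauntlet described above.

All thresholds and checks here are hypothetical placeholders.
"""
from dataclasses import dataclass

# Hypothetical citation cutoffs for entry at levels 1..4 (4 = final level).
LEVEL_THRESHOLDS = [0, 100, 1_000, 10_000]
FINAL_LEVEL = len(LEVEL_THRESHOLDS)

@dataclass
class Paper:
    title: str
    author_citations: int  # authors "rounded down" to a citation count
    revisions: int = 0

def entry_level(paper: Paper) -> int:
    """Unknown authors enter at level 1 (the slush pile);
    pre-eminent authors skip straight to the final level."""
    level = 1
    for i, threshold in enumerate(LEVEL_THRESHOLDS, start=1):
        if paper.author_citations >= threshold:
            level = i
    return level

def passes_review_at(paper: Paper, level: int) -> bool:
    """Placeholder for a human review: pretend a paper arrives able to
    clear its entry level, and each revision fixes one further level's
    worth of issues."""
    return entry_level(paper) + paper.revisions >= level

def run_gauntlet(paper: Paper) -> Paper:
    """A paper must pass every level from its entry point to the final
    one. No paper is ever rejected: a failed check bounces it back to
    the author for revision, re-entering at the level that flagged it."""
    level = entry_level(paper)
    while level <= FINAL_LEVEL:
        if passes_review_at(paper, level):
            level += 1            # hand it up the chain, marks pre-picked
        else:
            paper.revisions += 1  # bounce back for another revision round
    return paper

print(run_gauntlet(Paper("Unknown author's paper", author_citations=0)))
print(run_gauntlet(Paper("Pre-eminent author's paper", author_citations=50_000)))
```

Run as-is, the unknown author's paper takes several bounce-and-revise rounds to climb from level 1, while the highly cited one clears the final level immediately; what sensible thresholds and per-level checks would look like in practice is exactly the open question.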
You could compare this to a software toolchain: first your code is "reviewed" by the lexer; then by the parser; then by macro expansion; then by any static-analysis passes; then by any semantic-model transformers run by the optimizer. Your submission can fail out as invalid at any step. More advanced / low-level code (hand-written assembler) skips the earlier steps entirely, but that also means talking straight to something that expects pre-picked output and will give you very terse, annoyed-sounding, unhelpful errors if it does encounter a flaw that would have been caught earlier in the toolchain for HLL code.
I agree with a lot of this premise but this gave me pause:
>under this model, no paper is ever rejected for publication; papers just get trapped in an infinite revision loop
This could mean a viable paper never gets published. Most journals require that you submit to only one journal at a time, so if a paper didn’t meet the criteria for whatever reason (even a bad scope fit), it would never get a chance at a better fit somewhere else.
Typically, papers are reviewed by 1 to 3 reviewers. I don't think you realistically can have more than two levels -- the editor as the first line, and then one layer of reviewers.
You can't really blind reviewers to the author names. First, the reviewers must be able to recognize whether there is a conflict of interest, and second, especially for papers about experiments, you know from the experiment name who the authors would be.