He mentions that there is a conflict of interest in recommending peer reviewers. While I agree this can be abused, I've often run into cargo-cult science in AI: work that is valid, novel, and in my opinion advances the field gets rejected because it doesn't align with how past work defined the problem, especially at conferences where I cannot recommend more senior scientists as reviewers. Being able to recommend people could go a long way toward addressing this.
For example, in continual deep learning people often use tiny datasets in which they keep a small rehearsal memory and incrementally learn classes, and the resulting algorithms cannot handle other incremental-learning distributions. It's been very hard to publish work that instead works well for arbitrary sequences of distributions, eliminates the memory constraint (which mostly doesn't matter in the real world), and scales to real datasets. We have achieved huge reductions in training time with no loss in predictive ability, but we can't seem to get any of these papers published because the community says the approach is too unorthodox. Yet it is far more efficient than the periodic retraining done in industry, which is exactly the application industry folks always tell me they want from continual learning.
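To make the standard protocol I'm criticizing concrete, here is a minimal sketch of the class-incremental setup with a fixed-size rehearsal buffer. All names are illustrative (not from any specific benchmark or library), and the buffer update shown is plain reservoir sampling, one common choice:

```python
import random

def class_incremental_tasks(num_classes, classes_per_task):
    """Partition class labels into the ordered tasks used in
    class-incremental benchmarks (e.g. 10 classes -> 5 tasks of 2)."""
    labels = list(range(num_classes))
    return [labels[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]

def update_rehearsal_memory(memory, stream, capacity, seen, rng):
    """Reservoir-sampling update of a small fixed-size replay buffer:
    the memory constraint these benchmarks impose. `seen` is the count
    of examples observed so far across the whole stream."""
    for example in stream:
        seen += 1
        if len(memory) < capacity:
            memory.append(example)
        else:
            j = rng.randrange(seen)
            if j < capacity:
                memory[j] = example
    return memory, seen

# Typical benchmark: 10 classes arrive as 5 tasks of 2 classes each,
# and only `capacity` past examples may be replayed between tasks.
tasks = class_incremental_tasks(10, 2)
memory, seen = update_rehearsal_memory([], range(100), 10, 0, random.Random(0))
```

The point of the sketch is that both the task ordering and the memory budget are artifacts of the benchmark design; an "arbitrary distribution" stream would not arrive as neat, disjoint class groups at all.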
The confusing thing is that when I give talks or serve on panels, folks always thank me and tell me they think this is the right direction and that they found it inspiring.
In my field the review system is badly overtaxed, with too many inexperienced reviewers who struggle to take a broad perspective, so I think submitting to more venues would probably make things worse.
I'm not sure the bias is due only to inexperienced reviewers. For example, even at a specialized venue like CoLLAs (I also work in CL), where you could send more esoteric research, most people still do the usual rehearsal + class-incremental work. Most experienced researchers are also quite happy with the current state of the field: they may agree with your view, but their actual research and vision are much more aligned with "popular" CL research.
In general, the deep learning field tends to oversimplify practical problems and experimental design while overcomplicating methods. This happens at every level.