There are plenty of methods (cough report Bayesian likelihood ratios cough and publish your raw data cough) that are simple enough for even average scientists to use. They would rather use much more complicated statistics, though, because then they get to publish more papers with "significant" results.
If you can handle calculus, which most scientists take, then you can handle likelihood ratios, believe me.
But the incentives are terrible, which is quite a different thing from supposing that the average PhD is too dumb to learn good statistics if the incentives were strong.
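To back up the "simple enough" claim: here is a minimal sketch of a likelihood ratio for a toy coin-flip experiment. The data, hypotheses, and numbers are invented for illustration and are not from the article.

```python
# Made-up example: likelihood ratio for 62 heads in 100 flips,
# comparing "fair coin" (p = 0.5) against "biased coin" (p = 0.6).
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

heads, flips = 62, 100
L_null = binom_pmf(heads, flips, 0.5)   # likelihood under the fair-coin hypothesis
L_alt = binom_pmf(heads, flips, 0.6)    # likelihood under the biased-coin hypothesis

# Values well above 1 favor the biased-coin hypothesis (~17 here).
print(f"likelihood ratio = {L_alt / L_null:.1f}")
```

That's the whole calculation: two likelihoods and a division. Nothing in it is harder than the calculus most science curricula already require.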
It can be, if it's used incorrectly. This is why they said:
reviewers needed to hold studies to a minimal standard of biological plausibility
There are two "good" ways to do this (as far as I can see). 1) Come up with a biologically plausible idea and test it, using statistics to evaluate the results. 2) Find a pattern in the data and then look for a biologically plausible explanation.
The biology alone isn't enough; you need the statistics to back it up and show actual results. However, using statistics alone, in the way described in the article (looking at every test and every subgroup, etc.), is exactly what you're saying: avoiding rigorous thinking in favor of getting a result.
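As a rough illustration of why "every test and every subgroup" is a problem, here is a small simulation. The subgroup count and sample sizes are arbitrary assumptions, not anything from the article: even with no real effect anywhere, some subgroups clear p < 0.05 purely by chance.

```python
# Simulate "test every subgroup": no true effect exists anywhere, yet a
# few subgroups still come out "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_subgroups, n_per_arm = 40, 30   # arbitrary illustrative sizes

false_positives = 0
for _ in range(n_subgroups):
    treatment = rng.normal(0.0, 1.0, n_per_arm)   # drawn from the same distribution
    control = rng.normal(0.0, 1.0, n_per_arm)     # as the "treatment" group
    _, p_value = ttest_ind(treatment, control)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives}/{n_subgroups} subgroups look 'significant' with no real effect")
```

Run enough comparisons and a "result" is guaranteed, which is precisely the shortcut the article is warning about.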