> No, without evidence that assumption is false. Correlation can only be evidence for an unexplained link
I did not say it was conclusive evidence; I said it was evidence. I'm well aware that "A is correlated to B" does not prove "A causes B" or even "A causes B or B causes A", but it is a data point in favor.
Saying "We should evaluate other evidence before we decide if A causes B" is reasonable skepticism. Acting as though "A is correlated to B" has no bearing whatsoever on the question of whether A causes B is another matter.
(Not that I actually disagree with most of your post, mind you! The real message of "correlation is not causation" is "don't overrate this specific data point; it's a common mistake". But the realist shouldn't underrate it either.)
> I did not say it was conclusive evidence; I said it was evidence.
But it isn't. The null hypothesis requires us to assume that there's nothing but chance at work, and let evidence force a different conclusion. The fact that A and B appear correlated is not by itself evidence of anything other than chance.
> I'm well aware that "A is correlated to B" does not prove "A causes B" or even "A causes B or B causes A", but it is a data point in favor.
No, this is false. Without testing a hypothesis, and without a careful examination of a mechanism, a correlation tells us nothing beyond what chance alone could produce.
Here's an example selected at random from a vast literature that tries to make this point:
Title: "Creating a phony health scare with the power of statistical correlation"
Quote: "In the United Kingdom, the more mobile phone towers a county has, the more babies are born there every year. In fact, for every extra cell phone tower beyond the average number, a county will see 17.6 more babies. Is this evidence that cell phone signals have some nefarious baby-making effect on the human body? Nope. Instead, it's a simple example of why correlation and causation should never be mistaken for the same thing."
I could link to a thousand similar stories, many being mistaken for actual scientific results.
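The point generalizes: comb through enough independent variables and impressively strong correlations turn up by chance alone. A minimal sketch in Python (the helper names and the thresholds are my own, purely illustrative):

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# 1000 pairs of completely independent 10-point series -- no causal
# link anywhere, by construction.
rs = [pearson([random.random() for _ in range(10)],
              [random.random() for _ in range(10)])
      for _ in range(1000)]
strong = sum(abs(r) > 0.6 for r in rs)
print(f"{strong} of 1000 independent pairs show |r| > 0.6")
```

With short series, dozens of the thousand pairs typically clear |r| > 0.6 despite having no connection at all; that is the cell-towers-and-babies story in miniature.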
> But the realist shouldn't underrate it either.
A realist -- a scientist -- always begins by assuming the association is the result of chance (the null hypothesis), and then examines evidence that might argue for another explanation. This is why all self-respecting scientific papers include a p-value. The p-value describes how probable a result at least this extreme would be if chance alone were at work, not the probability of the hypothesis under test.
Quote: "In statistical significance testing the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true."
Translated into layman's language, the p-value is the probability of seeing a correlation at least as strong as the one observed if nothing but chance were at work.
A properly educated scientist always assumes the null hypothesis is true, i.e. that the observation arose from chance factors. She then tests this assumption with evidence.
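That procedure -- assume the null, then ask how often chance alone reproduces the observation -- can be sketched as a permutation test. This is only an illustration of the general idea, not anyone's specific method from the thread, and all names and data here are made up:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, trials=10_000, seed=0):
    """Fraction of random shufflings of ys whose correlation with xs
    is at least as extreme as the observed one -- i.e. how often
    chance alone reproduces the observed association."""
    rng = random.Random(seed)
    observed = abs(pearson(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)  # shuffling destroys any real link: the null
        if abs(pearson(xs, ys)) >= observed:
            hits += 1
    return hits / trials

rng = random.Random(1)
xs = list(range(30))
linked = [2 * x + rng.gauss(0, 3) for x in xs]  # genuinely related
unrelated = [rng.gauss(0, 3) for _ in xs]       # pure noise

p_linked = permutation_p_value(xs, linked)
p_unrelated = permutation_p_value(xs, unrelated)
print(f"p (linked)    = {p_linked:.4f}")
print(f"p (unrelated) = {p_unrelated:.4f}")
```

For the genuinely related series, shuffling almost never matches the observed correlation, so the p-value is tiny and the null becomes untenable; for the noise series, chance reproduces the observed correlation routinely, and the null stands.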