It's a good thing that science can change. This is what distinguishes science from dogma.
The problem is not these supposed 'reversals', but rather how this information is being reported. Pop-science media report modest studies as absolute truths to catch eyeballs. When a single study merely suggests a direction for further research, it's emblazoned as fact in the tabloids.
In the example you identified, "Fatty food is bad - no it's good", the issue is not the studies flip-flopping. Rather, it's a problem of regular people taking what are likely modest findings much too far and integrating them wholesale into their daily lives, without ever reading the study or understanding its context and scope. A study indicating that, say, there may be health problems associated with consuming excess saturated fat seems to compel people to follow no-fat diets. So I'd argue that the bigger problem is people reading oversimplified reports of scientific information.
I mean, I'm not terribly familiar with this field, but the study itself indicates that whether high or low serotonin levels were contributory was "a matter of debate" and that "only a few studies have used molecular neuroimaging to examine serotonin dysfunction in SAD directly."[1] So this doesn't appear to be a reversal at all, but rather a study which helps clarify the role of serotonin in SAD.
[1] http://archpsyc.jamanetwork.com/article.aspx?articleid=23197...
> "The problem is not these supposed 'reversals', but rather how this information is being reported."
The reporting is terrible, but I think there is a problem with the research itself. There's little apparent awareness by many researchers of the limitations of single studies, perhaps because they have been heavily indoctrinated re: the significance of p-values, etc., or perhaps because their careers depend on it.
It's the researchers themselves writing "We show that ____", as if their single study with 20 participants all of whom are starving college freshmen CONCLUSIVELY proves such-and-such completely bizarre point which all previous observations have flatly contradicted. Then the media gobble it up, with all the authority of "science" to back them up, and when further research refutes the flawed study, nobody pays attention.
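To put a rough number on that: here's a minimal sketch (my own illustration in Python, with made-up groups of 20 drawn from the same population rather than anything from a real study) of how often a single small study "finds" a significant effect when there is nothing to find:

```python
import random
import statistics

# Hypothetical simulation: both "groups" of 20 are drawn from the SAME
# distribution, so any "significant" difference is a false positive.
random.seed(0)

def one_study(n=20):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic computed by hand to avoid extra dependencies.
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    t = (mean_a - mean_b) / ((var_a / n + var_b / n) ** 0.5)
    return abs(t) > 2.02  # roughly the two-sided 5% cutoff at ~38 df

false_positives = sum(one_study() for _ in range(10_000))
print(f"'Significant' results from pure noise: {false_positives / 10_000:.1%}")
# Expect roughly 5%: about one lab in twenty gets a publishable-looking
# effect even when there is literally nothing there.
```

That 5% is the best case; add a few outcome measures or analysis choices per study and the odds of at least one headline-worthy "finding" go up fast.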
It's not just pop media though. There's a general attitude among many people I know who have advanced degrees in science that our current understanding is close to perfect, and that every correlation between two variables validates the entire theory that motivated the experiment. There's also a general tendency to believe headlines rather than look for authoritative or in-depth sources. The fact that pop media reports the way it does is a side effect of these underlying problems IMO. We ought to value accuracy in statements and healthy skepticism more than we do, I think.
> So I'd argue that the bigger problem is people reading oversimplified reports of scientific information.
And what about the AHA[1] and other policy makers and shapers? What leads them to make recommendations based on weakly established conclusions?
I think the pressure to deliver and the lack of generalist knowledge are somehow involved here. In the case of food and drugs, I would also blame elitism.
I know nothing of the AHA's methods or qualifications, but in my view bad policies purportedly based on science usually come from a lack of sound methods and qualifications, in addition to a lack of scientific literacy.
Alternatively, from a predefined goal, i.e. a political agenda.
I'm curious: in what way would you blame elitism in those cases?
To express it as a spectrum: dogma has an almost infinite resistance to change, while science tries to bring that resistance to zero, but sometimes retains too much (IIRC, people became emotional about Einstein's theory).
Dogma frequently undergoes change, perhaps to a greater degree than science (which is generally the refinement of earlier theories and methods). See, for example, the Protestant Reformation.