Hacker News

A good example is that the background noise can apparently have an effect on the mice, including things that people might not even notice like a computer fan running in the corner of the room or whatever. Another example is timing, apparently you can get bad results if you're not running the experiments at the same time each day. A third example is the way the mice are selected for the experiment, apparently you can get better results if you practice picking up the mice and injecting them with saline each day for a week before you start injecting them with the actual drug. Apparently there are even people who will sing to their mice because they think it makes a difference. Not to mention things like whether your mice are waking up to natural light or an artificial light being turned on, how much ability they have to exercise, whether you are petting them each day, how much you're feeding them and how nutritious the food you're giving them is, etc.

"Cultural experiences" was probably the wrong term, but basically what's happening is that large portions of the protocol, which are apparently essential to obtaining the results, can't actually be written about in the paper, meaning the results can't actually be reproduced. (Although if you actually look at the Rat Park experiments and think of each environmental factor in terms of the larger whole, perhaps culture is the best metaphor after all.)

And even if you completely standardized the handling of the mice, there are all sorts of reasons why the mouse model is still dubious beyond the many reasons discussed by the article. For example, some researchers have reported a 'tall left-handed blonde effect', where the results can only be duplicated by tall left-handed blondes. I think this is why historically such a large percentage of our medicine is derived from plants originally used by shamans or that otherwise had a historical ethnobotanical use, and why even today so many of the most promising compounds are things that come to the attention of scientists based on anecdotal evidence. (E.g. MDMA for PTSD, LSD for cluster headaches, cannabinoids for treating cancer, etc.)



Very interesting, thanks.

My problem with this article and sentiment is that there is no good alternative suggested. I don't know enough about mouse research standards, but I would think that all these details should be targets of further research to understand the system better and make better protocols to control it, instead of the marathon of hand-wringing that I see in this article - most of it irrelevant to the model itself. Out of all the things discussed in the article, I literally cannot identify any actionable items. It's just a big discussion of various aspects of modern biology, framed in a contrarian manner.


I felt otherwise. I believe the author not only pointed out alternatives but also talked with the people trying to do things differently. Namely, the course of action is simply to try different and more varied things.

The whole problem being decried is that of following a rigid, fixed formula that tries to generalize to all cases: using the mouse model for everything, even those things for which it is not the best fit.

In software terms this is akin to Netflix's Chaos Monkey. The idea is that one shouldn't only check for failure conditions that have been specified in advance, but should also create a framework that deliberately looks for faults in areas outside of the expected.
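To make the analogy concrete, here is a minimal toy sketch of the chaos-injection idea (not Netflix's actual implementation; the class and function names here are invented for illustration). A harness randomly injects failures into otherwise-healthy calls, so that callers are forced to survive faults nobody enumerated in advance:

```python
import random

class ChaosWrapper:
    """Toy chaos-monkey-style harness: randomly injects failures into
    otherwise-successful operations, so callers must tolerate faults
    they did not explicitly anticipate."""

    def __init__(self, failure_rate=0.2, seed=None):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seedable for reproducible test runs

    def call(self, fn, *args, **kwargs):
        # With probability failure_rate, fail instead of running fn at all.
        if self.rng.random() < self.failure_rate:
            raise RuntimeError("injected fault")
        return fn(*args, **kwargs)

def resilient_fetch(chaos, fetch, retries=3):
    """A caller written defensively: it retries on any failure,
    not just the specific ones it was told to expect."""
    for _ in range(retries):
        try:
            return chaos.call(fetch)
        except RuntimeError:
            continue  # back off and try again
    return None  # give up gracefully rather than crash
```

The parallel to the mouse-model critique: instead of validating only against a fixed checklist of known conditions, you build variation into the environment itself and see what still holds up.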


I don't think this is like work where you should always go to the boss with a solution to the problem you're presenting to him.

Half of science is finding the problems in the present model; then everyone runs around trying to find a solution to save it. They may even end up throwing the idea out and adopting a new one.



