Unusual applications of Bayesian reasoning [pdf] (albany.edu)
64 points by gwern on June 22, 2014 | 20 comments


ET Jaynes is the best! Read the whole book!

http://bayes.wustl.edu/etj/prob/book.pdf


It's certainly worth looking at. I personally find Jaynes's acerbic commentary entertaining, although I imagine it grates for many. His overall view of probability is satisfyingly coherent, but I do not consider myself sufficiently expert to assess whether it is meaningfully better than the alternatives. And there are places where I believe he is just wrong, e.g. he seems to reject Bell's inequality and view quantum probability as just another case of limited information.

There are also times, like in the chapter linked in the parent, where his zeal is bothersome. He begins the discussion of ESP by saying it would be too dogmatic to assign a probability of 0 to ESP. But, when faced with the evidence, he just throws in a bunch of other possible hypotheses that can explain it:

Therefore, this kind of experiment can never convince me of the reality of Mrs. Stewart’s ESP; not because I assert Pf = 0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.

Of course, the choice of priors for those other hypotheses was subjective, and there's no limit on how many other hypotheses one might add to explain away unpleasant data. This strikes me as more rationalizing than rationalist.


Really? Do you actually think that Jaynes is being unreasonable when he assigns ESP a prior that is lower than "has some sort of trick" or some other thing that generally turns out to be the explanatory factor for a magician?

He's saying that ESP is an unlikely explanation. He's saying that it is Probably Something Else. The experimental data cannot distinguish them. That's why it's not compelling. It has very little to do with rationalization. It's a terrible test.


No, clearly the underlying mechanism that he describes is good. However, as he admits in the passage I quote, he is so unconvinced by the possibility of ESP that no tests of this sort could ever convince him. The issue isn't just that naive tests of ESP are easily cheated, either, because you can apply the same logic to more stringently controlled tests.

It just seems more honest to me to admit up front in this situation that you cannot be convinced of ESP (give it a prior probability of 0) instead of playing these games to essentially shift the "dogmatism" onto the choice of alternative hypotheses and their prior probabilities (which seem chosen to enforce a posterior probability of ESP of ~0).


Look, I'll explain it really simply. He's saying it has to be > 0, because he is unwilling to rule out ESP as a logical possibility. If he assigned it 0, then no matter how good the experimental design was, and no matter what the test was, he would be unmovable from this position. 0 is forbidden if you want to accept it as a possibility. This is why he says this. He is also being honest about his appraisal of the general likelihood of ESP. It's very low. He doesn't bother to explain how low because it turns out it's IRRELEVANT.

You literally have to believe ESP is the most likely explanation among all the usual alternatives ALREADY in order to think it was ESP after ingesting the data, because the data in this test is not EVIDENCE for ESP, because P(E|sort of works ESP) and P(E|one of the assistants wore glasses and Stewart could see some reflections) or whatever your remotely plausible alternative - all look pretty identical.

It is not evidence, in the sense that you cannot do 500, 37,100, or 1 million card-guessing attempts in this setup with this level of detail and expect it to shift the belief of a rational agent. The ratio between the priors of the usual suspects is going to look the same as the ratio of posteriors between all the hypotheses after the test.

In order to convince someone of ESP rationally, you need to demonstrate that it is extremely difficult to cheat under the test conditions, in the sense that P(E|trickery) goes down.

Your alternative theory ISN'T the null hypothesis. It's a garden variety non-magical trickster, which exist in great numbers, whereas nobody has yet seen even a single instance of ESP as it is normally meant.
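To make the arithmetic concrete, here is a toy Bayes update in Python. All the numbers are invented for illustration; the point is only the structure: when P(E|ESP) and P(E|trickery) are nearly identical, the posterior odds between them equal the prior odds, and only a test that drives P(E|trickery) down can move them.

```python
# Toy Bayes update over competing hypotheses for the card-guessing data.
# All numbers are made up for illustration; only the structure matters.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over the hypothesis space."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

priors = {"ESP": 1e-10, "trickery": 1e-3, "chance": 1 - 1e-3 - 1e-10}

# In the loose setup, a garden-variety trickster predicts the observed
# hit rate about as well as genuine ESP would.
loose_test = {"ESP": 0.9, "trickery": 0.9, "chance": 1e-30}
post = posterior(priors, loose_test)
# The ESP:trickery posterior ratio equals the prior ratio -- the data
# is no evidence for ESP over trickery, however many guesses are made.
print(post["ESP"] / post["trickery"])   # 1e-07, same as the prior ratio

# A tightly controlled test is precisely one where P(E | trickery) drops.
tight_test = {"ESP": 0.9, "trickery": 1e-9, "chance": 1e-30}
post2 = posterior(priors, tight_test)
print(post2["ESP"] / post2["trickery"])  # odds now shift toward ESP
```

With the loose test, running more trials just multiplies both likelihoods by similar factors, so the ratio never budges.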


> It is not evidence, in the sense that you cannot do 500, 37,100, or 1 million card-guessing attempts in this setup with this level of detail and expect it to shift the belief of a rational agent. The ratio between the priors of the usual suspects is going to look the same as the ratio of posteriors between all the hypotheses after the test.

Exactly. When you suspect that the evidence is biased (in other words, generated by a process other than genuine new physics or supernatural activity), more iterations of the process cannot give you much more evidence. What more iterations do is reduce sampling error from random variation, but they do nothing about systematic error. The idea that you can run a biased experiment 1000 times and get a much more accurate answer than if you ran it 10 times is an example of what Jaynes calls 'the Emperor of China' fallacy, which he discusses in another chapter (I excerpt it in http://www.gwern.net/DNB%20FAQ#flaws-in-mainstream-science-a... ).
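A quick simulation makes the point (the rates are made up): repetition shrinks the error bars, but it shrinks them around the wrong answer.

```python
# Repetition shrinks sampling error but leaves systematic error
# untouched. TRUE_RATE and BIAS are hypothetical numbers.
import random

random.seed(0)
TRUE_RATE = 0.5     # what an unbiased experiment would measure
BIAS = 0.1          # systematic error, e.g. a flawed protocol

def biased_experiment(n):
    """Estimate the rate from n trials of a biased measurement process."""
    hits = sum(random.random() < TRUE_RATE + BIAS for _ in range(n))
    return hits / n

small = biased_experiment(10)
large = biased_experiment(100_000)
# The large run is far more precise -- tightly concentrated around 0.6,
# not 0.5. You have simply measured the bias very accurately.
print(small, large)
```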

That this is so surprising and novel is an interesting example of a general problem with null-hypothesis testing: when a significance test 'rejects the null', the temptation is to take it as confirming the alternative hypothesis. But this is a fallacy - when you reject the null, you just reject the null. There's an entire universe of other alternative hypotheses which may fit better or worse than the null, of which your favored theory is but one vanishingly small member.

What is necessary to show ESP specifically is to take all the criticisms and alternatives, and run different experiments which will have different results based on whether the alternative or ESP is true. (The real problem comes when it looks like the best experiments showing ESP are at least as rigorous as regular science and it's starting to become difficult to think of what exactly could be driving the positive results besides something like ESP: http://slatestarcodex.com/2014/04/28/the-control-group-is-ou... )


I understand all of this, and yes, plainly in the case of ESP cheating seems much more likely than genuine ESP. My point is that there's nothing in the overall process to prevent this sort of subversion:

* do an experiment;

* get result that doesn't comport with your beliefs;

* retroactively decide "aha, a hypothesis I hadn't considered!" and assign it a greater prior, thereby making it the thing for which you are actually generating evidence.

What I am trying to argue is that this last step is uncontrolled and highly gameable, since there's no limit to the number of possible hypotheses you could dream up (and thus you could keep fishing until you find one you like). I don't feel that all of the maxent stuff later in the book does much to help you choose priors for this sort of thing.


Oh, part of that explanation assumes you know how to use Bayes' Theorem. If you don't, this doesn't look like a simple explanation, it probably looks like the ravings of a madman because I'm assuming you know a few of the properties and common manipulations of the formula.


> His overall view of probability is satisfyingly coherent, but I do not consider myself sufficiently expert to assess whether it is meaningfully better than the alternatives.

You're probably under-confident. I have read the first two chapters, and they just felt obvious to me. While I understand we could refine probability theory, I think we can say that anything that contradicts it is probably bollocks.

> And there are places where I believe he is just wrong, e.g. he seems to reject Bell's inequality and view quantum probability as just another case of limited information.

There is a part where he's definitely right, though: the so-called "quantum probability" is not a probability at all. No matter how much it looks like a probability, it's something out there in the territory, and probability is in the mind. Besides, the idea that complex numbers (amplitudes) could be probabilities is rather ridiculous. And of course, the then-popular interpretation of quantum mechanics was crazy: it either contradicted or denied the very equations that made such good predictions in the first place!

Now he could have criticized Many Worlds as well, for it may seem like a cop-out too: we have reduced the Born statistics to an anthropic problem, but we haven't solved it yet.


Is that the full book? It seems to have only the first 3 chapters. I found the other chapters in the directory with the original submission, but they seem to overlap oddly in content.


Oh, I didn't realize that. Well, the book is

  Probability Theory: The Logic of Science
  ET Jaynes
  2003
I highly suggest buying it or finding it at your library!


The full text of a draft version is out there, actually.

I bought my copy, though. :)


I very much like the way he lays out the groundwork. It answers fairly directly a lot of criticisms I've seen leveled at probability.


That chapter was very interesting... until I got to this:

"Scientists can reach agreement quickly because we trust our experimental colleagues to have high standards of intellectual honesty and sharp perception to detect possible sources of error. And this belief is justified because, after all, hundreds of new experiments are reported every month, but only about once in a decade is an experiment reported that turns out later to have been wrong."

Er.. what?

http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fj...

"Why Most Published Research Findings Are False"

(Ok, so Ioannidis' article was written a year later than the book, but that's a pretty nasty blow to the argument.)


A second problem...

"As a simple, but numerically stronger example illustrating this, if we toss a coin 1000 times, then no matter what the result is, the specific observed sequence of heads and tails has a probability of only 2^−1000"

No... if you toss a coin a thousand times, the probability of observing the exact sequence you just observed is 1. It will ALWAYS happen (that's what probability 1 means). Yes, if you had an independent specification for the sequence (like writing it down before tossing the coin, or converting the binary representation to ASCII and discovering it spells "Kilroy was here") then the probability would indeed be 2^-1000; but that would be a different case.


It's actually not a problem. You can come up with any number of hypotheses about coins. Some of them take the form, "This coin will produce <some specific output> in the next 1000 typical flips". That hypothesis and others with similar, more complex form, like the pair of hypotheses that predict flip 1001 after the same first 1000, GAIN CREDENCE when you perform 1000 flips that conform to them. Others of the similar form lose it. Other hypotheses of wildly different construction, like, that a coin is more or less fair, lose and gain credence according to whether or not they predict the observed result.

The fact that you didn't write a hypothesis down before you did the test has very little to do with whether or not the data supports the hypothesis. Hindsight bias matters, but only as far as it corrupts your experience. The machine with the infinite library of coin-flip-hypotheses updates just fine.
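For instance, here is a minimal sketch of that hypothesis library updating on an observed sequence. The three hypotheses and the uniform priors are hypothetical; none of them had to be written down before the flips.

```python
# Updating a small 'library' of coin hypotheses on an observed sequence.
# Each hypothesis is just a value of P(heads) on any flip.
from math import prod

seq = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 8 heads in 10 flips
hypotheses = {"fair (p=0.5)": 0.5,
              "heads-biased (p=0.8)": 0.8,
              "tails-biased (p=0.2)": 0.2}
priors = {h: 1 / 3 for h in hypotheses}

def likelihood(p, seq):
    """P(this exact sequence | coin with P(heads)=p)."""
    return prod(p if flip else 1 - p for flip in seq)

joint = {h: priors[h] * likelihood(p, seq) for h, p in hypotheses.items()}
z = sum(joint.values())
post = {h: v / z for h, v in joint.items()}
# Hypotheses that predicted the data gain credence, the others lose it,
# even though every specific 10-flip sequence has a tiny probability.
print(post)
```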

On a side note, coins are not fair, in general, and Jaynes actually goes into some detail about the process of cheating at coin flips.


Obviously, Jaynes was talking about a reasonable prior probability you had before you flipped the coin. And he was talking about flipping the coin in a way that makes the head/tail outcome completely unpredictable for you (50/50 probability, minus epsilon for weird stuff like landing on the edge). He later addresses the coin flip problem more rigorously.

Besides, once you have observed the outcome, its probability is not 1. You could have misremembered the sequence, or you could have missed a flip, or otherwise done your observation wrong. The probability of making at least one such error over 1000 coin flips in an informal setting is actually quite high.
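A back-of-the-envelope check, with hypothetical per-flip error rates: the chance of at least one mistake in 1000 flips is 1 - (1 - p)^1000.

```python
# Chance of at least one recording slip over 1000 informal coin flips,
# for a few hypothetical per-flip error rates.
for p_err in (0.0005, 0.001, 0.005):
    p_any = 1 - (1 - p_err) ** 1000
    print(f"per-flip error {p_err}: P(at least one mistake) = {p_any:.2f}")
```

Even a 1-in-2000 per-flip slip rate gives roughly a 39% chance of at least one error over the whole run.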


Jaynes was a physicist.

If he were a biologist or a doctor, he would probably have written the paper you cite himself (medical research appears to be quite sensitive to crazy statistics).


"Unusual", really?


Unusual at the time. It might have surprised me, but I had been reading LessWrong for 2 years before I read this, and had internalized the notion that probability theory is basically universally applicable.

But back then, when frequentism dominated, I believe we tended to limit probability theory to reproducible experiments.



