If we were in a simulation, would the speed of light be the processing speed of the universe as each area re-renders, and spooky action at a distance be two variables pointed to the same memory location, populated with a lazy-loaded value, with copy-on-write semantics?
edit: seems like it is lazy loaded, so revised my summary.
That's not a bad analogy, but you have to be very careful here because no classical analogy can be a perfect fit for entanglement. The wave function is deeply and fundamentally different than our classical reality, and there is no way to reproduce its behavior classically. Among the fundamental differences is the fact that classical information can be copied but quantum states cannot be cloned. This is IMHO the single biggest disconnect between the wave function and classical reality because the nature of our (classical) existence is fundamentally intertwingled with copying (classical) information. It is happening right now even as you read this. Information is being copied out of my brain onto the internets and into your brain. At the same time, all our cells are busily copying the information in our DNA, and so on and so on.
A classical analogy for entanglement: suppose I have two balls in a bag. They are identical in every way, except one is red and the other is blue. I randomly grab one in each hand and show you my closed hands. Now the states of the balls are entangled: as soon as you see the color of one ball, that "determines" the color of the other. (Not claiming that this is a perfect analogy, but I don't see where it diverges from how entangled quantum waves would behave.)
> Among the fundamental differences is the fact that classical information can be copied but quantum states cannot be cloned.
The no-cloning theorem says that there exists no universal quantum machine that can perfectly clone an arbitrary quantum state. However, that does not preclude a machine that can imperfectly clone any quantum state, or machines that can perfectly clone some but not all quantum states [1]. (Clearly the information transferred to my brain is not a perfect copy of your brain's state, and your DNA is not perfectly copied every time.)
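To make the no-cloning point concrete, here's a small numerical sketch (my own illustration, not from the theorem's proof): a circuit that perfectly copies the basis states |0> and |1> is forced by linearity to fail on a superposition.

```python
import numpy as np

# Sketch: a CNOT-style "copier" maps |x>|0> -> |x>|x> for the basis
# states |0> and |1>, but linearity forces it to entangle a
# superposition with the target instead of producing two copies.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition (|0>+|1>)/sqrt(2)

copied = CNOT @ np.kron(plus, zero)        # attempt to clone |+>
true_clone = np.kron(plus, plus)           # what a real clone would look like

print(np.allclose(copied, true_clone))     # False: |+> was not cloned
```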
>They are identical in every way, except one is red and the other is blue. I randomly grab one in each hand and show you my closed hands. Now the states of the balls are entangled: as soon as you see the color of one ball, that "determines" the color of the other.
This gets used to explain entanglement but it really has absolutely nothing to do with it. This is nothing that the ancient Greeks wouldn't have known.
Not to pick on you specifically, but do people really think it took a major revolution in physics in order to understand that if there are two balls, one is blue and one is red, then if you see one of the balls is red, you can conclude the other ball is blue?
It's something that I think humans can solve at the age of 3.
The failure in your explanation is right when you state that "one of the balls is red and the other is blue". The entire point of entanglement is that such a statement is not possible, that's a strictly classical interpretation. Rather, both balls are in a superposition of being both red and blue simultaneously, and it is not possible in principle to assign a color to either one of them until the moment a measurement is made.
> This gets used to explain entanglement but it really has absolutely nothing to do with it. This is nothing that the ancient Greeks wouldn't have known.
To be fair, this usually crops up in entanglement discussions to demonstrate how it can't be used for FTL communication, not to actually explain what entanglement is.
> Rather, both balls are in a superposition of being both red and blue simultaneously, and it is not possible in principle to assign a color to either one of them until the moment a measurement is made.
I don't disagree, and (clearly) I make a measurement when I show you the color of a ball. Before I show you a ball, I would also say that the colors of the balls are in a superposition.
> major revolution in physics in order to understand that if there are two balls, one is blue and one is red, then if you see one of the balls is red, you can conclude the other ball is blue?
Entanglement is really just this simple — entanglement itself is a statement about a wave function, classical or quantum. The major revolution in physics is that transformations of the wave functions do not behave as we would classically expect. Entangled particles are a tool that we can use to measure those transformations (and get surprising results).
Entanglement is not a property of wave functions and really has nothing to do with waves. It's a logical consequence of the uncertainty principle and was ironically deduced by Einstein, Podolsky, and Rosen (the EPR paradox) as a way to argue that quantum mechanics is an incomplete description of physical reality. Since it's strictly a consequence of the uncertainty principle, it applies equally well to non-wave-function formulations of quantum mechanics, such as the matrix formulation, which does not use a wave function.
Entanglement is precisely the principle that a physical system can exist such that no part of the system can be described without describing the rest of the system as a whole. Einstein argued that this made quantum mechanics incomplete, the idea that somehow two properties of a physical system separated potentially by light years could not be decomposed into two physical systems that behaved independently of one another violated basic notions of local realism.
The issue is that as soon as you stated that one ball is red, you made a statement about some property of the physical system that is independent of the rest of the system. That is fundamentally what entanglement says you cannot do. All you can state is that there are two balls in a superposition of being red and blue; there is no way to describe one ball as red and the other as blue, because they are both red and blue simultaneously.

That is what entanglement is, and that is the new principle that was neither known to the ancient Greeks nor something a 3 year old could figure out. It is not the idea that if there are two balls, one red and one blue, then seeing the red ball tells you the other is blue. Nothing about that ever baffled any physicist.
> Entanglement is not a property about wave functions and really has nothing to do with waves. It's a logical consequence of the uncertainty principle...
I don't follow, and I can't find anything online that makes this claim. Could you explain more?
Maybe we disagree about the definition of entanglement. I'll take one from Griffiths's Introduction to Quantum Mechanics. On page 422, Griffiths writes [1]:
> An entangled state [is] a two-particle state that cannot be expressed as the product of two one-particle states....
(There is no mention of uncertainty in this section either.) Here I read "state" to mean "wave function" which implies that entanglement is a statement about a wave function, as I earlier claimed. "Cannot be expressed as a product" means not independent, just like the balls in my analogy (or electrons from neutral pion decay).
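To make "cannot be expressed as a product" concrete, here's a small numerical check (my own sketch, not from Griffiths): arrange a two-particle state's amplitudes in a matrix; the state is a product state exactly when that matrix has rank 1.

```python
import numpy as np

def is_product_state(amplitudes_2x2):
    """A two-particle state sum_ij c_ij |i>|j> is a product state
    iff the coefficient matrix c has rank 1."""
    return np.linalg.matrix_rank(amplitudes_2x2) == 1

# |up>|down>: a product state (analogous to "I already put red in my
# left hand"). Rows index particle 1, columns index particle 2.
product = np.array([[0.0, 1.0],
                    [0.0, 0.0]])

# The singlet (|up,down> - |down,up>)/sqrt(2), e.g. from pion decay.
singlet = np.array([[0.0, 1.0],
                    [-1.0, 0.0]]) / np.sqrt(2)

print(is_product_state(product))  # True
print(is_product_state(singlet))  # False: entangled
```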
When I say "see the color of one ball," I am collapsing the wave function of the balls by making an observation (in the Copenhagen interpretation). This is analogous to measuring an electron's spin. If you replace "ball" with "electron," "bag" with "decay of a neutral pion", "red/blue" with "spin up/down," and "see the color of one ball" with "measure the spin of one electron," that's a completely valid statement in QM.
While I believe that entanglement is genuinely something new and interesting, your explanation of it simply feels like a semantic difference. There is no way in which the universe you describe would be different from a classical universe, at least up to the limits of your description. I'm simply "not allowed" to say that one of the balls is red and the other is blue, before I've looked? It's just, what, against the law to say that? There must be more to it than that.
There has to be some observation that would be different in a universe with entanglement than in a universe without entanglement, and you haven't described what that difference is. There must be one out there, though -- it's just not clear to me what it is. Does it have to do with the fact that the fastest I can spread the message "I just looked at ball A and it's red!" is the speed of light, and ball B could be very very far away? But I thought entanglement doesn't actually allow FTL communication?
Isn't this distinction exactly what the article is about? By saying ahead of time, "one ball is red, the other is blue", you're describing a hidden-variables theory of entanglement. It may be unknowable (before measurement) which color the ball in your left hand is, but it has a color.
But Bell's theorem provides a very measurable counterexample to this type of explanation of entanglement. Sure, the article talks about electron spins instead of ball colors, but the analogy is that there isn't a well defined "color of the ball" before it's measured.
Of course, the analogy breaks down a bit: electron spin can be measured in multiple axes with somewhat complicated interactions.
> By saying ahead of time, "one ball is red, the other is blue", you're describing a hidden-variables theory of entanglement.
No, consider the case of neutral pion decay, which emits one spin up electron and one spin down electron. We can clearly say ahead of time one electron will be spin up, and the other will be spin down. But there is no hidden variable that determines which.
If there were a hidden variable, then knowledge of that hidden variable would let you predict which electron is spin up (which ball was red). In the macroscopic world, the hidden variable might be the state of my brain when it chose which hand to grab which ball. But if you replaced me with a robot, and that robot used the measurement of a quantum event (such as an electron's spin) to determine which ball to choose, then there is no hidden variable.
> No, consider the case of neutral pion decay, which emits one spin up electron and one spin down electron.
No, it emits two electrons with total spin zero, which is not the same thing.
> We can clearly say ahead of time one electron will be spin up, and the other will be spin down.
Let’s imagine that one was really up and the other was down. But you decide to measure instead the spins along a perpendicular axis. You would expect to find no correlation between them.
However, what you actually see is that if you measure both spins along any (common) axis they will point in opposite directions.
It doesn’t make any sense to say that before any measurement one was up and the other down. The red and blue balls analogy is very misleading and has nothing to do with entanglement.
> The red and blue balls analogy is very misleading and has nothing to do with entanglement.
This is exactly why classical analogies should not be used to describe quantum entanglement: they give the layperson the wrong impression. Those analogies make it easy for the layperson to imagine the hidden-variable hypothesis, which has been proven wrong.
OP's explanation is that entanglement is when there is a red ball and a blue ball, and when you learn which ball is red, you determine that the other ball must be blue.
My explanation is that entanglement is when there is no red ball or blue ball, there are simply two balls and the color of both balls is both red and blue simultaneously. It's not simply that one ball is red, the other is blue, but we don't know which one is which until we measure them. It's that fundamentally there is no red ball and blue ball, there are just two balls whose colors are in a superposition of red and blue.
I will try to come up with an observable difference but it's hard to do so with colors because the typical examples used for entanglement involve properties that can cancel one another out, so that two entangled particles exhibiting a superposition of two properties will, after many trials, end up forming some kind of destructive or constructive interference that would not be possible if those two particles were in a definite state.
Bell's experiment itself is readily understandable to most laypeople - and comparing the outcome to what you'd expect with e.g. hidden variables is really the easiest way to see why the red/blue explanation misses the point IMO.
Ah, but that’s the tough part: there IS a measurable difference in the behavior of the universe between these two examples! (albeit hard to prove experimentally, but it has been!)
They really are in a superposition, not just ‘not known’ until one is measured.
Just like light was proven to (truly, actually) be both a particle and a wave through the double-slit experiments. It doesn't feel right, but it is. That is where the progress is made, and that is why there is pushback on some examples: they hide the actual truth behind a misleading but easy-to-understand example that teaches people the opposite of what is really going on.
It could also be that we simply don’t understand something about light phase, and that’s causing us to get confused about superpositions. After all, the experiments aren’t on single photons, they are on beams of photons.
Not sure if we're confusing threads here - double slit experiments have been run on single photons and the results are pretty conclusive. Even a single photon is a wave that interferes with itself.
I would expect similar here. Intuition is terrible at understanding what is going on at the atomic and smaller level, or anywhere relativistic anything is happening.
Not confusing threads; the double slit experiment is often given as evidence of superposition. My attempts at replicating the experiment myself have been foiled because it inevitably goes to phase calculations on lasers, which I don't have any idea how to do. I keep looking for a way to do this famous and supposedly simple experiment, but haven't found a way yet. In any case, when I go deep on what is there (as a layman) it inevitably seems to result in phase measurements as the smoking gun proving superpositions exist at all.
I think you're having a pedantic moment. Nobody claimed that the red/blue ball example was some big unsolved mystery. It's merely to give people a taste of entanglement in a way that your average person can understand.
Isn't it true that if you entangle two particles, separate them, then measure one it'll tell you something about the other particle? That's all the example is trying to communicate.
>Isn't it true that if you entangle two particles, separate them, then measure one it'll tell you something about the other particle?
Yes that's true, but that's also true of things that aren't entangled. I assure you if I went to Socrates, showed him a red ball and a blue ball, put them in a bag, and took out a ball at random that happened to be red, Socrates would have no problem realizing that the other ball must be blue. I am sure if I went to my 4 year old daughter, she'd figure it out as well because nothing about quantum mechanics or entanglement would be needed to understand this.
What entanglement tells us is that if two balls had their colors entangled, then both balls are both red and blue at the same time and it's simply not possible to reason about one ball being blue and one ball being red while they are entangled. They are in a superposition of both colors and remain so until a measurement is performed.
Once the measurement is performed, they are no longer entangled and only at that point can you call one ball red and the other blue.
I think what OP means is that the spookiness comes from the fact that one particle can be separated by a huge distance from another, with both particles in a superposition of states, and observing one particle can affect the state of the other.

It is not about a state that you do not know, but a state that is not yet there.

When one particle's state collapses from a superposition of states to a single state, it also, under the assumptions of quantum theory, affects the state of the particle that is physically separated from it. That is the spookiness.

If we assume that quantum particles are always in a superposition of states, the question is how observing one particle can affect another particle at a distance.

If you take out the indeterminate-state assumption, then it is indeed missing the point of 'spooky action at a distance'.
Yes but I think what you both are missing is that this example is meant for laypeople. Nobody has ever claimed that this is literally entanglement and I can say from first-hand experience that it's useful to bridge the gap to actually understanding entanglement.
Well, I am a layman in general, and particularly in physics. I get what you are saying, but the analogy loses the point of what makes entanglement nonsensical and spooky for anybody, layman or not.

As I said before, if the analogy were about balls that have no color, where seeing one ball gives it a color and the other ball that was once in contact with it magically becomes colored too, it would be fine. I am ranting, and I am sure educators can come up with a better analogy.

The point is, the spookiness is important for understanding why this is a big deal at all, and some people think that should not get lost in translation.
> I make a measurement when I show you the color of a ball
You “make a measurement” well before that, when you say that you have a red ball and a blue ball.
The point of entanglement is that until you make a measurement they don’t have a color. You could measure something else than color and you would also find a correlation.
But if they have a defined color, the entanglement is broken. Sure, one is red and the other is blue. But if you measure anything else (a non-commuting observable, that is) there will be no correlation.
And not knowing which one is what (already defined) color is not a superposition. It’s just a mixture.
So true. I have a similar beef with the popular explanation of the uncertainty principle: "well, you see, the light hits the particle very hard, so we know where it was, but we don't know where it's gone now". Urgh.
As someone who doesn't know any better than the explanation you have a beef with, I would love if you could explain in layman's terms why it's wrong and what a more accurate understanding is. I always thought that analogy was exactly how it worked, but it seems I have been misled unawares.
It presupposes that position and momentum have definite states that are just uncertain to us, while in most interpretations of quantum mechanics (e.g. Copenhagen and many worlds) the particle exists as a wavefunction, lacking a specific position or momentum, but instead existing as a probability density function in this space.
The uncertainty principle here then relates to how much this probability density function 'peaks' in position space or momentum space. A higher peak in one space results in a wider spread in the other. This is because position and momentum are Fourier transforms of each other.
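A quick numerical illustration of that tradeoff (a sketch in Python, assuming natural units with hbar = 1): build a Gaussian wave function in position space, Fourier transform it, and watch the momentum spread grow as the position spread shrinks.

```python
import numpy as np

# Sketch: narrower position-space Gaussian -> wider momentum-space
# spread, since the two representations are Fourier transforms of
# each other (natural units, hbar = 1 assumed).
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

for sigma_x in (0.5, 2.0):
    psi = np.exp(-x**2 / (4 * sigma_x**2))           # position wave function
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

    phi = np.fft.fftshift(np.fft.fft(psi))           # momentum wave function
    p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    dp = p[1] - p[0]
    prob_p = np.abs(phi)**2
    prob_p /= np.sum(prob_p) * dp                    # normalize

    sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)
    print(f"sigma_x = {sigma_x}: sigma_p = {sigma_p:.3f}")  # ~1/(2*sigma_x)
```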
So is it true that there are multiple properties of a particle---such as location, position, and maybe its spin---that are all described as wave functions, and therefore they can all be entangled? Can anything that is described by a wave function be entangled?
Since position and momentum (assuming that's what you meant, since you said location and position which are synonyms) have this sort of dual relationship, I don't think it makes sense to talk about entanglement with respect to them - they intrinsically have to be related to each other, and the position state (i.e., function) a particle is in fully determines its momentum state.
But it is possible to imagine usually unrelated properties of a particle being entangled, e.g. a two-peaked position function, spin up if it's over here and spin down if it's over there. So that's possible. Usually when discussing entanglement, though, we're talking about 'distinct'* particles. Electron A's spin entangled with electron B's spin. Not that it has to be spin, of course. But that's a common case because of how naturally this sort of entanglement occurs, for example, in atoms where electrons have to form spin pairs.
* This is complicated by QFT where particles are not exactly distinct, but exist as excitations in a particle field. E.g. there aren't two electrons but the electron field is excited by two quanta. At least, that's my understanding; I never went to grad school for physics, so I'm limited to undergraduate knowledge and some extracurricular reading.
> Since position and momentum (assuming that's what you meant, since you said location and position which are synonyms)
Yep, sorry, artifact of the editing process.
> they intrinsically have to be related to each other, and the position state (i.e., function) a particle is in fully determines its momentum state.
Sure, but the momentum doesn't determine the position (due to the constant of integration) so you can have two particles with the same momentum functions and different locations, and that leads to my next question...
> Usually when discussing entanglement, though, we're talking about 'distinct'* particles.
That's what I actually meant to ask but didn't phrase clearly: since position and momentum are described by wave functions, can you entangle the positions of two particles? or entangle their momentum?
> Sure, but the momentum doesn't determine the position (due to the constant of integration) so you can have two particles with the same momentum functions and different locations, and that leads to my next question...
There's no constant of integration since the integral will be over all of space (or momentum space).
> That's what I actually meant to ask but didn't phrase clearly: since position and momentum are described by wave functions, can you entangle the positions of two particles? or entangle their momentum?
No, I'm sorry, I'm not going to pull out heaps of regurgitated quantum information to back this up, but that's straight-up wrong.
The red ball and the blue ball exist as physical objects; it is us, the observers, who are unaware of whether they are red or blue at either position. There's no superposition here. They are red, or blue, assigned randomly. Not both, not none. These are facts - properties - about the balls that are real, that exist, but we simply don't have that information at that point. It is meaningless that there is no observer that can 'see through' our hands to know which is correct.
Sorry, this is just wrong. Bell‘s inequality and the very related Bell-Kochen-Specker theorem [1] state that local hidden variables (one ball is blue, one is red, but we just don’t know it) are not consistent with QM.
The problem with your classical analogy for entanglement is that it doesn't match the data. Or rather, it only matches the data for quantum properties that are similarly blue or red.
The non-classical properties of entanglement start appearing once you start measuring combinations of the redness and blueness of those balls.
Let's say that instead of looking at the balls, you pass them through some machine that will let a red ball pass through with some probability P that you control; if the ball is blue, the machine will let it pass with probability 1-P. Let's say further that you have three such machines. You set the first machine to P=1. You pass each ball falling from this machine through a second machine, which has P = 0. You will never see a ball pass through to the end - if it were red, it would pass the first machine, but not the second; if it were blue, it would not pass the first machine at all.
But, let's say you now put a third machine between the other two, and you set P = 0.5. With classical balls, nothing changes - a blue ball doesn't make it past the first machine, while a red ball goes through the first, may or may not pass the second, and never makes it through the third regardless.
However, a quantum ball actually has a chance to pass through the 3 machines if you set it up this way. In fact, a quarter of the red balls will pass all three machines once you add the middle filter machine, where before none did.
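Here's a toy numerical version of the quantum case (my own sketch, modeling each machine as a polarizer-like filter; the mapping of "red" to the state [1, 0] and pass-probability P to a filter angle arccos(sqrt(P)) is an assumption):

```python
import numpy as np

# Sketch: "red" is the state [1, 0]; a machine with pass-probability P
# for red is a filter axis at angle arccos(sqrt(P)). Passing a filter
# projects the ball onto the filter axis (this is the "painting").
def apply_filter(state, angle):
    axis = np.array([np.cos(angle), np.sin(angle)])
    pass_prob = (axis @ state) ** 2
    return pass_prob, axis

state = np.array([1.0, 0.0])                 # a red ball
angles = [0.0, np.pi / 4, np.pi / 2]         # machines with P = 1, 0.5, 0

total = 1.0
for theta in angles:
    p, state = apply_filter(state, theta)
    total *= p

print(total)  # 0.25: a quarter of the red balls now pass all three
```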
Still, this is easy to explain if we assume that the middle machine actually paints the ball instead of just detecting its color. This is where the entanglement experiment comes in: if you pass the pair of balls through the three machines, with ball 1 passing through machines P=1 and P=0.5, and ball 2 passing through P=1, you will find that sometimes both balls make it through, even though both balls can't be red at the same time, and they can't communicate about passing through the P=0.5 machine (you can repeat the experiment with the balls being taken arbitrarily far away before passing through the filters).
This is a great thought experiment, thank you. I'm not totally clear how the machines could work without actually taking a measurement, though. It sounds like you're saying the 2nd machine (P = 0.5) takes measurements (and therefore "paints" the balls), but the other two don't?
I've heard of the apocryphal "half-silvered mirror", but I don't get why reflection isn't an observation/interaction there either.
> I don't get why reflection isn't an observation/interaction there either.
I know this comment is going to get lost in the noise, but that is a really excellent point, one of the best that has been raised here so far. This is a point that is often glossed over, but it is actually really important, and quite challenging to explain without getting deep into the weeds. The answer is that passing through a half-silvered mirror is an observation/interaction, but it is special because it can be practically reversed by using additional mirrors so that you can get back to a state where you can no longer tell what the outcome of the "measurement" was. All measurements are reversible in principle, but some are irreversible in practice because the number of things you'd have to reverse is just too large. And in particular, by the time a measurement has affected the state of any macroscopic system (like a ball) it is absolutely impossible to reverse in practice, though not in principle. This process of becoming irreversible-in-practice is called "decoherence".
I agree with this. But it's worth pointing out that lots of good physicists don't. It's a statement of the Everett/relative state/many worlds interpretation, which is simply too weird for many people to accept. That's why there are about a dozen other interpretations of quantum measurement theory, which are all weird in other ways that I can't accept.
The weird part is precisely this: "you can get back to a state where you can no longer tell what the outcome of the 'measurement' was." In other words, at lunchtime you believed that a horizontally polarised photon hit your nose at 10am in the morning, and you were right. Now it's dinner time, you don't believe that, and you would be wrong if you did. If the Everett interpretation doesn't pose a massive challenge to your ideas about reality and human identity, you haven't understood it. There are physics professors who picture photons choosing which way to go at a beam splitter, then transmitting the news backwards in time, because that seems more plausible to them.
Of course, interpretations are not science. Everyone agrees how an experiment would go: any attempt to reverse the interaction of the photon with your nose and brain would fail, because thermodynamics. From a purely scientific viewpoint, it simply doesn't matter how many other yous are superposed in parallel universes, because their existence or lack of it has no consequences that (any of?) you can observe. But scientists are as fascinated by this as everyone else is.
> It's a statement of the Everett/relative state/many worlds interpretation
No, it isn't. I've said nothing about many-worlds, only reversibility. And on that point everyone agrees.
> Now it's dinner time, you don't believe that
You really need to read the link above. It goes into all that in great detail. But the TL;DR here is that if it's dinner time, you haven't actually reversed the measurement, notwithstanding your current mental state with respect to the photon.
I don't like claims to authority, but maybe you should read the papers I've published about Bell inequalities too? :-p
It sounds like you're saying that, in principle, physical processes are all reversible. (Although that is often thermodynamically impossible in practice.) You're also saying that it's impossible in principle for someone to learn the result of a measurement in the morning, then unlearn it when the measurement is reversed during the afternoon. I don't see how there could be a self-consistent interpretation of quantum measurement where both those things are true.
How am I supposed to do that? You haven't provided any references and your profile is empty.
> in principle, physical processes are all reversible
Correct. This is a straightforward mathematical property of the Schroedinger equation.
> it's impossible in principle for someone to learn the result of a measurement in the morning, then unlearn it when the measurement is reversed during the afternoon
That's right. But that's not because it's impossible to reverse the measurement. It's because when you reverse a measurement you don't just "unlearn" the result.
That's right. That's why when this experiment is actually done, the mirror is typically rigidly mounted to an optical bench, which is sitting on the surface of a planet. If the mirror were freely floating in zero G, the outcome would be different. It is a worthwhile exercise to calculate how small the mass of the mirror would have to be before you would actually notice a difference in the results.
> I'm not totally clear how the machines could work without actually taking a measurement, though.
Here you're hitting on the heart of the Measurement Problem. In QM as it is understood today, unlike classical mechanics, there are two fundamentally different kinds of interactions between objects: quantum interactions and measurement. Quantum interactions are linear changes to the wave function, while measurements perform a non-linear update to the wave function (it becomes one for the measured value and 0 everywhere else).
Unfortunately, we do not have any theory so far that explains the difference between a quantum interaction and a measurement. The experiment I described works if the 'machines' interact quantum-mechanically with the 'balls', but it does not reproduce if the machines measure the state of the balls.
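To make the two kinds of update concrete, here's a single-qubit toy sketch (my own illustration; the Hadamard gate stands in for a generic quantum interaction):

```python
import numpy as np

# Quantum interaction: a linear (unitary) update to the wave function.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = H @ np.array([1.0, 0.0])               # |0> -> (|0> + |1>)/sqrt(2)

# Measurement: a non-linear update. Project onto the observed outcome
# and renormalize; the amplitude becomes 1 there and 0 elsewhere.
outcome = 0                                    # suppose we measured |0>
projected = np.zeros(2)
projected[outcome] = state[outcome]
state = projected / np.linalg.norm(projected)  # state is now exactly |0>
print(state)                                   # [1. 0.]
```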
I will note that in the Many Worlds Interpretation the measurement problem is somewhat different: it states that the state of the universe is always described by a wave function, but that parts of the wave which are sufficiently separated can no longer perceive each other somehow, usually called branching. Precisely when, why, or how this happens is just as unknown, though decoherence seems to play a role.
Valid solutions to the Schrodinger equation give you the wave function amplitudes in multiple places; the particles in these places can interact with each other still, even if they are 'the same particle'.
However, the wave function at different places interacts with the environment and starts to shift in phase, eventually becoming unable to interfere with itself. This is called decoherence, and it is a valid explanation of why and how we can't observe wave-like behaviors at large scales or in hot systems.
On the other hand, we can only postulate, based on observations, that when a particle interacts with a measurement device, the measurement device will show a single value with a probability determined by the amplitude of the particle's wave function at that point. We can postulate that the wave function collapses, or we can postulate that the device branches out into different devices in different worlds (enough such devices and worlds to achieve the probability distribution through observer selection somehow), or many other ways of formulating the Born rule. But whichever way you put it, this rule must be added to your system to predict experimental results; it does not derive from the Schrodinger equation.
>Valid solutions to the Schrodinger equation give you the wave function amplitudes in multiple places; the particles in these places can interact with each other still, even if they are 'the same particle'.
I suppose it's destructive interference. It's qualitatively interesting, but its observation is complicated by orthogonal states: when you multiply orthogonal states you get zero. Even if you can thoroughly dismantle the state to observe it, you can still do so only at the microscale; you'll then have a problem lifting it to the macroscale while evading destructive interference, since orthogonal states are all over the place. Anyway, the Schrodinger equation describes the behavior of quantum states with mathematical precision, and the math is quite conclusive that a linear equation behaves in a linear way. When intuition doesn't get you far, you can resort to math; that's why math is seen as an indispensable part of science: intuition isn't guaranteed to work, which is exactly the case here.
>it does not derive from the Schrodinger equation
MWI derives it from the Schrodinger equation. Observation is experience of the observer and can be calculated. Unless you assume that the observer is supernatural and is thus unknowable.
> MWI derives it from the Schrodinger equation. Observation is experience of the observer and can be calculated. Unless you assume that the observer is supernatural and is thus unknowable.
This posits the notion of an observer that only observes one outcome, whereas the SE predicts that an observer will observe several different outcomes with different amplitudes. The MWI is postulating that we should only look at each outcome separately.
Furthermore, it is not possible to derive the actual probability value from the wave function amplitude without some additional postulate equivalent to the Born rule, for example that the number of observers that observe one outcome is proportional to the wave function amplitude of that outcome.
The result of calculating the state of the observer is linear evolution: the state of the observer splits and entangles with the observed state, and each part observes the respective outcome. Ironically, Copenhagen gave the same result for the Schrodinger's cat experiment: even before measurement it's known which states are in superposition, and those states are "dead" and "alive", known without measurement too.
>that the number of observers that observe one outcome is proportional to the wave function amplitude of that outcome
If you mean the number: the norm of each observer state that observes the respective outcome can be calculated. The statistics over the outcomes can be calculated too.
Bell's inequality (as you allude to) describes how transformations on quantum wave functions cannot behave classically. But classical wave functions can certainly be entangled as entanglement is a property of a wave function, not transformations on wave functions.
I'm not sure what you mean by classical wave functions - I've only seen the term 'wave function' used for quantum mechanics. Are you referring to classical wave equations? I'm not sure how the concept of entanglement is supposed to apply to classical waves though.
No, you can't, at least not one that behaves like the quantum wave function does. Classical probabilities are real numbers between zero and one. The wave function takes on complex values, which allows you to add two wave-function values with non-zero magnitude and get zero, i.e. produce destructive interference. Classical probabilities can't do that.
So basically a box with two colored balls could be described by a "classical wave function" (a real-valued wave function) where the values of the two balls are entangled, and then yes, your experiment would exactly describe what happens.
But this sheds no light on quantum wave functions and quantum entanglement.
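A two-line numerical way to see that difference (my own sketch): probabilities for two indistinguishable paths only ever add, while complex amplitudes can cancel.

```python
import numpy as np

# Two paths to the same outcome, with equal weight.
p1, p2 = 0.5, 0.5                     # classical probabilities: reals in [0, 1]
a1, a2 = 1/np.sqrt(2), -1/np.sqrt(2)  # quantum amplitudes: complex, signed

print(p1 + p2)                # 1.0 -- probabilities can only accumulate
print(abs(a1 + a2) ** 2)      # 0.0 -- amplitudes can destructively interfere
```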
The balls in a bag experiment is exactly the kind that does use hidden variables that are local. No information has to be transmitted in either direction.
Bell showed that the correlation is even greater than you can get using that sort of thinking. Reality is more like this:
Bob and Alice each get a pair of bags, one black and one white. Each of them opens one of their bags and finds either a red ball or a blue ball.
If they both choose the white bags, their balls are different colors. However if either or both of them choose the black bag, the colors of the balls are the same.
If you think about it, there's no way to put the balls in the bags to satisfy these conditions in all cases. This is a simplification of what's going on with Bell's theorem.
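You can verify the "no way to put the balls in the bags" claim by brute force; here's a sketch enumerating every classical pre-assignment:

```python
from itertools import product

# Brute-force check of the bag puzzle above: try every way of
# pre-assigning ball colors to Alice's and Bob's white and black bags,
# and test the required outcomes for all four bag choices.
colors = ("red", "blue")
solutions = []
for a_white, a_black, b_white, b_black in product(colors, repeat=4):
    ok = (a_white != b_white       # both white: different colors
          and a_black == b_white   # Alice black, Bob white: same
          and a_white == b_black   # Alice white, Bob black: same
          and a_black == b_black)  # both black: same
    if ok:
        solutions.append((a_white, a_black, b_white, b_black))

print(solutions)  # [] -- no classical assignment satisfies all four cases
```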
Thanks for the example. Could you point me to a resource that explains why reality is like that? If that's an implication of a formula of quantum theory (which the article also mentioned briefly), I would like to learn about it and be able to derive this implication myself.
The analogy you mentioned is exactly the wrong one: it suggests that it's just a matter of a hidden variable.

A proper (but less elegant) one would be: you have two balls, each with a color and a pattern.
You take one out. If you check the color first, you will find the other’s color the same, but the pattern sometimes different. If you check the pattern first, you will find the pattern the same, but the color sometimes different.
> it suggests that it’s just a matter of a hidden variable.
I disagree. Suppose that I create a machine that chooses which ball to place in each box. This machine makes the choice based on some measurement of a quantum particle (an electron's spin). Then the colors of the balls are entangled with the state of the quantum particle, which cannot be described by some local hidden variable.
Only if you can completely isolate the balls so their states don't decohere. That is not practically possible to achieve, particularly since in your scenario you reach into the bag and touch the balls. As soon as you interact with the balls in any way, you become entangled with them and the behavior of the system becomes classical.
No. The only way you can actually observe entanglement is in an isolated entangled system (this is the reason quantum computers are hard to build). It is true that at a philosophical level there is no difference, but from the point of view of physics, which is to say, what is observable, isolation is crucial. Non-isolated systems behave classically, notwithstanding that they are actually quantum systems.
Would you claim that when Einstein developed his theories of relativity, they were invalid (from the point of view of physics) because their consequences were not yet observable? For example, Einstein used thought experiments to develop special relativity in 1905, but since kinematic time dilation was only experimentally confirmed in 1971, his work was not a contribution to physics until then?
The difference is that the limits on observing relativistic effects in 1905 were technological, whereas in QM you cannot observe the effects of entanglement in a non-isolated system even in principle. This is a fundamental constraint imposed by the theory itself. You can't get around it even with arbitrarily advanced technology.
That too could be a valid analogy if "randomly grab one in each hand" isn't actually random, but only appears random. This would be analogous to superdeterminism.
A more faithful analogy: I have two boxes with a small hole and ball inside.
If I look through the holes, I will see that one ball is blue and the other is red.
If I touch them through the holes, I will feel that one ball is hot and the other is cold.
However, I cannot look and touch at the same time. And once I check the color or the temperature of one of the balls (so I know it for both) the link is broken.
If I look first, I’ll see that one is red and the other is blue. But if I now touch them, each one will be hot or cold with 50/50 probability. Finding the temperature of one doesn’t tell me anything about the temperature of the other. And once I touch a ball I don’t know its color anymore: if I look at it again, it could be red or blue with 50/50 probability.
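For anyone who wants to poke at this, here's a rough single-qubit sketch of the look/touch analogy (the mapping of "color" to a Z-basis measurement and "temperature" to an X-basis measurement is my own assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, basis):
    """Projective measurement; returns (outcome index, collapsed state)."""
    probs = np.abs(basis @ state) ** 2
    k = rng.choice(2, p=probs / probs.sum())
    return k, basis[k]

Z = np.array([[1, 0], [0, 1]], dtype=float)                # "color" basis
X = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)  # "temperature" basis

state = np.array([1.0, 0.0])       # start with a definite "color"
c, state = measure(state, Z)       # look: color is definite
t, state = measure(state, X)       # touch: 50/50 hot or cold
c2, state = measure(state, Z)      # look again: color is 50/50 again

print("color:", ("red", "blue")[c], "-> temperature:", ("hot", "cold")[t],
      "-> color again:", ("red", "blue")[c2])
```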
Testing Bell's theorem, described in this article, has produced experimental evidence that all classical analogies in the style of "some state was embedded in each particle at the moment of entanglement, and the measurement just revealed something about what was in that single particle locally at that time" cannot be true.

Bell's theorem gives the highest possible upper bound on correlations for spin measurements along different axes if things were as you say. But it turns out that in practice they are more correlated than that bound would allow; ergo, the analogy (which, in general, is plausible and reasonable) is not compatible with the physical reality we live in.
The classical analogy doesn't model superposition, which is what violates Bell's inequalities, but it illustrates the correlation aspect of entanglement well.
As others here have already noted, this analogy is wrong. But I think people here haven't given a convincing example of a system that behaves differently due to entanglement than it would if the behaviour were simply conditionally random (the ball example behaves identically if the balls are entangled or otherwise just classically random).
The issue in finding a good example is that the effect of entanglement is rather subtle and hard to interpret intuitively.
Here is an example that may make it more obvious:
There exists a game that can be played cooperatively between two players that share two random bits. It is possible to win this game only 75% of the time if the bits are not entangled. If the bits are entangled there is a strategy for winning the game about 85% of the time.
The details of the game and a good explanation can be found here:
https://www.scottaaronson.com/blog/?p=2464
Basically, there is a game that involves sharing two bits. If they are entangled, it can be won about 85% of the time. If they are not entangled but otherwise random (like the red and blue ball example), it can be won only 75% of the time.
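Here's a small brute-force sketch of the classical side of that game (what's described matches the CHSH game; calling it that is my inference, and the quantum value is just quoted, not simulated):

```python
import math
from itertools import product

# A referee sends random bits x to Alice and y to Bob; they answer
# bits a and b without communicating, and win iff a XOR b == x AND y.
# Enumerating every deterministic classical strategy shows the 75% ceiling.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    best = max(best, wins / 4)

print(best)                       # 0.75: no classical strategy does better
print(math.cos(math.pi / 8)**2)   # ~0.854: achievable with entangled qubits
```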
If you wanted to make this quantum, one is red and the other is blue only if you're looking for a red ball when you stick your hand in the bag. If you're looking for a purple ball when you stick your hand in the bag, then one is purple and the other is green.
Agreed. The confusing issue in explanations of entanglement is applying a statistical measurement to a model of an individual event. Entanglement is a state inferred from the observations of many events. Before there is a red ball and a blue ball, there is a precursor: a purple ball. That purple ball splits, and two balls wrapped in paper are created. They can travel great distances over long times to separate locations, A and B. When some humans start to unwrap them, they are amazed at the correlation: one blue is always matched by one red. So until the balls are unwrapped, how are the humans to describe them? They use the word superposition. Unfortunately, many interpret that as a SIMULTANEOUS existence of both states in each ball, whereas in reality there are many events: the humans at location A see both colors, as do the humans at location B. That does not mean that individual particles assume both states. A similar confounding statistic was taught in grade school: the average family has 2.5 children, yet half a live child was never born.
But aren’t these “informations” just representations of something abstract reflected in a bunch of quantum states of your neurons? And we humans decide there are homomorphisms between mine and yours, and thus that they represent the “same information”. But really they are fundamentally different. There is no copying, only some kind of lossy, mimicking compression.
At that point you would need to decide what ‘copying’ is, exactly. Making a terrible VHS recording of a TV show would still be considered copying by most, even if none of the relative pixels ever matched.
The difference is that we can agree upon a set of measurements and a procedure for comparison (eg using difference of Gaussians) to determine how much of a “copy” the recording is of the original. We can repeatedly conduct this experiment (copy->measure->compare) and with high confidence we’d obtain a numerical value that can act as a “proof” that it is a “copy”.

My argument is that not only do we not know a way to conduct this category of copy->measure->compare experiments on human subjects, but that even with advancements in BCI, etc., it is perhaps impossible to conduct such experiments, due to some aspect of consciousness that we do not yet understand concerning “information”.

I used the word “mimicking” earlier because, apparently, where “information” and humans are concerned, a (somewhat) “conscious” act has to be performed for the whole phenomenology to be interpreted as “copying”. We are encoding “information” in an extremely non-“traditional” way compared to how information is studied and made sense of in computer science.
Similarly, the notion of “semantics” opens up two categorically different paths of inquiry in programming language theory vs linguistics. There is something mysterious and trippy about what “meaning” and “information” really are (eg in regards to qualia).
There are no propositions that "we are in a simulation" would imply (unless someone fundamentally lacks imagination).

Being "in a simulation" doesn't imply that we're in a simulation created by later humans, it doesn't give any indication of how fine-grained the approximations are, etc. etc.

"We're in a simulation" fundamentally discards Occam's Razor, in the fashion of the belief in a God controlling everything. And thus this belief has the same weight as belief in the Flying Spaghetti Monster [1].
> "We're in a simulation" fundamentally discard Occam's Razor in the fashion of the belief in God as controlling everything. And thus this belief has the same weight as belief in the Flying Spaghetti Monster [1].
You are using Occam's Razor incorrectly. A preference for parsimony in problem solving is not identical with parsimony being the only state of the world.
As a side note, which directly applies to your comment, Occam's Razor was invented by Friar William of Ockham as a defense of divine miracles.
> You are using Occam's Razor incorrectly. A preference for parsimony is not identical with parsimony being the only state of the world.
"Everything is really under control of invisible stuff" make it impossible to use parisomy under any circumstances. It fundamentally discards Occam's Razor.
> Occam's Razor was invented by Friar William of Ockham as a defense of divine miracles.
While I wouldn't personally accept a God that acts in the world, the argument is about having some sort of evidence-based interpretation of the world. The Flying Spaghetti Monster is a response to arguments like "God makes the rain fall", etc.: not to a God that appears in the world, but to a God that can essentially be invoked for anything and in any fashion.
> "Everything is really under control of invisible stuff" make it impossible to use parisomy under any circumstances. It fundamentally discards Occam's Razor.
You are fundamentally misunderstanding Occam's Razor. It is not a law - Occam's Razor is a preference for how to view the world, not a law that was violated. [1]
There are alternate rules-of-thumb, such as one by Ockham's contemporary, Walter Chatton. Chatton created Chatton's anti-razor in opposition to Ockham's Razor: "Consider an affirmative proposition, which, when it is verified, is verified only for things; if three things do not suffice for verifying it, one has to posit a fourth, and so on in turn [for four things, or five, etc.]. (Reportatio I, 10–48, paragraph 57, p. 237)" [2]
> You are fundamentally misunderstanding Occam's Razor. It is not a law - Occam's Razor is a preference for how to view the world, not a law that was violated.
Yes, Occam's Razor isn't a law but a method of understanding reality. My point is that if you throw out Occam's Razor entirely, not just in one situation or another, you're left with nothing to understand the world with.
The "God wants it that ways" and "because it's simulation" can be substituted for any proposition at all under any circumstances and there's not counter argument to such substitutions. This approach is also "the paranoid worldview" - "because they want to think that" also has this "insert everywhere" quality.
And your link describing the original ideas of William of Ockham doesn't say what you'd imagine. "Occam's Razor" is a broad approach that has evolved over time and just takes that label for convenience. Virtually no one is invoking the authority of William of Ockham or claiming to follow his Nominalism or whatever. It generally means that adding unneeded hypotheses should be avoided. If you can never follow that guide, you're in trouble.
The reason "magicians" succeed is because Occam's razor is a heuristic that we rely on intuitively. When the real explanation is super-complex, like "this magician spent 8 hours a day for months learning to hold a hidden card in an invisible way, followed by a year of engineering an under-the-stage lift hidden by mirrors, the reality isn't "disproved" by Occam's razor.
Same thing if someone engineers a super complicated method to murder someone while appearing to physically be at a different place. It isn't the simplest explanation, but it can still be correct!
The reason "magicians" succeed is because Occam's razor is a heuristic that we rely on intuitively. When the real explanation is super-complex, like "this magician spent 8 hours a day for months learning to hold a hidden card in an invisible way, followed by a year of engineering an under-the-stage lift hidden by mirrors, the reality isn't "disproved" by Occam's razor.
Sure. But if the entirety of reality is created by magicians, then one's concept of reality collapses. The "simulation" view point is indeed pretty much the idea that magicians control everything.
If you read the argument above, my point isn't that Occam's Razor is always correct but if you posit a world where it is generally/always incorrect, your ability to coherently understand reality collapses.
None of my arguments here have been classical syllogisms, so none of my arguments can be "logical fallacies". You're not responding to the meaning of my informal statements, in a rather transparently bad-faith manner.
Please stop trying to win a debate. You haven't actually addressed what I or anyone else wrote. If you want people to discuss with you, understand they expect you to do the same.
A logical fallacy is an error in reasoning that is based on poor or faulty logic.
Lol, it seems clear you are the one setting up a debate and attempting to win it. I actually am working on the implications of the "simulation" argument. All of my positions are based on "plausible reasoning", with similarities to formal logic being only incidental.
"we're in a simulation" is at least something that might be ultimately testable with the right theory and experiment. FSM/God isn't w/o them choosing to 'reveal' themselves to everyone.
It's kind of interesting how people who would never consider a creationist explanation seem quite willing to embrace the idea that we're in a simulation.
Well, in my case, I don't believe our universe is a simulation, but I'm open to discussing the idea for fun and it does seem like a possibility. Whereas, most people that believe in creationism, believe it 100% to be the case and if you don't believe the same you are going to hell. I grew up in an evangelical Christian community and you can't really compare the two groups. Evangelicals are ready to die for this belief.
This is mostly the YECs (Young Earth Creationist - "the earth is 6000 years old" camp). There are other flavors like ID (Intelligent Design) that tend to hold things a good bit looser - and there are many different flavors of ID as well. But yeah, the YEC folks are completely "it's our way or the hellway!" and the Evangelicals have pretty much doubled down on YEC - that wasn't always the case, there used to be a lot of Evangelicals that were theistic evolutionists and had no problem with a 4.5B year old earth.
EDIT: maybe we need another word in this context besides 'creationist' since it has a lot of baggage in the culture at this point. What else to call someone who hypothesizes that there is some kind of intelligence behind the universe? The simulationists seem to fit into that category as do the various flavors of 'creationist', 'intelligent design', 'theistic evolutionist' and probably even Hindus, etc.
The term "deist" fits some of those items, although.
Interestingly, I think some of the distinction as to why this idea is more palatable is that it doesn't require "supernatural" or "magic" deities. The "creator" could be just like us. We already have evidence that creating virtual worlds is possible--we do it ourselves with games, so I think it takes a lot less faith, as we have a limited proof of principle already.
Also, most magical creationism is totally untestable. You can make some predictions about a simulation, though. If simulations are subject to constraints, which is likely, you should be able to ascertain, in the design of the universe, that items with the biggest O might be subject to performance optimizations. If you find lazy loading, caching, or other performance optimizations at the smallest scale (biggest O), which is what this might predict, you at least have some hints.
One is an assertion with no logic to justify it, the other is an assertion with a somewhat persuasive argument justifying it [1]. They are simply incomparable.
Of the 3 assertions in the abstract, the obviously false one is #2: "Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history". When you realize that running a simulation of the universe requires more processing power than is available in the universe, this is very obviously false.
I respect people who believe in a bearded White omnipotent homophobic God who lives in a sky palace more than I respect people who believe in this insane drivel about the probability of living in a simulation. At least the former were indoctrinated as their brain was forming.
Isn't it possible that our universe is really just an approximation meant to look as detailed as possible? You don't need a universe of processing power to simulate a universe. You just need to make it look believable enough that it fools whoever is in your simulation.
I agree with you, and even if it's not an approximation, it doesn't matter; we can't make assumptions about the size of a parent reality (and its limits on processing power) relative to our own.
The simulation hypothesis seems as theistic as the creationist hypothesis. Maybe the main difference being that with the simulation there would likely have been many creators (programmers) whereas the creationists would say there is one (although there are polytheistic creation narratives, so maybe not so different). Other than that, they both seem to fall into the theistic category since a higher intelligence is posited who created (the simulation | the real world).
> Of the 3 assertions in the abstract, the obviously false one is #2: "Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history". When you realize that running a simulation of the universe requires more processing power than is available in the universe, this is very obviously false.
I think you've expressed a number of confusions.
First, I think you contradicted yourself. The line you quote says that posthuman civilizations are unlikely to create simulations, but you say this is false because a universe simulation requires more power than available in the universe. So you're agreeing with the outcome while saying you're disagreeing.
Second, I suggest reading the paper fully, because Bostrom explains that we don't need full universe simulations, only consciousness simulations (kind of like the Matrix). The very premise of a post-human civilization is that its knowledge is sufficiently advanced to include algorithms for simulating human minds.
Much like how video games only render the part of the world that is visible to the players, a consciousness simulation only needs to simulate minds and their perceptions of a macroscopic, classical world; it does not have to simulate a full quantum universe. Our brains are great at filling in information that we expect to be there, so even the parts that we directly perceive don't need to be simulated with complete fidelity.
Frankly, I don't think you've given the argument sufficient thought, but by a happy accident you picked exactly the outcome that I think is most likely, and I elaborate on why here:
> Our brains are great at filling in information that we expect to be there, so even the parts that we directly perceive don't need to be simulated with complete fidelity.
Right, but the simulation is nonetheless bottlenecked by whichever system requires the greatest fidelity, and the more technology advances, the more of a problem that becomes. For instance, medieval times would likely be far easier to simulate than modern times, because the latter requires simulating our entire computer infrastructure.
And I think that infrastructure is harder to simulate than you'd think: I could use a solver on a big NP problem (we'll assume that P != NP), get a solution after an hour, and in theory, with some practice, I could probably check if the answer is correct in my head. So the simulator can't simply give me what I "expect". It has to actually compute the thing, and then it's clear that the faster our computers get, the slower the simulation has to run.
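To make that solve/verify asymmetry concrete, here is a minimal sketch (subset-sum is just a stand-in, since no specific NP problem was named above): finding a solution takes brute-force search, while checking a claimed one is a linear-time sum you could plausibly redo by hand.

    # Toy solve-vs-verify asymmetry, with subset-sum standing in for
    # whatever NP problem you hand the solver.
    from itertools import combinations

    def solve_subset_sum(nums, target):
        # Brute force: exponential in len(nums). This is the work the
        # simulator can't skip by guessing what you "expect".
        for r in range(1, len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    def verify_subset_sum(nums, target, candidate):
        # Checking a claimed solution is linear time: easy enough to
        # redo "in your head".
        return all(x in nums for x in candidate) and sum(candidate) == target

    nums = [3, 34, 4, 12, 5, 2]
    answer = solve_subset_sum(nums, 9)                 # the expensive step
    print(answer, verify_subset_sum(nums, 9, answer))  # the cheap check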
Alternatively, the simulation could mess with our minds so we never notice anything out of place, but at that point I'm not sure I understand the point of it. Might as well wonder if this is all a dream.
> For instance, medieval times would likely be far easier to simulate than modern times,
Agreed.
> because the latter requires simulating our entire computer infrastructure.
Maybe, that isn't clear. There are probably plenty of optimisations here too if given some thought.
> I could use a solver on a big NP problem (we'll assume that P != NP), get a solution after an hour, and in theory, with some practice, I could probably check if the answer is correct in my head.
Yes, but note that we very rarely solve NP or EXPTIME problems exactly due to the costs. We often solve them heuristically or approximately, which wouldn't pose a problem for a simulation either.
Then there's also the possibility that we are simply not free to choose the problem we solve. When we run a solver on an NP problem, some instance has to be chosen as input, and a simulation could easily have large sets of precomputed solutions available for the instances we end up choosing.
> Alternatively, the simulation could mess with our minds so we never notice anything out of place, but at that point I'm not sure I understand the point of it
Depends on whether any such changes affect the point of the simulation. If the simulation is there to test world-scale economic models, then isolated tribes wouldn't have much influence on those outcomes.
Then again, maybe the point is simply entertainment. Maybe we're just The Sims for post-humans, in which case there's no point anyway.
If the computer code running this simulation is that good to never have bugs, then the simulation is functionally identical to the meatspace real universe from our POV. So I don't know if there's any point thinking about it other than idle curiosity. But I do worry that for some simulation believers it could become an excuse to have less empathy towards fellow humans.
If "the simulation" and "reality" have the same properties, what would "being in the simulation" even mean? A thing/person/etc would "be" in both by definitions, be in neither by others, etc.
> First, I think you contradicted yourself. The line you quote says that posthuman civilizations are unlikely to create simulations, but you say this is false because a universe simulation requires more power than available in the universe
No, they are saying the opposite. The argument that simulating the universe requires more atoms than the universe contains says that a later civilization would not simulate the entire universe. I.e., #2 of the assertions is actually true.
When the simulator shows a previously unseen object, it must first simulate all its history accounting for all effects to ensure that the shown state is legit and doesn't expose the conspiracy. This state should also account for all future investigations. The easiest way to achieve this is to run a precise simulation, so it doesn't save any resources.
> When the simulator shows a previously unseen object, it must first simulate all its history accounting for all effects to ensure that the shown state is legit and doesn't expose the conspiracy
The simulation only needs to produce observations that are consistent with the knowledge of the first observer. Sometimes not even that, as I describe in the blog post, because eyewitness testimony is known to be quite unreliable.
I'm not sure what sort of history you're thinking of specifically.
The existence of Neptune was conjectured before it was observed; the testimony came from instruments. If such consistency with contemporary observers were enforced, scientific revolutions wouldn't happen, as observers would never observe anything that contradicts their knowledge.
I agree, there are necessarily some background facts that must be consistent with the environment. Science might eventually be able to trace the trajectory of the asteroid that killed off the dinosaurs, but that doesn't necessarily mean you would need to simulate every asteroid in the solar system since its initial formation.
The amount of information science could infer on many questions is strictly bounded and in those cases we could only reason stochastically. The data presented at time of first observation can then be generated randomly from the set of answers consistent with what's already known.
Maybe, but if you can segment out the unobserved items and back-calculate them lazily, you could save a lot of processing and memory.
The place to implement an optimization like this would be at the items with the biggest O in your world, which is usually the smallest building block--what you'd have the "most" of, which would drive the largest memory and processing demands.
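As a loose illustration of that lazy back-calculation idea, here's a minimal sketch; the Asteroid class and its fake trajectory are purely hypothetical, and a hypothetical simulator's lazy evaluation would obviously look nothing like Python.

    # Lazy back-calculation with caching: nothing is computed until the
    # first observation, and later observers see the same cached answer.
    from functools import cached_property

    class Asteroid:                  # hypothetical example object
        def __init__(self, seed):
            self.seed = seed

        @cached_property
        def trajectory(self):
            print("back-calculating history...")    # paid only once
            return hash(("orbit", self.seed)) % 360

    rock = Asteroid(seed=42)
    # No cost incurred yet: the object exists but its history doesn't.
    print(rock.trajectory)   # first observation triggers the computation
    print(rock.trajectory)   # subsequent observations reuse the cache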
Fine, you got me: the assertion is obviously true, and it goes further in that it invalidates the need for any of this discussion. If your goal was to engage me in a thought-measuring contest, sure, you win: you've spent more time thinking about this utterly ridiculous nonsense than I have. Congrats?
Yeah, saying I think you're likely confused, as I did, is not remotely the same as calling subjects that interest some people "insane drivel", or "utterly ridiculous nonsense". You definitely need to recalibrate your scale IMO.
If you're not interested in philosophical discussions, then why engage at all, particularly only to denigrate people who like exploring thought experiments?
But how is the simulation hypothesis not positing a "god" of some sort (some kind of super-intelligence that they claim is behind it all)? It seems like the simulation hypothesis is a theistic hypothesis. Or do they assume the simulation just evolved?
Also, why the assumption that post-humans are running the simulations (as in the paper)? Couldn't it be any ultra-advanced civilization that's playing with an evolutionary simulation?
The simulation argument is exploring the likelihood that post-humans would simulate humans. Both post-humans and humans inhabit a universe with the same laws, so this isn't a fictitious universe created by a deity.
> Also, why the assumption that post-humans are running the simulations (as in the paper)? Couldn't it be any ultra-advanced civilization that's playing with an evolutionary simulation?
Sure, potentially. The paper makes no assumptions about the existence of other life forms, it instead extrapolates the likelihood of a simulation given the only intelligent life we know to exist: us.
Therefore you can see the simulation argument from that paper as a lower bound on the probability we live in a simulation. Positing the existence of other life forms that run random simulations can only increase the probability we're living in a simulation, assuming one of the other outcomes isn't more likely.
The problem with theistic hypotheses is that they start from the idea that a humanoid god is a simple explanation (since our brains devote a lot of effort to understanding humans, so humans seem misleadingly simple). The simulation hypothesis treats the idea of an intelligent entity running a simulation as a starting point, and the details of how such an entity would come to exist are taken as a serious point that needs to be explained, whereas with god hypotheses the matter of how that god exists in the first place is generally just waved away.
If a simulation exists, and there is evidence of it, then sure we could surmise that someone created the simulator - and would have some evidence of such?
I think the parent poster was noting that it is a pretty fundamentally different argument than say, positing the existence of a creator, because we exist at all - and that said creator has certain specific requirements of us regarding what we do on Sundays, for instance, or with whom and when we have kids.
> and that said creator has certain specific requirements
Is that a requirement of every flavor of creationism? Actually, maybe I shouldn't use 'creationism' in this context because that's a loaded term with a lot of baggage at this point. What else to call a hypothesis that asserts there's some kind of intelligence behind the universe that we see? Simulationists would seem to fall into that broader category as would old-school creationists.
Well, there are Simulationists who start going on wild flights of fancy about what said simulation creator intended/created it for, which, yeah, would start going into that territory pretty quickly.
Seems like you'd need some kind of falsifiable evidence that we were in a simulation first, before jumping there? Plenty of folks are trying to do that, though, without falling into the first case.
Personally it seems to have little to no real impact on anything I care about one way or another, so filed in the 'cute but who cares' bin.
It is. That's why the Big Bang is the scientific consensus.
Assuming any kind of simulation at all leads to more questions than answers - simply delegating the creation of the universe to the next turtle down. It's not a matter of how "persuasive" an argument is or isn't. It is the evidence the scientific method has produced from which we draw our conclusions.
I actually think the singularity is an interesting concept deserving of exploration. But "singularians" like Nick Bostrom (author of parent link) have some strange ideas.
A. The idea that intelligence beyond human beings would grant its possessor powers that are absolute in some very specific, rigid fashion. Human beings can accomplish a lot of things, and it's notable that the things human beings do better than computers seem very tenuous. Humans seem to drive rather haphazardly, yet humans drive much better than computers, and driving overall seems a "bucket chemistry" sort of activity. Humans calculate much worse than computers, and calculation is an exact, defined activity (arguably, the exact, defined activity). But for the singularians, transhuman devices will do the uncertain, tenuous activities that humans do but with "no mistakes". And for a lot of human activities, "no mistakes" might not even mean anything. Despite humans driving better than computers, humans probably wouldn't even agree on what absolutely good driving even means.
B. Simulation as exact map. Any human-created simulation of some system is going to be an approximation of that system for the purpose of extracting particular phenomena. Some things are discarded, others focused on and simplified. A model of the solar system has to enforce conservation of energy or tiny deviations will produce instability over time, since errors are unavoidable on current hardware. Even a simulation of a computer chip isn't useful unless one knows the chip's purpose is logical operations. But for Bostrom and partisans of the simulation hypothesis, a simulation is treated as an exact map of the thing itself.
C. Incoherent ontology. If we could produce an exact model of a thing, which is the real subject and which is the simulation? What if we could produce twenty "exact simulations", which is real? In a realm of unlimited hypotheticals and unlimited exact simulations, wouldn't at least a countably infinite number of simulations of "everything" exist? Which is real is quite a conundrum, but this problem itself only exists in a world of multiplied objects which we actually have no reason to suppose exists.
Just realized you linked to an article by Nick Bostrom, apparently the same guy who posits the Fable of the Dragon Tyrant. Seems in general to hold opinions in contradiction with mine.
The simulation argument is definitely true, in the sense that one of the outcomes Bostrom describes must be true. I don't think he takes a position on which outcome is true, so I'm not sure what there is to disagree with there.
As for aging and life extension, I honestly don't understand how anyone could reasonably think we shouldn't stop or reverse aging.
Those are some bold claims that not even Bostrom makes. Regardless, I would take someone for a fool who makes assertions without the backing of evidence, and a hypothetical thought experiment is not evidence.
I can certainly comprehend why one would wish to become an immortal being incapable of death. But I just want to be human. Sure you can live forever, but at what cost? A fear for sunlight, garlic, and crosses? For me, "Death is very likely the single best invention of Life."
> There are no propositions that "we are in simulation" would imply (unless someone fundamentally lacks imagination).
Not true! It implies we might find performance optimizations, especially at the lowest level. Lazy loading, caching, pointers to constants, that sort of thing. It also doesn't discard Occam's razor. We actually have examples of simulated worlds (physics engines in games), so we know they are possible, unlike the flying spaghetti monster.
> It implies we might find performance optimizations, especially at the lowest level. Lazy loading, caching, pointers to constants, that sort of thing.
Nah, as others have noted, no simulation could have a 1:1 relationship between the data humans observe and data in a physical device that exists in a world congruent to what humans observe, because there aren't enough atoms in the reachable universe for this. So such a simulation either compresses the actions it simulates using higher-level constructs, or it's happening in some universe not congruent to the world we're in. Any such machine is going to be a product of a future we don't know about yet, and so its constraints could be wildly different. Moreover, since the standard assumption of this simulation foolishness is that future humans or future post-humans want to learn about their ancestors, one can naturally assume mechanisms that compensate for any "glitches" that might otherwise be obvious. Which just adds to my original claim.
> Not true! It implies we might find performance optimizations, especially at the lowest level. Lazy loading, caching, pointers to constants, that sort of thing.
The issue is that you can only learn whether you're in a simulation if the simulator allows you to do so. Otherwise, the moment that you discover a performance optimization, the simulator could just pause the simulation, delete the discovery from your mind, and resume.
If we were in a simulation, it feels overzealous to assume that the computing model would be anything at all like what we've developed. The best assumption you can make is that it follows some kind of consistent logic (though there are caveats here, too).
> that the computing model would be anything at all like what we've developed
Perhaps. I suspect, though, that it would be subject to the same information theoretical constraints which would provide convergent evolutionary pressures.
It seems at least likely that some level of optimization would be useful if there is any type of cost (energy, materials, resources, space) to the computing substrate, whatever that may be, and that would lead to similar optimizations to what we might be able to imagine.
But quantum mechanics is exactly the opposite of what a programmer would add. At least as far as we understand it, it is (exponentially) harder to simulate quantum systems than classical ones.
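A rough back-of-the-envelope sketch of that exponential gap, just counting state-vector amplitudes:

    # An n-qubit state vector needs 2**n complex amplitudes, so the memory
    # for a brute-force classical simulation doubles with each added qubit.
    for n in (10, 30, 50, 300):
        amplitudes = 2 ** n
        print(f"{n} qubits -> {amplitudes:.3e} amplitudes "
              f"(~{amplitudes * 16:.3e} bytes at 16 bytes each)")
    # Around 300 qubits the count dwarfs the atoms in the observable
    # universe, while a classical n-bit state needs only n bits.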
Sure, using systems built from inside the system. In said theoretical world, they may have different constraints and physics after all. (Only kinda serious)
Practically, the simulator theory may be testable, but probably isn't. Every religion I’ve run across is pretty clearly not okay with even being tested.
Cellular automata have a built in speed limit, so it could be something like that. If one cell's state depends on only its immediate neighbors state, then logically no object can move faster than one cell diameter per frame. And if you had shared state between two non-adjacent cells in certain limited cases, that could create "faster than light" behavior.
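Here's a minimal sketch of that built-in speed limit, using rule 110 purely as an arbitrary example rule: flip one cell and watch how far the difference can spread per step.

    # The locality of the update rule enforces a "light cone": a one-cell
    # perturbation can spread at most one cell per step in each direction.
    def step(cells):
        n = len(cells)
        return [
            (110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    a = [0] * 21
    b = [0] * 21
    b[10] = 1                      # single flipped cell in the middle

    for t in range(5):
        diff = [i for i in range(21) if a[i] != b[i]]
        print(f"step {t}: differing cells {diff}")  # grows by <= 1 per side
        a, b = step(a), step(b)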
You're probably interested in something more like the holographic universe hypothesis. Under that hypothesis, I believe "entangled particles" end up staying close to each other in the projected space. 3D space in that case would be an "emergent phenomenon" that isn't necessarily the "base data structure" of the simulation.
The speed of light would just be a rule, like cellular automaton rules: the Planck distance is the cell size, and the rule is you may only move one cell per frame in any direction. Processing speed doesn't matter to us; it could take a million "years" to render a frame, but we experience it in real-time.
As you say, a pointer to a shared memory location is basically a hidden-variable theory. You could also move faster than the speed of light by simply updating your location to any value; I have done this in game hacking before, you just need the WriteProcessMemory API, though you might get caught by anti-cheats.
Isn't it the non-local hidden variable model? The idea being: if local hidden variables do not explain the Bell inequalities, make the hidden variables non-local.
I think this model kind of works and some scientists are working on it, but it is not the preferred interpretation.
It's not really immutable as you can change the parameters of an entangled pair. You just can't communicate any information by doing so, because you need a classical signal to make sure you don't read one of the particles the wrong way.
I could be WAY off, but if locality isn’t entirely true, and the “read success” is 33-67%, doesn’t that still leave quite a bit of wiggle room for communicating information in some fault tolerant method?
You get correlations - you can "understand what you read" once you have the measurements from both entangled particles, so you need another channel of communication (with the associated delays) to get that information.
One side doing their interaction may cause a "spooky action at a distance" (according to some QM interpretations), but if you have only one side of readings and don't know what the other party measured in their interactions, you can't tell anything about what "the other side" did, so it does not help communication at all because you still need to transmit as many bits in a non-quantum way until you can do anything.
Correlations only, but no usable communication. You can both make a decision based on the same random info that isn't determined until later, when you are apart, but you can't know anything beyond this: that if they followed the plan, they made their choice based on the same later-determined random info, correlated with your random info.
If they didn't follow the plan and measured orthogonal/same (can't remember which) spins, then your results are uncorrelated, but you can't know that until you meet back up (maybe barring superdeterminism that is also accessible to the individual).
Suppose we agree before parting that one of us is going to Alpha Centauri and the other is staying on Earth and will assassinate either the President of Russia or the President of America, depending on the observed state of an entangled pair of particles, once I reach the star system.
Doesn't the traveler then have more information than anyone else on the ship about whether an assassination attempt was made in Russia or America, and have it faster than the speed of light? We don't have it with certainty, but we have shared knowledge that is unknowable to others and instantaneous.
I think the two of you would have a shared private piece of correlated information that wasn't determined until you made the measurement (though maybe no joint reference frame to say who made it first), but you can't choose what it is (i.e., communicate with each other).
The universe either had to break the light barrier to make the measurements correlated (predetermining the outcome isn't generally possible, because you could choose how to make the measurement based on another quantum measurement from something outside the other participant's then-current light cone), or make the same choice through superdeterminism (the other measurement and all others were predetermined too, and an exact simulation of the entire future universe's measurement decisions was shared between every particle when they were within some distance at the big bang, or something). But even though the universe broke the light barrier, you yourself aren't able to use it for communication.
In the many-worlds interpretation you've both branched into the same branch of the multiverse, but couldn't choose which branch. You do have private knowledge of which branch and the consequences of that, assuming you both followed the agreed-upon procedure.
I think you can use what you are describing in a series of correlated measurements to set up a provably secure one-time-pad, and then do secure classical communication with it. But you don't communicate the actual bits of the pad, you just both get correlated ones.
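Something like this sketch, where a shared seeded RNG stands in for the correlated measurements (real protocols such as E91 add eavesdropping checks that this toy omits):

    # Both parties hold identical random bits (here faked with a shared,
    # seeded RNG standing in for correlated measurement outcomes) and use
    # them as a one-time pad over an ordinary, slower-than-light channel.
    import random

    shared = random.Random(1234)  # stand-in for the correlated measurements
    pad = [shared.getrandbits(1) for _ in range(40)]

    def xor_bits(bits, pad):
        return [b ^ p for b, p in zip(bits, pad)]

    message = [int(b) for ch in "hello" for b in format(ord(ch), "08b")]
    ciphertext = xor_bits(message, pad)    # sent classically
    recovered = xor_bits(ciphertext, pad)  # decrypted with the identical pad
    print("".join(chr(int("".join(map(str, recovered[i:i + 8])), 2))
                  for i in range(0, 40, 8)))  # -> "hello"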
You didn't gain any information after parting - you'd "know" just as much if your compatriot on Earth had given you a sealed envelope that said "Russia" or "America" inside.
You can take actions that will later turn out to be correlated with each other, but you can only find that out once you meet up again, bounded by the speed of light.
It did not prove that. It proved that at least one of three assumptions about the universe is violated: Statistical Independence, Locality or Determinism. The usual assumption is that Locality has to go, but that is not necessarily true. It is possible that Statistical Independence is not true for the experimental systems that have been studied.
Even ordinary interpretations of QM don't explain the source of randomness in QM. This randomness could be entirely deterministic, which would make the entire universe deterministic. So superdeterminism can still be the case, even if it isn't the reason for entanglement correlations. The entanglement correlations would instead be due to shared RNG references across spacetime, effectively nonlocal.
My point is, locality and superdeterminism aren't mutually exclusive, they are independent. All 4 combinations are logically possible with current experimental evidence.
> The fact that superdeterminism can violate statistical independence is what lets it violate bell's theorem, right?
Yes, the idea is that the detectors, the experimenters, and the particle source all share a past in which they were interacting, relatively recently in the scheme of the human race (let alone civilization, the big bang, etc.). These past correlations would be responsible for the entanglement correlations: not a shared nonlocal RNG reference, but rather an inherited local piece of an old reference.
Nonetheless, superdeterminism doesn't have to be doing anything conspiratorial, but the possibility of doing so is what lets it violate the Bell/CHSH inequalities.
I'll never understand entanglement. Every explanation makes me wonder why it can't be used to instantaneously send a message. I never fully understand the explanations why it can't be used to do so. I don't understand how you can be sure about the state of the other particle: what if someone already measured it and then did something to it?
Imagine you have a pouch with a red and a blue marble in it, then take out a marble without looking at it and hand the pouch to a friend. Later, if you look at your marble, you instantly have information about the other marble at a speed greater than the speed of light... but you couldn't use that fact to send a message.
The only difference in quantum physics is that there are actually two parallel universes: One in which you took out the red marble & one in which you took the blue one. You don't know what universe you're in until you look at the marble, but still it doesn't help you to transmit a message to your friend.
(This is assuming the "multiple universes" interpretation- In the other interpretations there is "spooky action at a distance", but this action happens in EXACTLY THE RIGHT WAY to prevent you from transmitting a message to your friend)
> The only difference in quantum physics is that there are actually two parallel universes: One in which you took out the red marble & one in which you took the blue one. You don't know what universe you're in until you look at the marble, but still it doesn't help you to transmit a message to your friend.
I don't think it is helpful to talk about multiple universes; that makes a strong implication towards a many worlds interpretation. It is better to say that the difference is this: in the classical case, the decision of who gets which marble happens when one of the marbles is taken out of the pouch, while in the case of entanglement we do not really know when the decision happens, but it provably works differently than in the classical case. It might be that the decision is never truly made and both outcomes happen in two parallel worlds; it might be that the decision is only made when one party inspects their marble; it might be that it happens at the same time as in the classical example... we don't know.
> that makes a strong implication towards a many world interpretation
You say that like it's a shortcoming. :)
There are many who take the (very reasonable) position that the many worlds interpretation is the most epistemologically parsimonious one. Contrary to some misunderstandings of it, it doesn't "add" extra worlds; it removes the concept of "wave function collapse", and leaves all the other known laws of quantum mechanics completely unchanged. The "worlds" arise naturally as more and more particles in the environment become entangled with the measured system, and "wave function collapse" turns out to be the predicted observation of an observer who is themselves made out of quantum states.
The only difference between many worlds and the "standard" Copenhagen interpretation is that Copenhagen adds that, at some point, the entanglement process stops, and a bunch of states in the wave function disappear. And it doesn't specify how, or why, or how to calculate when it will happen. Those that advocate for many worlds would point out that this extra epistemological burden is questionable, given that the correct prediction is made without it.
My understanding is that "Wave function collapse" is an artifact of one of the many possible ways of describing quantum mechanics mathematically. There's really nothing to remove, is there?
Sure, but the Copenhagen interpretation is basically rejected as absurdist and is unnecessary for the exact same reason. It's also trying to give a physical explanation for an artifact of one of the many mathematical representations of quantum mechanics. Schrödinger's Cat is a reductio ad absurdum to disprove the Copenhagen interpretation!
> Contrary to some misunderstandings of it, it doesn't "add" extra worlds; it removes the concept of "wave function collapse", and leaves all the other known laws of quantum mechanics completely unchanged.
Yes, it gets rid of the collapse postulate, but no, it actually introduces many worlds. You can wiggle around a bit, claim that prior to the wave function collapse there are also many worlds in Copenhagen or whatnot, but in the end many worlds makes a metaphysical claim that two cats exist, one dead, one alive, while Copenhagen claims only one cat exists in the end.
No, this is the misunderstanding that I'm talking about.
The extra "worlds" follow directly and exclusively from the existence of the various basis states in a wave function, and the laws of entanglement. No other postulates are needed.
Before the measurement/entanglement, the system and environment are independent, and can be written (|0> + |1>) ⊗ (|0> + |1>). After the entanglement, the wave function of the universe can no longer be factored that way, and the system and environment are in a joint state of |00> + |11>. The |00> and the |11> are the multiple "worlds", they show up— in both interpretations— whether you want them to be there or not.
Copenhagen doesn't want them to be there, so it says that one of the |00> or |11> goes away... at some point... because [waves hands and mumbles]. Many worlds merely declines to do this, and that is legitimately the only difference between the two.
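For the curious, here is a small numpy sketch of exactly that claim: build the joint state |00> + |11> as above and check that no unitary applied on Bob's side alone changes Alice's local statistics (the no-communication property). The rotation angle is arbitrary.

    # Build (|00> + |11>)/sqrt(2) and verify Alice's marginal statistics
    # are unchanged by any unitary Bob applies on his side alone.
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

    def alice_marginals(state):
        amps = state.reshape(2, 2)       # rows: Alice's bit, cols: Bob's
        return (np.abs(amps) ** 2).sum(axis=1)

    theta = 1.234                        # arbitrary rotation on Bob's qubit
    bob_u = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]], dtype=complex)
    after = np.kron(np.eye(2), bob_u) @ bell

    print(alice_marginals(bell))   # [0.5 0.5]
    print(alice_marginals(after))  # still [0.5 0.5]: invisible to Alice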
> The extra "worlds" follow directly and exclusively from the existence of the various basis states in a wave function, and the laws of entanglement. No other postulates are needed.
The many worlds are in the entangled state, but then the collapse postulate reduces them to one world. If you remove the collapse postulate, you put them back in. And sure, the collapse postulate is an awful solution, breaking unitary evolution, and you have every right to reject it, but that does not change the fact that many worlds introduces, or at least does not remove, additional worlds that are not there in Copenhagen.
The distinction between "adding" and "removing" a postulate is an important, non-arbitrary one.
The "worlds" are there in both theories; Copenhagen adds a new phenomenon (non-unitary evolution) which makes some of them disappear at unspecified times. The "worlds" are direct consequences of suppositions shared with Copenhagen.
Many worlds has N postulates, Copenhagen has no fewer than N+1. One theory is a strict subset of the other's premises. It is not at all accurate to say that many worlds is the one that "introduces" suppositions.
I think it is not as simple as counting the number of postulates. The ultimate arbiter is physical reality, and that decides whether removing or adding postulates is what brings you into agreement. The fact that the quantum state in some equation before wave function collapse looks like many worlds says nothing about whether you should take this at face value or whether you are missing an additional postulate that gives you only one world.
If you argue that the collapse postulate is stupid because it is non-unitary and therefore in conflict with experimental evidence, then sure, I totally agree with this. But just because collapse is an additional postulate does not per se make many worlds the better theory. Relativity without the constant speed of light is also a theory with one fewer postulate, but it is of course in much worse agreement with reality.
The problem with the interpretations is that we are currently unable to distinguish them experimentally, which unfortunately adds much more personal preference to the discussion than there should be.
Copenhagen sort of agrees with observation by introducing non-locality, which disagrees with the locality seen in other observations, which means Copenhagen disagrees with observation.
>But just because it is an additional postulate it does not per se make many worlds the better theory.
The postulate of collapse contradicts the postulate of the Schrödinger equation and makes the system of postulates contradictory. By removing the contradiction, MWI is strictly better; this in turn also removes non-locality and achieves strictly better agreement with observation, like the special theory of relativity.
> Yes, it gets rid of the collapse postulate, but no, it actually introduces many worlds. [...] but in the end many worlds makes a metaphysical claim that two cats exist, [...] while Copenhagen claims only one cat exists in the end.
If you are saying that many worlds makes a greater number of claims than Copenhagen, that's incorrect, as explained above. The claims made by many worlds are a strict subset of Copenhagen.
If you are claiming that many worlds "introduces" the worlds but Copenhagen does not, that's also incorrect, because the worlds (it seems we agree) are also there in Copenhagen. If they weren't, there would be nothing to "collapse" in the first place!
If you're not saying one of those things, then I'm not sure what that paragraph is trying to say.
> But just because it is an additional postulate it does not per se make many worlds the better theory
It absolutely does, given that both agree equally well with observation.
Each additional assumption in a theory is an opportunity to be wrong. Therefore, given two theories which are in agreement with observation, the theory with fewer unchecked assumptions has a higher probability of being right.
A theory which disagrees with observation (like your altered relativity example) has zero percent chance of being right, so those kinds of examples aren't applicable.
This is just a somewhat more rigorous way of explaining why Occam's Razor is so effective.
For example, take [the dragon in Sagan's garage][1]. We have two models of reality: A garage containing (a) a dragon, who is (b) invisible, (c) dodges touch, (d) floats in air, (e) gives off no heat (f) etc, etc. Or we have a world/garage where none of those things are true.
Both models agree with observation— any measurement we make will not contradict either "empty garage" or "undetectable dragon". And yet one of them is a better theory. How do we know which one is which? The Undetectable Dragon Theory has far more unchecked assumptions (a...f) than the Empty Garage theory (everything in Undetectable Dragon, minus (a...f)).
Same thing for (forgive the extreme example) conspiracy theories. Moon landing hoaxers' ideas agree with observation— they just pile on a mountain of unchecked suppositions in order to avoid contradictions. If we're holding ourselves to good standards of belief, we pick the world model without all those extra unchecked assumptions. This becomes crucially important when there's disagreement about which theory is the Invisible Dragon theory.
Of course Copenhagen isn't nearly as bad as either of those, but the point is to call attention to the epistemological weight of each assumption we add, and what strategy we use for picking between two theories that are not (yet) contradicted by data. Copenhagen is doing more epistemological lifting, and so if we want to be good skeptics and efficient world model-builders, we should require its unchecked assumptions to be checked before we prefer it over other, more parsimonious theories which also agree with the data.
> which unfortunately adds much more personal preferences to the discussion
I quite disagree. The question is of "which strategy to use for picking models of reality which are most likely to be right". There are objectively good and bad strategies for doing that, in the same way that there are good and bad strategies for designing an airplane, or winning at chess, or proving a theorem. Of course no strategy guarantees success, and a "good" strategy might occasionally perform worse than a "bad" one— But without advance access to the solution, we don't know what those exceptions are, so our best bet is to go with the strategy which performs best a priori. In this case, for the sake of avoiding accidental belief in Dragons, the number of unchecked assumptions is centrally important!
I've noticed that some people are simply allergic to the MWI view, likely because of a religious background. Strangely, my physics classes were about 80% fundamentalist, Bible-carrying Christians, as were many of the "Fathers of Quantum Mechanics"!
Many mathematical and physical theories that are perfectly reasonable are rejected by such people out of hand because they don't mesh well with their preconceptions of "The Earth is Special", "I have a unique soul that is me", "Jesus came to us, here, specifically", etc...
I know I'll probably get voted down for this, but these are the literal arguments that I was given once I pressed some of my fellow students hard enough on why they reject MWI.
It's not because they investigated the logic of the situation, like you have. They just "feel" like MWI makes them less special and unique in Creation.
The problems with interjecting an implication that many worlds is true (regardless of how persuasive the arguments in its favor are) into a layperson's explanation of why quantum entanglement doesn't permit FTL communication are threefold:
1. It doesn't have anything to do with the topic at hand. It is, at best, a distraction.
2. It is impossible to design an experiment which is capable of distinguishing between a universe where many worlds is the correct interpretation of quantum mechanics or a universe where Copenhagen is the correct interpretation. Arguing about Copenhagen vs many worlds is literally no different than arguing about Jesus vs Mohamed. We simply do not have the tools to arrive at the truth of the matter.
3. You guarantee that everybody is going to stop discussing entanglement+FTL communication and will start arguing about interpretations of quantum mechanics. I'm not sure if anyone's started talking about De Broglie–Bohm theory yet, but it'll... oops I just did.
I say this as someone for whom many worlds feels truthier.
> It doesn't have anything to do with the topic at hand.
To the contrary, it provides an intuitive model that produces correct predictions. That is inherently valuable and extremely relevant.
> Arguing about Copenhagen vs many worlds is literally no different than arguing about Jesus vs Mohamed.
Strongly disagree— See further discussion[1] under my original post for support.
> You guarantee that everybody is going to stop discussing entanglement+FTL communication and will start arguing about interpretations of quantum mechanics
Uh. No. It definitely -- definitely -- doesn't do that. Many worlds, for all its many virtues, gives us "There's another dimension where you -- you -- are Batman!" Because that's what the phrase "many worlds" means to most people.
> > Arguing about Copenhagen vs many worlds is literally no different than arguing about Jesus vs Mohamed.
> Strongly disagree— See further discussion[1] under my original post for support.
So in that discussion, you made the argument:
> Both models agree with observation— any measurement we make will not contradict either "empty garage" or "undetectable dragon".
...which is literally the argument people make when they're arguing Jesus vs Mohamed. It's just that nobody agrees on which holy being is the empty garage and which one is the undetectable dragon.
Or, in the transactional interpretation, the other guy's marble sends a signal from the future back to your marble to change its color when you look at it.
> I don't think it is helpful to talk about multiple universes; that makes a strong implication towards a many worlds interpretation.
You're correct, but the point is that "many worlds" and the Copenhagen interpretation have no differing implications; they each describe the same mathematical/experimental results. They're just "ways to think about the results". They matter as much as whether you label the axes of a graph x and y or A and B. So any theory that "requires" many worlds is inherently not using the standard interpretation of many worlds and quantum mechanics.
Even if they have no observable differences, they still make different metaphysical claims. If I see Schrödinger's cat alive, many worlds claims that there actually also exists a dead cat, while Copenhagen claims that there is only one live cat. You may argue that the differences are irrelevant for all practical purposes, but you don't get to claim that there are no differences between different interpretations.
> You may argue that the differences are irrelevant for all practical purposes, but you don't get to claim that there are no differences between different interpretations.
It depends what one considers "significant differences". If I go from caring about the practical implications of an interpretation to some other implications, I could make all sorts of distinctions. Explanation X might be written in French and explanation Y might be written in Spanish. Even if one is a translation of the other, you could say they're different in various ways. Or maybe one explanation contains swear words and makes the reader feel bad and so the reader might not "like" that explanation.
But my point above is more specific. Since the two interpretations have the same practical implications, a practical prediction can't really "need" one interpretation; the other interpretation gives you the same result. This is the point about how all the hidden objects/states explanations have classical analogues.
And if we're getting metaphysical, Copenhagen doesn't say live cat or dead cat but says superimposed state.
If you are only worried about analyzing a specific quantum system, then yes, for most part the interpretation does probably not matter. But I think in general the differences are very important, especially as we do not understand quantum mechanics and different interpretations will direct future research in different directions. If you believe in Copenhagen, you will try to figure out how to reconcile unitary evolution with wave function collapse. If you believe in Bohmian mechanics, you will think about the quantum equilibrium hypothesis. If you believe in many worlds, you might be thinking about energy conservation.
> If you believe in Copenhagen, you will try to figure out how to reconcile unitary evolution with wave function collapse.
This is saying the interpretations "exist". The interpretations are like intermediate values in some calculation process that are never returned. If the interpretations are true, they can be seen, not just "locally".
It seems like a lot of this basically involves people who've "suspended disbelief" provisionally, accepted a violation of their intuition provisionally but still are hankering for their intuition to spring back into validity. It's like people who accept that general relativity specifies curved space, understand the implications but are still expecting to somehow find a higher dimensional space that all this is suspended in 'cause that's what a fundamental reality feels like to them.
I am not really sure what you are trying to say. Are you talking about the difference between an interpretation as a way of thinking about something and an interpretation as describing what something really is?
> It's like people who accept that general relativity specifies curved space, understand the implications but are still expecting to somehow find a higher dimensional space that all this is suspended in 'cause that's what a fundamental reality feels like to them.
General relativity works just fine without an embedding space, but there still could be one. Not that you should think about it that way without good reasons, merely because it is more intuitive. But in the case of quantum mechanics we do - at least that is what I think - understand things so poorly that it is hard to judge what one should reasonably consider when making contact with questions about the fundamental nature of things.
"Interpretation", in the way that Copenhagen and Many-worlds are called interpretation, is just a way to add human meaning to an existing formalism. Like adding the label "energy" whatever quantity in Newtonian mechanics.
If many worlds is true, then there are actually countless parallel worlds, if pilot wave is true, then there is only one world. Those are metaphysical claims about the most fundamental aspects of the cosmos, I don't think it is fair to just file this under human meaning.
You take a marble out of the bag without looking and give the bag to your friend. Your friend also takes a marble out of the bag without looking. Both you and your friend now look at your marbles, and they will both be the same colour every time.
You can't send information this way because you don't know the colour until your friend has already taken a marble too. You cannot do anything with the fact that they are both the same colour unless you can control or know the colour in advance, which you can't. When you look at your marble and see that it's red, your friend looks at their marble and sees that it's red; all you know is that you both have red marbles.
The only way to send information would be if you took multiple marbles, looked at them until you found one that's blue (say, the third) and then told your friend to look at the third marble. But since you can't tell your friend to do that without using traditional information sending, you may as well just tell them that the third marble is blue and forego the marbles altogether, you're not sending faster than light information anymore anyway.
This is, as stated, again classical correlation. Where have you used the fact that the observation of one marble affects the observation of the other?
Your above experiment could be done with just a simple bag and two marbles without the need to contort yourself around looking or not looking.
Here's an attempt to fix your example:
You and your friend each take a marble out of the bag, go very far away from each other and make an agreement to look at the marble at a given time in the future and not before or after. If you look at the marble before or after the agreed upon time, the result will be random. If you both look at it at the exact same time, the marbles will be equal.
Maybe you have many bags of marbles so you can do this experiment many times over. You decide to fudge some results by looking at some marbles before or after the agreed-upon time. You and your friend will see the same marble color for all marbles seen at the agreed-upon time and potentially different marble colors for ones that were opened at different times.
How do you tell if your friend fudged the result? How does your friend tell if you fudged the result? The marbles have a 50-50 distribution of being red and blue, and any extra probability of the color flipping one way when a result is fudged is lost in that noise.
If you then reconvene and compare notes on what you observed, you can see a very clear correlation of which experiments were fudged and which weren't but now you've done the work of getting in close proximity and destroyed any chance of faster than light communication.
I'm not a physicist and I don't have a deep knowledge of this stuff. This is a toy example and may or may not be a valid reduction of quantum entanglement. The above explanation is my current understanding, which could be wrong.
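In the same toy spirit (an analogy, not real quantum mechanics), here's a simulation of the protocol above: locally, every record looks 50/50 whether or not the other side fudged, and the correlation only appears once the notes are compared.

    # Each round, matching colors appear only if both looked on time; a
    # single party's local record is 50/50 either way, so fudging is
    # undetectable until the two records are brought together.
    import random

    def run(my_on_time, friend_on_time, rng):
        color = rng.choice("RB")
        mine = color if my_on_time else rng.choice("RB")
        theirs = color if friend_on_time else rng.choice("RB")
        return mine, theirs

    rng = random.Random(7)
    for friend_on_time in (True, False):
        mine = [run(True, friend_on_time, rng)[0] for _ in range(10000)]
        print(friend_on_time, mine.count("R") / len(mine))  # ~0.5 both ways

    pairs = [run(True, True, rng) for _ in range(10000)]
    print(sum(m == t for m, t in pairs) / len(pairs))  # 1.0 once compared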
The only difference is that in QM the marbles don't really exist until you look at them.
Somehow they still manage to align themselves so if one person sees red the other sees blue.
Although it's even more accurate to say that if one person sees [colour] the other person sees [opposite colour].
The colours are random, but the relationship between them is fixed.
Very crudely (and rather misleadingly but never mind) this is why you can't communicate at FTL.
You need the other marble to know whether you had [colour] or [opposite colour]. And that info can't travel faster than the speed of light.
It's even more accurate to say there are no marbles anywhere - only interaction events between marble objects and people-looking-at-marble objects, and the API does not allow you to look inside either to see state.
(The state has to exist somewhere otherwise none of this would work. But Bell proves it's not inside the marbles. So it's "non-local" which is code for "we have no idea where it is".)
For it to be encoded at the cosmic horizon, it has to communicate with the cosmic horizon. It's hard to see it doing so, within the time frame of the experiments, without superluminal communication.
I think your analogy works for both many worlds and Copenhagen. You can view each universe in your analogy as states in the wave function. The two interpretations diverge only when the "observation" occurs. In the Copenhagen interpretation the other universe disappears. In the many worlds interpretation they both remain.
That’s not quite correct. There are no good analogies between classical objects like marbles or socks and entanglement.
In fact, Bell’s inequality was stated as a collaboration game that can only succeed if you use entangled particles. No classical object will get you the same results.
You still can’t communicate faster than light but the reason is more subtle. The article does a good job but for a deeper explanation I’d refer to Sean Carrol: https://youtu.be/yZ1KSJbJAng
All analogies are flawed because the underlying reality is different. They can still be useful if they can communicate some more abstract idea.
An analogy I like for entanglement is to picture two atoms that will both decay at the same time. You could place them on opposite sides of the planet, and until one is observed to decay, nobody learns anything, because the timing is unpredictable. After the observation, both people agree on that timing independent of distance, but can’t communicate anything because the timing was random. Still, having two people both know some fact at the same time which can’t be observed by outsiders is useful in its own way.
What I like about this is that it’s clear what’s going on is different from what’s being described, it’s describing a property of something, and it separates information from communication. On the other hand, it’s got plenty of its own problems.
The problem with that analogy is it gives an illusion of understanding while being completely misleading about what Bell’s inequality actually tells us about nature.
The whole point of Bell’s inequality is that quantum entanglement is fundamentally different than classical correlation between two objects which have some opposite properties the observer simply does not know about before observing one of them.
It’s not helpful to use an analogy which teaches the reader the exact opposite of the point you are trying to make.
Your example with decaying atoms suffers from the same misunderstanding. Quantum entanglement is not about lack of information about some specific states, if that was the case, why would anyone talk about loss of locality?
Understanding entanglement and Bell’s inequality requires a completely different ontology than your everyday experience with classical objects. I highly recommend the video I linked above for an approachable explanation. It is not as simple as these analogies but at least it gets to the actual point of this result which tells us something profound about how nature works.
Not so fast: Bell’s inequality only invalidates local hidden variables. It’s your interpretation that’s suggesting some local variable like a ticking clock was determining when those atoms would decay, but that’s not part of the analogy.
The many worlds interpretation is analogous to global hidden variables and, while out of favor, is perfectly consistent with modern physics. That said, the core issue is IMO that only a one-dimensional property was correlated, which hides a lot of the oddities involved.
You describe that as an analogy, but I always took that to be what it actually is (or at least one very simple example). Are you saying that that is how we interpret our experience intuitively, but we need a more radical account under the various mainstream interpretations of quantum physics (Many Worlds, Copenhagen, etc.)?
That’s right. Not only is it an analogy, it is also a bad and completely misleading one, at least according to the physics of the last 50 years. Note how the article frets about the loss of locality.
The only thing we experience from performing an experiment is the data it provides. As such, the data from existing experiments is where all the spooky action at a distance is actually observed.
I don't think this criticism is correct, at least in response to what was said.
Yes, a classical bag containing classical balls doesn't reproduce quantum behavior, because of Bell's theorem. But GP's description isn't classical; it explicitly invokes multiple universes. Once you've done that, quantum behavior is reproducible, because (just as Bell's theorem says) it's no longer possible to ascribe a single hidden state to the ball/bag system, because you can't eliminate the extra universes.
What exactly does it mean? This sentence always feels as if these particles were sentient beings that can react when someone sees them.
If observing means performing an experiment and finding out, is this information (that an experiment to observe the actual state has been performed) sent to the other half? To know the state of the other half, another experiment has to be carried out on it.
How is it any different from picking randomly from a set of pairs? We won't know what the other half is until one of them is observed.
Also you could look up some other, related quality instead and the result on the other side would still be the reverse. This cannot be explained using some hidden variable in the particles (like the color of your marbles), so it requires an action to happen in the distant particle dependent on which quality you chose to look up in the local particle.
Your discussion is inherently within the classical realm, and so it doesn't explain the uniqueness of quantum phenomena. You can easily have "classical many worlds", where unknown information makes a model "split" and gaining the information decides which split you get. That's still not quantum in particular.
Personally I prefer the superdeterminism argument: i.e., the state of every "future" entanglement was already set "before" the big bang. The anthropocentric corollary is that "free will" is an illusion.
You shouldn't even bother responding to the person claiming to be a research scientist in his/her profile while writing such garbage in an attempt to justify such nonsense. HN is overcrowded with people who are against the notion of free will being an illusion. Even dang has a bias. All comments like yours just get unfairly downvoted until they're not visible.
Yeah, there is definitely an SF hippie/beat woo vibe here at times. Maybe "consciousness" is fundamental and free will is real, but so far we haven't seen any evidence. I think superdeterminism is more parsimonious than multiple universes.
It can't be used to send a message because all you can do is measure your particle. Even if doing so changes the state of the other particle far away (which isn't really what's happening, but that doesn't matter), all the other person at the end can do is measure their particle.
Neither of you can choose what the state of either particle is. You have no control, so there's no way to transmit information.
What you can do is agree in advance that you will both take certain actions based on the measured state of the particles. There's no way to be sure the person at the other end actually does so though.
AIUI, yes, it has been found that, e.g. there are games where two players who can’t communicate but who do have a pair of entangled qubits and can each make a measurement, would have a strategy which is better for both players than if they didn’t have this resource.
(Where, in the version without the resource of the entangled pair of qubits, the games are fairly simple ones in which each player receives some information and then has a choice between two options.)
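The standard example is the CHSH game: a referee sends random bits x and y, the players answer a and b without communicating, and they win when a XOR b equals x AND y. No classical strategy wins more than 75% of rounds, while measuring a shared Bell pair at well-chosen angles wins cos^2(pi/8), about 85.4%. A sketch of that arithmetic, using the textbook angles for the (|00> + |11>)/sqrt(2) state:

    # CHSH game win rate for the optimal entangled strategy. For the Bell
    # state, P(same outcome) at measurement angles alpha, beta is
    # cos(alpha - beta)**2.
    import numpy as np

    def p_equal(alpha, beta):
        return np.cos(alpha - beta) ** 2

    alice = {0: 0.0, 1: np.pi / 4}       # Alice's angle for input bit x
    bob = {0: np.pi / 8, 1: -np.pi / 8}  # Bob's angle for input bit y

    win = 0.0
    for x in (0, 1):
        for y in (0, 1):
            same = p_equal(alice[x], bob[y])
            # players need a != b only when x = y = 1
            win += (1 - same) if (x and y) else same

    print("quantum win rate:", win / 4)  # ~0.8536 vs the classical cap of 0.75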
I find that this article [0] from Conway and Kochen is helpful. The authors do not really explain the paradoxes of quantum mechanics. Instead they reduce them to minimal fundamental axioms that have been tested and observed experimentally, even though they are arguably highly counter-intuitive (notably SPIN and TWIN). Based on those axioms, the authors show that you cannot send a message through entanglement. More precisely, they show that a particle has a free will, in the sense that the result of a measurement on it "is not a function of properties of that part of the universe that is earlier than this response".
Two balls are in a box. Neither is spinning. The box gets “shaken up” and the balls hit each other. We know that one ball is spinning clockwise and the other counterclockwise because angular momentum is conserved. The balls launch far away from each other. We know the spins are entangled, in that one is clockwise and the other counterclockwise, but we don’t know which is which until we measure. How do we use that to communicate?
> how you can be sure about the state of the other particle, what if someone already measured it and then did something to it?
Indeed, you are only sure about the state of the other particle in the instant just after they measured it. Whoever measures first instantly destroys the entanglement link, so if they chose to manipulate the particle after measurement, you will have no knowledge of these manipulations.
More generally, note that in quantum mechanics "reading" the state of a particle (i.e. performing a measurement) is drastically different than "writing" information by manipulating a particle. Most entanglement-related weirdness hinges on this fundamental asymmetry between "read" and "write" operations for quantum information.
Or even better than instantaneously, let's get messages sent to us from the future using a Ronald Lawrence Mallett time machine based on a ring laser's properties, such that at sufficient energies, the circulating laser might produce not just frame-dragging but also closed timelike curves (CTC), allowing time travel into the past. I cannot believe that Ronald Mallett's biggest challenge is getting funding for a feasibility test. Isn't it the greatest venture capital opportunity of all time?
Total layman, I’m sure I’ll get corrected if I’m wrong here:
Bell's Theorem is just a model showing how two sets of measurements of certain properties of entangled particles would differ in a “Quantum Physics” regime where there is spooky action at a distance changing the measured properties versus a “common sense” regime where both particles leave the entanglement site with those properties already set.
[0] is a fairly easy to understand graph of the correlation of the measurements at the two sites in both regimes, even if the math that generates those charts is beyond me.
Given that, just knowing what measurements you got only gives you an idea of what the other guy across the universe would see were he to measure your particle’s entangled partner - he has no more control over what those measurements actually are than you do, you just know that there’s a certain correlation between what you saw and what he saw. That’s why you can’t use Bell’s Theorem to communicate, neither one of you is actually controlling the measured property, there’s just a certain correlation between the measurements both of you got.
It seems to this layman that in order to communicate FTL using QM, you’d need a way to determine that the property being measured has already been “collapsed” (if that’s the right word) by spooky action at a distance, e.g if the other guy had already measured it. Bell’s Theorem gives us no way to determine if the other guy has actually made the measurements, it assumes that both you and he have both made those measurements.
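(Not the actual chart from [0], but here is a sketch of the two curves such graphs typically show, using the standard spin-singlet predictions: QM gives a correlation of -cos(theta), while the usual straight-line local-hidden-variable benchmark ramps linearly.)

    import numpy as np

    # Correlation between the two sites vs. relative detector angle,
    # in the "quantum" and "common sense" regimes described above.
    theta = np.linspace(0, np.pi, 7)

    quantum = -np.cos(theta)            # QM prediction for a singlet pair
    classical = -1 + 2 * theta / np.pi  # linear curve from a simple
                                        # "properties set at the source" model

    for t, q, c in zip(np.degrees(theta), quantum, classical):
        print(f"{t:6.1f} deg   QM: {q:+.3f}   classical: {c:+.3f}")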
You’re measuring the value, not setting it. Just like looking at a ball that was wrapped in a bag, seeing that it is red means your partner must therefore have picked the blue ball, but there’s no way you can control whether your partner had the red or blue ball, so the two of you can’t use that information to communicate FTL.
Not a physicist, but my answer is that superluminal speeds are usually the price physicists are willing to pay to explain what is observed in experiment. I get your objection to the rather convoluted argument that special relativity still applies to message transfer, but I accept it. John Preskill explains the information within entanglement with an analogy to a book. Normally with a book, you can read one page separately from all the other pages. Further, if you unbound the book and randomly distributed the pages to your friends, you could put your heads together and reconstruct the entire book. With a "quantum book", the information is encoded in the correlations between the observables, and you can only see the information when all the pages of the book are together and in the correct order. If you look at a single page of the quantum book, it's purely random gibberish, and you can't derive anything about the book by looking at a part of it.
Look up how entanglement is done experimentally. It will always involve a technology which can be used to classically transmit information at a distance.
What happens in entanglement is that the two objects receive, say, an entangled photon; it is at this point that the two objects become entangled.
Entanglement is a dance of the statistical limits and position of a particle/object given a specific space/energy configuration (initial condition). From this we know the probability of where it can be, what states it can assume, and the limits of both—given the energy it takes to traverse space and assume those states at once.
They are entangled because once information of the states of one of the entangled objects is measured (mainly by analyzing the exiting photon), we can apodictically discern the state of the other.
> I'll never understand entanglement. Every explanation makes me wonder why it can't be used to instantaneously send a message.
Say you and Bob share a bunch of entangled particles. Bob wants to send you a message using those particles, so he takes one particle at a time and encodes his information. How would you know he did so? At the very least, Bob would still have to send you a classical signal to say he did something.
There are more subtle arguments why this doesn't work even at the particle level, but that at least should give you an idea why superluminal communication won't work.
Funny, I never understood how you could possibly send a message using entanglement. Try to explain how would you do it, and either you will understand why it can't be done... or earn a Nobel prize. Win-Win.
You can measure a particle's spin to be up or down. But you can't choose to measure it to be up. It's random and up to nature. This is exactly why it can't be used to send information.
Reminds me of the in-lore comms system in Mass Effect. I think the comms in the ship were two atoms that were entangled, allowing instant messages no matter how many lightyears away the ship was from Earth.
Let's say particles have a "direction angle" that we can measure with a detector that only answers "up" or "down" relative to the detector's own angle setting. We can change this setting with a knob, which sets what the measured "up" and "down" answers are relative to. Further, let's say particles can be quantum entangled, and that two detectors placed very far apart (many light years, say) each measure one member of an entangled pair.
When the two detectors are set to the same, but arbitrary, angle, the detectors give the same answer. This is ordinary correlation. Quantum correlation says that as one dial moves away from the other reference point, the correlation falls off like a sine wave, not the linear decrease that would be expected from classical probability.
To see how bonkers this is, do the following experiment:
Set detector X to be at angle 0 and detector Y to give a 1% error rate relative to X. Call that 1% angle 'a'. So a sample experiment run might be:

    X(0): 0110100101 ...
    Y(a): 0100100101 ...
In the above, 1 could be an 'up' and 0 a 'down' detection, say. For concreteness, let's just say X and Y ran 100 detections and there was one difference between them (giving the 1% error), represented by the differing third bit in the strings above.
Now let's change both X and Y by the same angle, so the relative error rate between them is still 1%. This might give something like:

    X(a):  1011010110 ...
    Y(2a): 1011010010 ...
X and Y still have one difference in the above, but now with the 8th position changed. So far this is nothing unexpected from classical probability.
Now, we know that from X(0) to Y(a) there's one change, and from X(a) to Y(2a) there's one change. Classical probability says there can be at most two flipped bits from X(0) to Y(2a). Quantum mechanics predicts three.
To convince yourself, try making a list of bits such that there's one difference between X(0) and Y(a), one difference between X(a) and Y(2a) but three differences from X(0) to Y(2a). It's impossible and this is the heart of Bell's theorem.
Bell's theorem is a classical probability statement, generalized from my above statement that if |X(0)-Y(a)|=1, |X(a)-Y(2a)|=1 then |X(0)-Y(2a)|<=2. Quantum entanglement violates Bell's inequality.
The 0 reference point has to be arbitrary (in the above it should really be X(ref_angle + a), Y(ref_angle + 2a), etc.) and you have to assume no faster than light communication (that is, independence) to get the contradiction. There are some further subtleties with the above argument but hopefully that's intuitive enough to follow why quantum entanglement is so counter intuitive.
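To put numbers on it, here is a sketch using the photon-polarization convention where the mismatch rate at relative angle t is sin^2(t) (for spin-1/2 it's sin^2(t/2); the structure is the same). The quantum rate at 2a actually comes out nearer 4% than 3%, but anything above the classical cap of 2% breaks the inequality:

    import numpy as np

    a = np.arcsin(np.sqrt(0.01))   # the angle that gives the 1% error rate

    err_a = np.sin(a) ** 2         # X(0) vs Y(a): 0.01 by construction
    err_2a = np.sin(2 * a) ** 2    # X(0) vs Y(2a): ~0.0396

    # Classically, mismatches can only accumulate:
    # err(2a) <= err(a) + err(a) = 0.02. QM nearly doubles that bound.
    print(err_2a, "vs classical bound", 2 * err_a)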
I see, I actually answered the wrong question, sorry about that.
I'm also a bit confused by why it can't be used to send information but here's a try:
In the above scenario, if the particle (pair) has completely random spin, one that can only be observed by detection and not set by some sort of construction, then each observer sees a completely random bit, regardless of whether it gets "flipped" by the "non-local" observation/communication of the other particle. They'll only be able to discover the correlation after the fact, if they compare notes, and thus they have to meet up (or communicate classically), destroying any non-local benefit.
Put another way, if you have a bit with probability p of being 1 ((1-p) of being 0) that you're communicating over the wire but the wire is so noisy as to flip it with probability 1/2, then you won't be able to recover what the transmitted information was.
You'll be able to discover the correlation between the bits if you compare notes after the fact but since the "wire" acts as a completely noisy channel, you can't recover the transmitted bit.
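A tiny sketch of that point: a wire that flips the bit with probability 1/2 delivers a fair coin no matter what was sent, so zero information gets through.

    import random

    def channel(bit):
        # Flip the bit with probability 1/2: a completely noisy wire.
        return bit ^ (random.random() < 0.5)

    sent = [1] * 10000                    # the sender always transmits 1
    received = [channel(b) for b in sent]
    print(sum(received) / len(received))  # ~0.5: indistinguishable from noise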
The subtler issue is that it's a counterfactual question. What would have happened if I had measured or been able to measure all three angles 0, a, 2a? In this case the bit string is the same except for the 1% difference. In other words, X(t) = Y(t), for all t.
The argument is essentially trying to construct a "hidden variable" model and showing that it can't work.
It's easy to understand (I'm being a bit hyperbolic) if you can believe that space and time are emergent properties of matter and not required for the underlying physics.
I think I prefer Felder's explanation to Quanta's. It omits some details (e.g. the angles) but is better at explaining the difficulties of Bell's Inequality: why it seems like spooky action at a distance and why it cannot be used for communication.
One thing I've not been able to clarify is whether Bell accounts for the possibility that passing through a polarization filter could affect the wave/particle in some way, like altering its polarization angle.
Yes, the "altering" you're describing is what the theorem would call a hidden variable. Seems reasonable, but when you do the math it's exactly the kind of theory Bell's inequality rules out. There's no way to set the "polarization angle" (or any other set of variables) such that they obey the probabilistic laws we've observed (without violating some other assumption, like single measurement outcomes, statistical independence, determinism, or locality).
Couldn't the passage through the first filter affect the polarization angle in such a way that it matches the blue line instead of the red line? One could devise a physical contraption to demonstrate this is possible by sending bar magnets through slits made of magnets of the same polarity. Any magnet oriented such that it's too close to a slit will reorient slightly. Visually...
That is, if both photons or magnets happen to pass through the first filter, their probability of both going through the second filter is boosted by the slight reorientation, and thus their outcome correlation will be boosted, which is what the blue line in the graph above shows.
My main contention is that it's possible to construct a physical apparatus using visually observable non-quantum macro objects (like pairs of bar magnets) that pass through such filters with the same correlations shown by the blue line in the graph above. Such correlations would apparently violate Bell's theorem, even though the objects were obeying classical laws of motion.
And it's certainly conceivable that passing through a slit could change the orientation of a wave, whether mechanical wave...
Note that there is a very important property of entangled particles that is hardly ever mentioned in this kind of exposition, which IMHO casts a lot of light on what is really going on, and that is that entangled particles do not self-interfere the way non-entangled particles do. For more details see:
I don't think the paper justifies the statement as you put it, though perhaps you can point out what I'm missing. I don't think you can tell just from looking at the particle itself whether it has an entangled partner somewhere in the universe.
It is, however, possible to use the entangled partners to create systems with decidedly counter-intuitive properties that change the way the un-involved partner interacts. That's also the essence of Bell's Theorem.
It only works when you're controlling the experiment as a whole and thus not transmitting information faster than light... though you can set up the experiment in a way that makes the conventional transmission of information incredibly obscure. Bell's Theorem requires you to jump through a lot of hoops to exactly mimic that, which is why it took a long time to definitively rule out other interpretations of the experiments.
See section 4.2, and in particular the paragraph that starts "Here's the kicker..."
It is true that you can't tell if a single particle is entangled or not. But if you have an ensemble of particles all prepared in the same state, then you can tell whether that state is entangled. Non-entangled (a.k.a. pure) states have a preferred basis that produces self-interference. Entangled (a.k.a. mixed) states do not.
(The pure-mixed dichotomy is a little misleading because it depends on your point of view. A single member of an EPR pair is in a mixed state, but the pair as a whole is in a pure state.)
I agree with the GP. If you only have a single member of the pair, then you will see the same interference pattern in a double slit experiment as with a non-entangled particle.
It doesn't matter if the other particle has collided with a brick, gone through a double slit experiment, gone through a bad double slit experiment, or is flying to Andromeda.
(Although the calculation of the correlation with any result in the other experiment may be much harder for the entangled pair than for two non-entangled particles.)
What can I say? You're wrong. The math shows that you're wrong (as does the elementary argument presented in the paper). Find a physicist and ask them if you don't believe me.
I have at least 8 physicists whom I email/zoom with regularly (at least twice per month). I also have half a degree in physics, with at least 2 courses on quantum mechanics (all the advanced courses also use QM, but there were 2 courses devoted solely to QM). [I also have a degree and a PhD in math, but that's not too relevant here.]
Anyway, I'll read the article thoughtfully and write a long comment tomorrow. Can you take a look tomorrow?
Just to be clear, the part you are wrong about is this:
> If you only have a single member of the pair, then you will see the same interference pattern in a double slit experiment than with a not-entangled particle.
This bit:
> It doesn't matter if the other particle has collided with a brick, went thru a double slit experiment, went thru a bad double slit experiment, or is flying to Andromeda.
is correct.
Also, there is an interference pattern in the results of a double-slit run on entangled particles, but it is not "the same" as you get with non-entangled particles, and the procedure you have to go through to observe this interference pattern is radically different.
> Here's how it works. We send a pair of EPR photons through a pair of two-slit apparati each of which has a polarization rotator on one of the slits. On one side of the apparatus (side A) we install a polarization filter which filters out interference on that side and makes it visible. We can filter out interference on the other side (side B) of the apparatus as follows: on side A we keep a record of which photons passed through the filter and which were reflected. On side B we keep a record of where each photon landed on the screen. We then take these two records and combine them: for each photon that was passed through the filter on side A, we take the corresponding photon on side B and note where it landed on the screen. The end result is a (visible) interference pattern. It was there all along, but the only way we can filter it out so we can see it is to combine information from both sides of the experiment. And that is the last nail in the coffin of superluminal communication via entangled photons.
Let's suppose you are in lab B and measure where the photons hit the screen. There are no visible interference patterns. But just before you can call lab A, it is nuked from orbit.
Now you are unsure whether the people generating the photons were sending pairs of entangled photons to A and B, or just pairs of ordinary photons.
Can you look at the data you collected in B and discover what the people in the generator were doing?
Now imagine the same experiment with a rebuilt lab A', but you remove the polarization rotator. Now you see the interference pattern. And A' gets nuked again. Can you look at the data you collected in B and discover what the people at the generator were doing?
---
I understand that you can take a plane, collect all the pieces of A and A', and reconstruct them; after all, classical mechanics and quantum mechanics without the measurement rule are reversible, so it's theoretically possible, but very impractical.
I think this is discussed in section 5. For the measurement problem, I prefer the something-something-decoherence solution. I call it something-something-decoherence because there is still a lot of work to be done before it's clear whether it's the correct solution.
---
About the experiment in 4.2:
I'm 99% sure that after adding the polarizer at 45° there will be no interference. You can split the polarizer into two smaller polarizers with the same angle, one for each slit. Then exchange the order of rotator+polarizer in one slit to polarizer+rotator. Note that after the exchange, the new polarizer must be rotated 90°, so it's a polarizer at "-45°". Now the first element in one slit is a polarizer at "+45°" and in the other a polarizer at "-45°", so they select orthogonal states and you get no interference even after rotating one of them.
I think this can be fixed using a quarter-wave plate, but the calculation is slightly more complicated.
[Sorry for the looooong delay. I missed your reply last week and I just saw it yesterday night.]
Now I'm confused. I had to read the link carefully and try to translate the experiment with polarizer to the experiment with the double slit.
I'm still not sure, but an important point is that in a usual double slit experiment there is a single slit in front that acts like a collimator and ensures the photons have no preference for either slit. Let's consider what you would see, in isolation, in lab B:
* If you add the collimator, then all effects of entanglement are destroyed and you see the usual interference pattern.
* If you don't add the collimator, you have an ensemble of particles that go through each slit and cause no interference pattern. It's not necessary to add a polarization rotator to one of them.
* If the experiments are far away, does that count as an implicit single-slit collimator? I'm confused here. I'd prefer a version with only polarization, or some other property that isn't mixed up with the geometry of the experiment.
Yeah, it's confusing. I should probably write a new paper just on this topic because there isn't really a good explanation anywhere.
> translate the experiment with polarizer to the experiment with the double slit
Best not to get too hung up on the physical details. What matters is that a stream of particles can get separated along two separate paths and brought back together, and this can produce an interference pattern. The particular degree of freedom along which the separation takes place (position, polarization, spin, whatever), and the details of how the paths are split and brought back together (two slits, half-silvered mirrors, a Stern-Gerlach apparatus, whatever), are mostly irrelevant. What matters is:
1. When unentangled particles are sent through one of these split-combine setups they produce an interference pattern.
2. When you "measure" the degree of freedom along which the particles are split in one of these split-combine setups, the interference pattern disappears and is replaced by a non-interference pattern. (This is just basic quantum mechanics 101.)
3. When you send entangled particles through a split-combine setup what you get is a non-interference pattern, exactly the same as the one you get when you "measure" an unentangled particle. But...
4. If you go through a rather elaborate process (see below) you can separate the entangled particles into two groups, each of which exhibits an interference pattern, and these two interference patterns will add up to make a non-interference pattern. (Even more interesting, there is more than one way that you can do this separation, each of which will produce a different pair of interference patterns, but any given pair will add up to the same non-interference pattern.)
The "elaborate process" involves measuring the complementary observable on one member of each entangled pair, and classifying the other member of the pair into one of two groups based on the outcome of that measurement. This is where the physical details get really complicated for anything other than polarization, where the complementary observable is just polarization along an axis rotated 45 degrees from the original.
Note also that the reason that #3 above is true is that measurement and entanglement are actually the same physical phenomenon. The mathematical description of an entangled particle and a "measured" particle is exactly the same.
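A toy numeric sketch of points 1, 3, and 4 (not the paper's actual setup; the phase profiles are made up for illustration):

    import numpy as np

    # Two-path toy model: each path contributes an amplitude at screen
    # position x, with an x-dependent phase difference between the paths.
    x = np.linspace(-10, 10, 1001)
    path1 = np.exp(1j * 2 * x)     # made-up amplitude via path 1
    path2 = np.exp(-1j * 2 * x)    # made-up amplitude via path 2

    # 1. Unentangled (pure) state: amplitudes add first, then square.
    pure = np.abs(path1 + path2) ** 2                # fringes: 4*cos^2(2x)

    # 3. Entangled particle: the which-path information lives in the
    # partner, so the cross term vanishes and probabilities add instead.
    mixed = np.abs(path1) ** 2 + np.abs(path2) ** 2  # flat: no fringes

    # 4. Sorting by the partner's complementary measurement splits the
    # flat pattern into two fringe patterns that sum back to it.
    group_a = 0.5 * np.abs(path1 + path2) ** 2       # fringes
    group_b = 0.5 * np.abs(path1 - path2) ** 2       # anti-fringes
    print(np.allclose(group_a + group_b, mixed))     # True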
> 1. When unentangled particles are sent through one of these split-combine setups they produce an interference pattern.
It depends on how you prepare the particle beam. With a laser, or by passing the beam first through a single (centered) slit, you get a beam of pure-state particles that produce interference.
If you use another method, you can get an ensemble that doesn't produce interference.
About the use of the polarizer at 45°: I'm still not convinced. I have to write it up carefully, but right now I'm super busy [1]. I hope to find time to write it next month.
Anyway, it would be nice to replace the double slit experiment with another experiment, like a beam splitter. It's difficult to remember what the two slits and the screens do to my bra-kets. (I think I know now, but I must write it down carefully.)
I'm no expert but I also lean toward superdeterminism. It's either that or the universe is not deterministic at all. Believing that the universe is only partly deterministic is the same as believing someone can be partly pregnant.
As I understand it, superdeterminism isn't the same as determinism. Normally Bell's theorem would rule out plain determinism, but a deterministic system can still end up with arbitrarily skewed statistics and thus look like anything, and you can't expose it because it's skewed in a specific way. That's superdeterminism: by pure chance, it looks like something it isn't.
Superdeterminism is still determinism, but with an added idea: all particles share all of their future states. Superdeterminism is against the idea of real-time processing, while quantum mechanics is typically imagined as real-time processing. Sure, determinism can describe a deterministic system producing infinite universes that deterministically work through every possibility imaginable, where a local subject could get the impression things aren't deterministic even though they are when the whole system is viewed non-locally; but that doesn't really fit what you're describing as chance.
In any determinism, all particles share all future states simply because future states are predictable and there's only one future, which is total statistical correlation. In superdeterminism these future states aren't just any states, but particularly selected states that show a skewed picture. And they ended up this way arbitrarily, which is chance (maybe Bayesian).
Gerard 't Hooft believes we would still have quantum computers faster than classical computers in a superdeterministic universe. However, the speedups would be more modest, and factoring huge numbers in polynomial time would be out of the question.
I've quoted it before but I will again just because I hate superdeterminism so much:
First, the logical flow: Bell’s theorem proves that no local, realistic theory can reproduce the predictions of quantum mechanics. It does so by considering a very specific situation of entangled particles being measured by spin detectors set at different angles. Critically, the angles of these spin detectors are assumed to be set independently from one another. ...
Experimenters have tried to ensure independence for all practical purposes with elaborate techniques: independent quasi-random number generators running with different algorithms on different computers are one very basic example. On more advanced experiments, they use quantum sources of randomness, and try to make sure that the choice is only made once the particles are in flight.
The trouble is that in principle, there will always be a point in the past at which mechanism used for the angle choice, and the mechanism used to produce the entangled particles were in causal contact with one another. (If all else fails, then the early universe will provide such a point.) The super-determination thesis says that any past causal contact can in principle provide correlation between the settings of the two detectors (or the detectors and the properties of the particles), and is the source of the violation of Bell’s inequality.
Here’s a deliberately ridiculous example. Once the particles are in flight, I throw in the air a box of Newton’s notes on alchemy. I select the one that falls closest to my feet. I roll two dice, and use them to select a random word from that page. I match the word with its closest equivalent in Caesar’s commentary on the Gallic wars, or the Iliad, or the complete works of Dickens, my choice of work depending on the orientation of the Crab pulsar at the moment of measurement. I use the word position in these works to select a number in this book A Million Random Digits (take the time to read the customer reviews). And I use this number to set my detectors. I repeat this for my other measurement runs, but I substitute in Dan Brown’s Da Vinci Code for Dickens every third go.
Superdetermination advocates would tell me that there is in principle a causal connection between my throwing the papers in the air, Newton, Caesar, Dickens as they sat down to write 300, 2000, and 150 years ago, the Crab pulsar and the RAND corporation’s random digit selection. And that it’s possible that these things have conspired (unknowingly) to make sure that my detector settings and a particle’s spin measurement is correlated in a particular way in my lab in a law-like way.
I can only reply that yes, it’s possible. I cannot prove it wrong. But I can find it unreasonable. And I would be tempted to call these people philosophically desperate.
> I can only reply that yes, it’s possible. I cannot prove it wrong. But I can find it unreasonable. And I would be tempted to call these people philosophically desperate.
I'd be tempted to call those people closet theists who are in denial, but maybe you're more polite than I am.
What I mean is that they have something in their system that is playing a god-like role, but they're "scientific", so it can't be God.
By the way, I would say the same about the "universe is a simulation" people.
Your argument is kind of hand-wavy, and the same could be said about quantum mechanics if you don't give the theory much thought; similarly for the comment you're replying to. There are a few ways superdeterminism can make a universe such as our own while making humans believe in quantum mechanics, just as there are a few ways quantum mechanics can make a universe like ours with humans believing in superdeterminism. Both theories use logic to express how they do it, and the only significant difference is real-time processing vs. predeterminism.
> And that it’s possible that these things have conspired (unknowingly) to make sure that my detector settings and a particle’s spin measurement is correlated in a particular way in my lab in a law-like way.
Yes, exactly: which is to say that your instrument calibration, dictated by that elaborate randomization process, just ensures that the particle will arrive in a specific configuration, which is a purely local, realistic phenomenon.
Sabine and Palmer recently explained how superdeterminism can be understood easily as future input dependence:
Edit: despite superdeterminism annoying you so much, I bet you're perfectly fine with general relativity in which time is just another space-like coordinate, and the correlation you describe is a perfectly well-defined path along a closed timelike curve. An interesting inconsistency if true.
I think s/he is simply against anything that postulates things aren't happening in real time. That's what the problem boils down to regarding superdeterminism vs quantum mechanics. Certain people are okay with predeterminism, while others want every moment to have been processed when it happened. Maybe "process" isn't the best way to express it, but it gets the point across.
Is it reasonable to say the universe might be superdeterministic, but in the example of choosing measurements for an experiment (or almost any other example imaginable), it might as well be truly random, since the causal links affecting the instruments aren't likely to be 'conspiring' in some way to impact the results of the experiment?
E.g. anything could be predicted with absolute knowledge of the starting state of the universe and infinite computing power, but in most practical cases the causal connections between seemingly unrelated objects are irrelevant and as good as random?
I think it stops being science at that point, though. For example, if someone made a quantum computer powerful enough to factorize large numbers, that would appear to disprove superdeterminism. However, proponents could always argue that the computer only works because the universe conspires to make the human entering the numbers to be factorized pick specific values whose factors the computer already knows.
I'm not a physicist though, so I might have something wrong here.
> I would be tempted to call these people philosophically desperate.
Wouldn't it be equally valid to say that the Quora commenter is philosophically desperate to avoid the natural conclusion that there is an entity that is able to influence the actions of Newton and Caesar etc. and the commenter themselves?
"This is the first of a set of papers that look at actual Einstein-Podolksy-Rosen (EPR) experiments from the point of view of a scientifically and statistically literate person who is not a specialist in quantum theory."
Couldn't that 'hidden variable' be just another dimension?
So while those particles might have been separated in space by a large distance, on that fifth dimension they haven't moved and are still sitting side by side.
The distance between two points is basically just the Pythagorean theorem: distance^2 = x^2 + y^2 + z^2. If you have extra dimensions, you just add more terms. Now, if you know that e.g. x is very large, then that puts a lower bound on the distance:

distance^2 = x^2 + y^2 + z^2 + ... >= x^2.
In other words, if x is large there's no way that the two particles are still somehow close together.
Put two iron pellets on a very thin, flexible fabric. Underneath the fabric, put a magnet so each pellet is stuck to one pole of the magnet through the fabric. Tug the magnet "down" in the third dimension, which will fold the fabric and move the pellets so that from their perspective (stuck to the 2D surface of the fabric) they're moving far away from each other. They also maintain a constant and very short distance in the third dimension, but they can't take advantage of it because I forgot to mention the fabric is impervious to them.
This is only true for cartesian spaces. If the low-dimensional manifold is "folded" or "crinkly" inside a higher-dimensional space then you can indeed have particles that look far apart on the manifold but which are actually close together in the higher-dimensional space.
(My terminology here is largely wrong; this is not my field. Perhaps an expert can correct me.)
Bell's theorem is strong: No local hidden variables can explain quantum entanglement.
Your proposed fifth dimension is exactly such a variable – there is no set of values it can take on that explains the quantum phenomena we can easily and reliably observe.
The fact that you think of it as a distance isn't important to Bell's theorem. Such a theory cannot even obey relativity in our existing 3 dimensions (obeying the speed of light in your extra dimension would be an even stricter requirement!).
Sure. There are assumptions you need for Bell's Theorem to apply. Statistical independence, single-outcome measurements, locality, determinism... All of these feel right to our scientific brains, but something's got to give.
So yes, you can throw out Bell's theorem if you're willing to throw out statistical independence with super-determinism. It's a tough leap to make, though. Super-determinism implies that there aren't really any laws of quantum mechanics. The fact that billions of quantum interactions have been observed in labs to obey consistent probabilities over time is pure coincidence. The fact that two entangled electrons always have opposite spin is pure coincidence. Nothing about the laws of the universe say you couldn't observe the other thing, it's just that the initial conditions of the universe prevented us from making measurements any time we would have measured otherwise. It's a tough pill to swallow.
My comment above is just saying you can't throw out Bell's theorem by saying "maybe there's a fifth dimension". I don't think GP meant a fifth dimension plus superdeterminism.
From what I can gather, in a single world view, it seems that thinking of reality in an information oriented way results in more accuracy. So you might have interactions that aren't dependent on distance, like with entangled particles. Taking on this view is a bit of a mind bender from a normal spatial view of the world though.
There are many ways to explain the result, and all are extremely strange. Particles communicating at FTL speeds? Particles actually touching in an unknown dimension?
It is also strange, but I think the many worlds hypothesis makes the most sense to me. Then there is no spooky action at a distance, the observer is just always in a particular version of reality that all states, even distant ones, conform to. I am not a physicist!
If you "measure" the bits per character in the base-45 alphanumeric encoding used in QR codes, you get 5.5 bits per character, as 11 bits are used for two characters.
How is it possible to have information less than a bit, a partial bit? What is that ".5" part? Isn't a bit indivisible?
Only in the context of a character doublet is all the information expressed. To know the "half bit" part, you cannot "look" at just one character; you have to look at the pair. The information is shared between the two characters. Measuring the bits-per-character is only meaningful when considering the whole system. The "partial bits" are information smeared across the system. Changing the middle bit may change one, or both, characters.
Here's an 11-bit example, where the middle bit is changed and it changes both characters:
(11101001010 vs 11101101010, or '/L' vs '%8' encoded)
The same principle applies to information theory and cryptography. Security can be measured in "partial bits" because it's measured across something larger.
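A sketch of where the 5.5 comes from, using the QR alphanumeric table (my own example pair rather than the one above):

    import math

    # QR alphanumeric mode: 45 symbols, pairs packed as 45*v1 + v2
    # into 11 bits (45*44 + 44 = 2024 < 2048).
    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

    def encode_pair(c1, c2):
        return ALPHABET.index(c1) * 45 + ALPHABET.index(c2)

    def decode_pair(v):
        return ALPHABET[v // 45] + ALPHABET[v % 45]

    print(math.log2(45))                 # ~5.49: the "partial bits" per character

    v = encode_pair("H", "I")
    flipped = v ^ (1 << 5)               # flip a middle bit of the 11
    print(decode_pair(v), decode_pair(flipped))   # HI -> I5: both characters change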
5.5 bits is also the average information content of a single run of the GHZ experiment. In this setup three parties independently choose a binary detector setting and each observe a binary outcome. The first two parties observe an independent random bit regardless of their settings. If an odd number of the parties have their setting "on", then the third party also observes an independently random bit (6 bits total to record: 3 for the settings and 3 for the observations). But if an even number of the three settings are "on", then the third party's observation is completely determined by the other 5 bits. When the settings are chosen randomly, these two possibilities are equally likely, so on average it takes 5.5 bits to record the results of the experiment.
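A quick enumeration of that accounting (just the bookkeeping above, not a simulation of the GHZ correlations themselves):

    from itertools import product

    total = 0
    for settings in product([0, 1], repeat=3):
        n_on = sum(settings)
        # 3 outcome bits when an odd number of settings are "on";
        # otherwise the third outcome is determined by the other bits.
        outcome_bits = 3 if n_on % 2 == 1 else 2
        total += 3 + outcome_bits        # settings + outcomes
    print(total / 8)                     # 5.5 bits per run on average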
What is the prevailing theory to explain quantum entanglement? Must there be another dimension we cannot access or measure that is not subject to the laws of relativity? (I understand the laws of relativity break down at the quantum level but please ELI5)
I think people get confused when they think that each object has a wave function. This is not correct. The universe has one wave function. The wave function consists of a bunch of possible states along with the coefficient for each state. You can think of each state as being a distinct snapshot of what the universe might look like - including for example the position and spin of each particle. In the example of two electrons shown here, the wave function has non-zero coefficients only for states where the two electron spins are in opposite directions.
When we make a measurement, the state of the universe appears to collapse, meaning any state that is not consistent with that measurement disappears. This means the other electron is left in the opposite spin state. (Important aside here: some people believe the wave function collapses, the "Copenhagen interpretation", and some people believe the wave function doesn't change but the brain of the observer correlates/entangles with the electron, the "Many Worlds interpretation". Either way there is an operational collapse of the wave function.)
A special case of a wave function is when the coefficients are arranged so that the state of one particle, say particle 1's spin, is the same no matter what the state of another particle, particle 2, is. This special case is when the particles are NOT entangled.
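A numeric sketch of this picture (a toy two-electron wave function; the minus sign is one conventional choice for an opposite-spin state):

    import numpy as np

    # Wave function over the four configurations |up,up>, |up,down>,
    # |down,up>, |down,down> of two electrons. Only opposite-spin
    # configurations get non-zero coefficients, as described above.
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

    # "Measure" electron 1 as "up": every configuration inconsistent
    # with that result disappears, then we renormalize what survives.
    mask = np.array([1, 1, 0, 0])        # configs with electron 1 up
    post = psi * mask
    post = post / np.linalg.norm(post)
    print(post)                          # [0, 1, 0, 0]: electron 2 is "down"

    # The non-entangled special case: the wave function factorizes into
    # a product, so measuring electron 1 says nothing about electron 2.
    half = np.array([1.0, 1.0]) / np.sqrt(2)
    product_state = np.kron(half, half)  # [0.5, 0.5, 0.5, 0.5]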
I think what I get hung up on with explanations like this is, what changes once the wave function has collapsed? Are there observable characteristics before wave function collapse that become different after the wave function collapses?
Maybe this question just reduces to “how can we tell the difference between two entangled particles having always been in some state (but we didn’t know it) vs. being simultaneously in both states until we make a measurement?”
Based on other comments in this post, it seems like the answer may be: Bell's theorem proves that classical explanations have an upper bound on correlations between the particles, but quantum mechanics predicts a correlation that violates the classical upper bound. And we can experimentally test the correlations in practice.
I want to add to my above comment. Non-entanglement is a special mathematical case, but it happens quite often. If the two particles never interact in any way, then the special condition will hold and they will not be entangled. There is another case where the particles _appear_ not to be entangled: when the wave function is so jumbled that even though the particles are entangled you can't detect it. This is called decoherence. It also happens quite often and is why macroscopic objects don't exhibit entanglement and hence quantum behavior.
Quantum entanglement falls out of the quantum mechanics, so in some sense, the prevailing theory to explain quantum entanglement is quantum mechanics.
Of course, it's unintuitive and unsettling, so you could generate other theories about other dimensions if you like. But as far as predicting the results of any experiments we can do, QM is all you need.
Also, there are two very different theories of relativity, the special and the general. Special relativity is taught in 1st year undergraduate physics, you really only need high school math & physics, plus an open mind, to understand it. This has E = mc^2, twin paradox, length contraction, time dilation, speed of light as a limit. It's actually a pretty small topic, it usually doesn't have a separate course because it wouldn't fill a one semester course. QM is fully consistent with Special Relativity.
The other is general relativity, which revises gravity in light of special relativity. This is a much bigger topic and typically taught in grad school, although there are some undergrad texts now that don't require math as advanced as the grad school ones. QM and GR are incompatible, and the search for a "quantum theory of gravity" is a key plank in any "theory of everything."
It's easy to explore QM and SR, because it's easy to accelerate fundamental particles to near the speed of light. Here's a video from 1962 where electrons were accelerated, they measure the time between passing two points (to get speed), and heat energy deposited on a target (to get kinetic energy) to show how SR works. Nothing QM specific, but shows how easy it is to get quantum particles moving that fast, so you can do experiments on them: https://www.youtube.com/watch?v=B0BOpiMQXQA
Combining gravity, which needs great mass, with QM, which needs small space scales, is "hard" to do in a lab.
There isn't really an "explanation" for quantum entanglement. It is a fundamental property of the universe, arguably the fundamental property of the universe. But the Right Way to think about it IMHO is this: the quantum wave function is not defined over physical space, it is defined over configuration space. A wave function defined over physical space is a special case that pertains when you are dealing with a system consisting of a single particle, in which case physical space and configuration space are the same. But as soon as you add a second particle, this physical intuition breaks down.
The prevailing theory that explains quantum entanglement is precisely the theory of quantum mechanics, OP. If you're genuinely curious, I strongly encourage you to obtain an undergraduate degree in physics, which will equip you with the mathematical and theoretical background to see how the one explains the other.
Not necessarily. Bell's theorem assumes statistical independence, but that means that either spooky action at a distance is real, OR that experimenters do not have complete freedom to configure their instruments (aka superdeterminism).
> really permits instantaneous connections between far-apart location
The phrasing in this article is tricky, as it wasn't FTL communication that was proven; just that there are correlations between things that would require FTL communication, were they classical processes. This is an important point: https://xkcd.com/1591/
One reason I often hear that astrology is not taken seriously by the scientific community (findings like 'athletes often have Aries rising on their birth chart' are ignored and not evaluated further) is that there is no empirical foundation for how the effects could be communicated.
There actually is an empirical foundation for the communication of the effects![0] But the model is strictly simpler if you remove the astrology from it; astrology has no additional explanatory power, and its novel claims (claims not predicted by any other model) are wrong.
> Such children are more likely to be picked for school teams. Once they are picked, players benefit from more practice, coaching and game time — advantages denied to those not selected, who are disproportionately likely to be younger for their selection year. Once accounting for their biological age, the older players might not have been any better than later-born children when they are first picked. But after becoming part of a team, and being exposed to training and matches, they really do become better than later-born children who might be equally talented.
Sorry if my language was unclear, I meant to highlight this line from the wiki article:
"There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth in the way astrologers say they do that does not contradict well-understood, basic aspects of biology and physics."
Relevant here because it essentially says "there is no empirical basis for spooky action at a distance" which has been grounds for dismissal of such action-at-a-distance claims like 'the relative positions of celestial bodies influence events on the earth'.
This kind of empiricism has been used as grounds not to critically evaluate these claims. Everyone is certainly free to have their own reasons for not wanting to evaluate such claims. For example, some people only want to consider things that are easily falsifiable and subject to particular scientific practices. The wiki article goes on to mention how Carl Sagan refused to disavow astrology on these grounds (i.e. gravity is weak, so stellar influence writ large ought to be weak) while still leaving room for a disavowal on firmer grounds. I do think your point about simplicity is salient here.
> 'the relative positions of celestial bodies influence events on the earth'.
Who's claiming that‽ The relative positions of celestial bodies have influenced all sorts of events. For instance, the horoscopes in the newspaper, or photographs of the night sky.
No, what's in doubt is astrology, which is a much more specific set of (wrong) claims.
I'm not a Physicist. From what I understand, Bell's theorem only covers local hidden-variables and that this can still be explained using non-local theories like Bohm's. Can someone shed a bit of light on the non-local theories and if Bell's theorem addresses those as well?
Bell was inspired by Bohm's theory. Having seen that a rational theory of existence was compatible with QM, Bell wanted to know if the nonlocality could be removed. Bell's work showed that it could not be removed. Bell was a proponent of Bohm's work. Bohmian mechanics is a perfectly sound theory, proven to be consistent and in agreement with quantum mechanics results whenever QM has results.
EPR (and more relevantly, Bohm's version of it involving spin entangled particles) showed that either one has hidden variables or nonlocality. Bell's work showed that local hidden variables is not possible. Hence, nonlocality is the conclusion and the argument to demonstrate the incompleteness of quantum mechanics falls apart.
The Many Worlds interpretation kind of gets away from this by exploiting another assumption in that work: that results from experiments actually happen in the sense that the other outcomes don't. Most people at the time would have just assumed that, but MW points out that experiments need not have definitive results; instead we have multiple results of experiments correlated with multiple versions of experimenters who think they have results. Thus, the correlations we see only need to be reconciled when the two groups of experimenters get together to compare notes, a perfectly local notion. Of course, some descriptions of MW, with branching of the universe, are rather nonlocal, but a proper formulation of MW is possible without any of that.
By the way, a key feature of Bohmian mechanics is the reliance on position as the "hidden variable". The spin of a particle is not an aspect of the particle itself, but rather of the wave function guiding the particle. In the spin experiments, the correlation with the environment once measurement has happened, separates the overlapping spin parts of the wave function and the particle is then being guided by that singular spin. Hence the appearance of the particle behaving as if it had that spin, but the particle is not actually spinning, whatever that would mean for a point particle.
Bell's theorem is a mathematical theorem with narrow scope: it addresses local hidden-variable theories and says they won't fly. It does nothing else, and it doesn't prove spooky action at a distance either - that's the author's addition.
MWI allows you to have entanglement without “spooky action at a distance”. However, it requires exponential blowup in representational complexity of the universe, which also feels aesthetically displeasing.
Does the article do justice to the hidden variables hypothesis?
In the case of hidden variables, the spin is a (3-dimensional?) value that is identified by the measurement result. In the case of quantum theory, we have a probability distribution. How is that probability distribution different from a hidden variable, except that it's not a straight number but a function instead?
Speaking as a programmer, is the difference between hidden variables and quantum mechanics that the former postulate a real-valued property whereas the latter speak of something like a monad?
Speaking as a lay person, I think the difference might be that it's specifically about local hidden variables. If two particles are coupled, there's no per-particle hidden variable?
Seriously, what's the difference between a non-local variable and a deterministic function that yields a pseudo-random value?
From what I understand, we have some function f that can be evaluated by the measurement and we have another, entangled function g, that can also be evaluated but will yield the opposite of f.
Now instead of assuming that f is truly random and somehow communicates its value instantaneously to g when evaluated, we could also assume that f and g contain a copy of the same pseudo-random number generator and the same seed.
In both cases the interpretation of the model is weird, of course. But I don't see a fundamental difference here that wouldn't allow for hidden variables. It's just that the hidden variables would have very non-trivial domains.
edit-too-late-to-edit: I remembered something that makes me think it's actually more complicated.
If I write 0 and 1 on two different pieces of paper, then flip a coin to decide which paper to give you, we have "entangled" unknowns. When I reveal my paper, we instantly know what's on yours. The joint distribution can't be described per-particle, but we don't consider it spooky. So I think there's something more to it.
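Here's the shared-seed idea from the comment above as a sketch: it reproduces the perfect anti-correlation of the pieces-of-paper example with no communication at "measurement" time.

    import random

    # Two far-apart "measurement functions" that carry copies of the
    # same PRNG and seed, so g always yields the opposite of f with no
    # communication when they are evaluated.
    def make_pair(seed):
        f_rng = random.Random(seed)
        g_rng = random.Random(seed)
        f = lambda: f_rng.random() < 0.5
        g = lambda: not (g_rng.random() < 0.5)
        return f, g

    f, g = make_pair(42)
    print([(f(), g()) for _ in range(5)])   # perfectly anti-correlated pairs

What a scheme like this cannot do, per Bell's theorem, is also reproduce the angle-dependent correlations at all pairs of detector settings; that "shared internal state" class of models is exactly what the theorem rules out.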
Because your example does not contain superpositions. With entanglement, the pieces of paper are both 0 and 1 at the same time until they are observed; only then does the superposition of both particles become resolved.
Tomayto - tomahto, right? Physically speaking, what’s the difference between an unresolved superposition and a different kind of unresolved random variable?
I think Einstein would've considered nonlocal hidden variables the same as spooky action at a distance.
>How is that probability distribution different from a hidden variables, except that it's not a straight number but a function instead?
Because for entangled particles (separated by a large distance! but really any entangled particles) the PDFs are correlated in a way that is impossible to define for just a single particle. This makes physicists uncomfortable because of relativity and things happening faster than light.
I believe the QM interpretation is that probability distributions are to be taken literally - a flipped coin under a napkin is both heads and tails with P=.5
Hidden variables, on the other hand, acknowledge the probability but contend nevertheless that the coin is actually in a specific but unknown state.
Quantum teleportation does not allow transmitting classical information faster than light. What it allows is, using an entangled state along with transmitted classical information, to send another quantum state across the distance; but the classical information still needs to get there first.
This is how quantum teleportation works.
Alice and Bob have entangled qubits A and B; Alice also has a qubit C with some state that she wants to send to Bob, and Bob has some other qubit D.
To do this, Alice does some operations between qubits A and C, which include a measurement. Alice then sends the classical information that is the result of the measurement to Bob. Bob then does some operations with qubits B and D, where the operations he does are chosen based on the classical signal he received from Alice. IIRC these operations include making a measurement. As a result, qubit D is now in the state that qubit C was in originally. (Alice no longer has access to the state that C was in.)
So, in a sense, the state teleported from C to D, using the combination of the entangled A and B, along with sending some classical information. (The classical information sent is random, and to anyone snooping on the message sent, gives them no useful information, because they don’t have Bob’s qubit B to use it with.)
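Here's a state-vector sketch of those steps, in the standard version of the protocol where Bob's half of the entangled pair (B) itself ends up in the payload state, so no extra qubit D is needed. The helper function and the 0.6/0.8 payload are my own illustrative choices:

    import numpy as np

    rng = np.random.default_rng()

    def apply(gate, state, targets, n=3):
        # Apply a 1- or 2-qubit gate to the given qubits of an n-qubit state.
        state = state.reshape([2] * n)
        state = np.moveaxis(state, targets, list(range(len(targets))))
        shape = state.shape
        state = gate @ state.reshape(2 ** len(targets), -1)
        state = state.reshape(shape)
        state = np.moveaxis(state, list(range(len(targets))), targets)
        return state.reshape(-1)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    CNOT = np.eye(4)[[0, 1, 3, 2]]

    alpha, beta = 0.6, 0.8                 # the state Alice wants to send
    psi_C = np.array([alpha, beta])
    bell_AB = np.array([1, 0, 0, 1]) / np.sqrt(2)
    state = np.kron(psi_C, bell_AB)        # qubit order: C, A, B

    state = apply(CNOT, state, [0, 1])     # Alice: CNOT with C controlling A
    state = apply(H, state, [0])           # Alice: Hadamard on C

    # Alice measures C and A, getting two classical bits m1, m2.
    probs = (np.abs(state.reshape(4, 2)) ** 2).sum(axis=1)
    outcome = rng.choice(4, p=probs)
    m1, m2 = outcome >> 1, outcome & 1
    bob = state.reshape(4, 2)[outcome]
    bob = bob / np.linalg.norm(bob)        # Bob's qubit B after the collapse

    # Bob corrects B using only the two classical bits Alice sent.
    if m2: bob = X @ bob
    if m1: bob = Z @ bob
    print(np.round(bob, 3))                # [0.6, 0.8]: the state arrived at B

Note that the final print only matches the payload because of the classical bits m1 and m2: without them, Bob's qubit alone is an even mixture and carries no information, consistent with the no-signaling point made elsewhere in the thread.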
Apologies that I don't have time right now to clarify further, but I highly suggest you read the Wiki for entanglement and then reread the articles with that context.
These simply do not do what you think they do. Information cannot, under any currently theorized model of physics, be transmitted faster than light.
At the speed of light, it would take between 4 minutes and 14 minutes to travel between Mars and Earth. A quick google says fiber optic cable can transfer data at roughly 70% of that speed. It may not be possible to get the latency you're describing. Mars is far away.
There's no spooky action at a distance. Let's imagine we have an entangled qubit system that consists of a superposition of the states (0,1) and (1,0), i.e. either part A is in state 0 and part B in state 1 or vice versa. When we perform a measurement on the first part of the system and obtain 1, it simply means that we have "branched" into the (1,0) state of the system. This branching is usually irreversible because of the decoherence caused by the measurement (which itself is just an ordinary quantum process). There is no information exchange or any type of exchange between the two parts of the system going on, we simply branch into a part of the probability space defined for the system. The question whether the other branches still exist then leads to either the "classical" interpretation of quantum mechanics or the "many worlds" interpretation. The latter seems to be favored today as we know that there's nothing special about the measurement process that causes the collapse of a wave function (it's a quantum process in itself), but in the end there's not really a way to test this so it's really more of a philosophical question.
Articles about "spooky action at a distance" should really mention this, as we have a much better understanding of the measurement process in quantum mechanics today than Einstein et al. had when they wrote their paper.
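To make the "no exchange going on" point concrete, a small sketch: the far party's local statistics (their reduced density matrix) are identical whether or not the other side has measured, which is why branching can't be used to signal.

    import numpy as np

    # Amplitudes M[a, b] of the (0,1) + (1,0) superposition described above.
    M = np.array([[0, 1], [1, 0]]) / np.sqrt(2)

    rho_before = M.conj().T @ M            # B's reduced density matrix

    # After A is measured, B faces a 50/50 mixture of the two branches.
    b_if_a0 = np.array([0, 1])             # branch (0,1): B is in state 1
    b_if_a1 = np.array([1, 0])             # branch (1,0): B is in state 0
    rho_after = 0.5 * np.outer(b_if_a0, b_if_a0) + 0.5 * np.outer(b_if_a1, b_if_a1)

    print(np.allclose(rho_before, rho_after))   # True: B's statistics unchanged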
What you're describing can be done with classical physics. Have one penny and two lockets. Blindly place the penny in one of the two lockets. Take one locket across the world. Opening it instantly lets you know whether the other locket contains the penny.
And the point of this description is this is not what's weird about quantum entanglement.
What's weird about quantum entanglement is that you have two different measurement types that are non-orthogonal, and they combine according to quantum logic rather than classical logic [1]. Having a particle in a state of this sort can't be explained by any analogy to discrete events occurring beforehand.
Edit: Whether this is "spooky action at a distance" is in the eye of the beholder. One thing it isn't able to be is fully reducible to actions happening on something like a "classical timeline"; another thing it isn't able to do is transmit information.
If there's one single phrase I wish I could erase from history it's "Spooky action at a distance." Ugh. It bugs me a lot that Quanta made these misleading statements that just continue the confusion over what should be a more widely-understood core feature of the universe we live in.
Tangentially, I wish "interaction" would come to replace "measurement," especially in the context of decoherence. The universe is branching *constantly* everywhere as various quantum systems interact.