How does ALiBi compare to rotary positional embeddings? That method makes similar claims. I find ALiBi much easier to understand, but that’s probably not the best reason to choose it over other methods.
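For what it's worth, the core of ALiBi really is small: no learned position embeddings at all, just a head-specific linear distance penalty added to the attention logits before the softmax. A rough sketch of my understanding (simplified to a symmetric penalty and the paper's power-of-two slope schedule; not the reference implementation):

    import torch

    def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
        # One slope per head: a geometric sequence, as in the ALiBi paper
        slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
        # Distance |i - j| between query position i and key position j
        pos = torch.arange(seq_len)
        dist = (pos[None, :] - pos[:, None]).abs()
        # Shape (n_heads, seq_len, seq_len); add to attention logits pre-softmax.
        # (The paper's causal version only penalises j <= i and masks the rest.)
        return -slopes[:, None, None] * dist

Rotary embeddings instead rotate the query/key vectors by position-dependent angles, which plays the same relative-position role but is (to me) harder to picture.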
> I'm not sure if Burton was a Luddite who didn't believe in statistics or the scientific method, or if he didn't care about learning from his tests.
It's possible. It's also possible that he had a good sense of how test results are presented by program managers to Congress, and was trying to accurately convey the situation. Congress typically doesn't have time to delve into the details of a test; they get top-line results like "the Bradley did not catch fire when shot by an RPG" even though the footnotes would talk about the water in the tanks.
More generally, there is a tendency even today to make test results look good through judicious selection of test conditions. Program managers will refuse to do tests where "we already know the answer" - but only when we think the system won't work. We do plenty of tests when we have high confidence the system will work. So you get headlines like "86 of 105 hit-to-kill intercept attempts have been successful" [1], without the context that we never attempted the shots that we think we would miss, even if those scenarios are tactically important.
I'll grant that there are several motivations for testing like this, but let's not pretend that they are all purely technical.
In the case of hit-to-kill intercepts, terminal guidance was proven and reliable 30-40 years ago (at least); it is a mature capability. There is no need to test that it can hit the target per se if the rocket can precisely respond to the guidance commands.
What changed is that they later attached that terminal guidance to new high-performance rocket motors that pushed the materials science requirements to a point where it was difficult to get the rocket to respond precisely to guidance commands and the terminal guidance package itself suffered ablative damage due to extreme acceleration. As such, all of the tests for the last 20+ years have been tests to determine if the missile components materially degrade or fail in-flight, regardless of what they are aimed at. The nature of the target and test environment are almost irrelevant to this question -- hitting the target is pretty strong evidence that the materials didn't fail.
The link you provide seems to say that the cost of wholesale PV + storage is $85-$158 (bottom figure, line 3). Am I misreading that?
I’ve noticed that many solar+storage installations these days have 4-hour storage, which is not sufficient for baseload. I think the number would be higher if we were shooting for baseload from our storage.
It's not obvious how to translate this figure into a standalone plant cost. The CBO study is quoting the acquisition cost premium of a nuclear ship over a non-nuclear ship. I believe that estimate is the premium for taking a non-nuclear design and adding a nuclear reactor to it, so it does not include a lot of plant infrastructure that is part of the cost of civilian power stations. Also, I would think the requirements for the containment structure and safety systems would be significantly different.
Fair enough. Currently each America-class ship [1] uses 2 GE LM2500 turbines [2]. This type of engine has been produced in huge quantities, more than 2000 to date. The unit cost is about $11 MM [3], so it's negligible compared to a naval nuclear reactor.
Where a nuclear reactor comes out ahead is fuel cost, which is zero (it's included in the construction cost, and there's no refueling for the lifetime of the ship). A conventional turbine burns fuel, and over the lifetime of a ship that can add up to quite a lot. It can certainly get above $1 BN.
In fact, the CBO analysis shows that the total cost for 5 such ships would be $14.2 BN using conventional engines and $14.8 BN using nukes ([4], page 6, top table). That was when the oil price was $86/barrel. Now it's $110, so on that basis alone the calculus can switch to favor the nukes.
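A back-of-envelope check (the lifetime fuel spend is the unknown here; the CBO totals above don't break it out):

    # How much lifetime fuel spend (inside the conventional fleet's $14.2 BN)
    # makes the $0.6 BN nuclear premium a wash if oil goes from $86 to $110/barrel?
    premium = 14.8 - 14.2          # $ BN, nuclear minus conventional (CBO, 5 ships)
    price_rise = 110 / 86 - 1      # ~28% increase in fuel price
    breakeven_fuel = premium / price_rise
    print(f"break-even fuel spend: ${breakeven_fuel:.1f} BN")  # ~$2.1 BN over 5 ships

So if the five conventional ships burn more than roughly $2 BN of fuel over their lives at the old price, the oil price rise alone erases the nuclear premium.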
They don't want to exacerbate racial disparities, which is noble. They could have used that as their sole justification, but for some reason they also try to say that the crime datasets are inaccurate. Their reasoning for this is surprisingly weak. Regarding the National Crime Victimization Survey: "And there are troubling signs of this [racial bias]: in the 2019 survey, people reporting crimes were more likely to describe their offender as young, male, and Black than would be expected given the representation of those groups in the population."
They want to store the state of received photons on "quantum hard drives", transport them to the same place, read the states out, and let them interfere locally.
This is the paper mentioned above, from December 2020, about hour-long optical storage (by Chuan-Feng Li et al.):
The state (phase) of the received photon will depend on the position of the receiver. Even if you have a perfect quantum hard drive, you still need to maintain the position of your receivers to within a wavelength (well under a micron for optical light). That will be hard to do if they are separated by a long distance.
There is still other work to be done, and worries about what happens in the fibre optics, but it is becoming possible, especially if the coherence does not need to be long-lived.
The directive to strike a positive tone could be nefarious or benign; we can’t tell from the info in the article. Every research paper has a tone that depends on the personality of the author; I’ve seen it even in my dry field of quantum optics. Every result has many implications, and how you discuss them, which implications you emphasize, and the language you use will all affect the tone, even if you are not explicitly furthering an agenda. A given article could have a range of legitimate tones, depending on who wrote it. Asking a researcher to be on the positive end of that legitimate spectrum is benign. It can of course go too far.
I happen to agree with you that the observer is a quantum system and must get entangled with the observed quantum system, but that still doesn't explain probabilities. If you repeatedly prepare a system in, say, sqrt(1/3) spin-down + sqrt(2/3) spin-up and then observe it, your subjective experience is that you saw spin-down 1/3 of the time and spin-up 2/3 of the time. I don't understand how purely unitary evolution can explain this. Does it?
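(Here are the frequencies in question as a toy simulation; note it simply assumes the Born rule, probabilities = squared amplitudes, which is exactly the thing wanting an explanation:)

    import random

    # Toy sampling for the state sqrt(1/3)|down> + sqrt(2/3)|up>.
    # Born rule (assumed, not derived): P(down) = |sqrt(1/3)|^2 = 1/3
    p_down = 1 / 3
    trials = 100_000
    downs = sum(random.random() < p_down for _ in range(trials))
    print(downs / trials)  # ~0.333, the frequency unitary evolution alone must account for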
> I don't understand how purely unitary evolution can explain this. Does it?
What's the alternative? Assuming unitary evolution and some fairly common-sense axioms about how we'd expect subjective experience to behave (things like: we never experience being in a branch that has amplitude zero; if we experience being in a given branch then we continue to be in that branch), the Born probabilities are the only model anyone's ever come up with for how our subjective experience should go. So what's there to explain?
The alternative is non-MWI theories, which typically introduce the Born rule via new axioms.
Regarding what's to explain, it's quantum randomness (which distills the Born rule objection). Our subjective experience is that we see spin-down 1/3rd of the time, and our theories say the result is otherwise impossible to predict, even in principle. But a deterministic theory cannot produce a random outcome, even a subjective one.
> Our subjective experience is that we see spin-down 1/3rd of the time, and our theories say the result is otherwise impossible to predict, even in principle. But a deterministic theory cannot produce a random outcome, even a subjective one.
Whyever not? What else would you expect the subjective experience of being in a state like 1/sqrt(3)|x> + sqrt(2/3)|y> to be like?
What would you expect "experiencing along those other bases" to look like? If you expand along a different basis you just get something like: half a chance of experiencing (1/sqrt(3)|x> + sqrt(2/3)|y>), and half a chance of experiencing (1/sqrt(3)|x> - sqrt(2/3)|y>), so it amounts to the same thing.
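(To spell that out: write the observer's pointer states X, Y in a rotated basis W = (X+Y)/sqrt(2), Z = (X-Y)/sqrt(2) (my notation, just for illustration), with a = 1/sqrt(3), b = sqrt(2/3). Then

    a|x>|X> + b|y>|Y> = (1/sqrt(2)) [ (a|x> + b|y>)|W> + (a|x> - b|y>)|Z> ]

so each branch in the new basis is still just a superposition of the x-experience and the y-experience.)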
> the structure of the wavefunction is that it divides cleanly into those two branches, and that's true in any basis.
It's not. It only has this Schmidt decomposition in one basis. In other bases there will be cross terms among the basis elements. What you're doing is privileging Schmidt bases as ones that give experiences. In another basis with states w,z say the state will be:
|w>|w> + |z>|z> + |w>|z> + |z>|w>
So you won't be able to give this clean "experience" reading unless you posit that we can't experience along bases like w, z here, only along Schmidt bases; but then you run into the problem that real macroscopic systems (being mixed and multipartite) won't admit a Schmidt basis.
This seems like the kind of "vague" Many Worlds where one doesn't look any deeper than pretending a macro-device is a qubit (e.g. no thermal states etc.) and looking at one basis. There's a reason properly developed MWI is nothing like this, such as the Spacetime State Realism of Wallace and Timpson.
Why one would believe in quantum state realism at all is a separate question.
> Of course you can
No, you can't; it's a direct consequence of the Kochen-Specker theorem. If the device is treated quantum mechanically and it enters an entangled state of the form you gave, then you cannot perform conditioning: the Kochen-Specker theorem, via the non-uniqueness of Hilbert space orthogonal decompositions, prevents an unambiguous formulation of Bayes's law. I can link to papers proving this if you wish.
The fact that we do experiments where we can condition is, in light of this theorem, a demonstration that our measurement devices do not enter into the kind of CHSH states you're giving.
> It's not. It only has this Schmidt decomposition in one basis. In other bases there will be cross terms among the basis elements. What you're doing is privileging Schmidt bases as ones that give experiences. In another basis with states w,z say the state will be: |w>|w> + |z>|z> + |w>|z> + |z>|w>
The state's evolution will be completely equivalent to (a linear superposition of) the evolution of |x>|x> and |y>|y>. That's a physically observable fact that's independent of your choice of basis (it's less obvious in the |z>/|w> basis, but it's still true).
Any physically valid concept of "experience" would have to behave the same way. If your state is equivalent to a linear superposition of "experiencing x" and "experiencing y" then it can be characterised completely in terms of "experiencing x" and "experiencing y", and that's not dependent on your choice of basis (though it may be easier to see in one basis or another).
> No, you can't; it's a direct consequence of the Kochen-Specker theorem. If the device is treated quantum mechanically and it enters an entangled state of the form you gave, then you cannot perform conditioning: the Kochen-Specker theorem, via the non-uniqueness of Hilbert space orthogonal decompositions, prevents an unambiguous formulation of Bayes's law. I can link to papers proving this if you wish.
> The fact that we do experiments where we can condition is, in light of this theorem, a demonstration that our measurement devices do not enter into the kind of CHSH states you're giving.
I don't know what you're trying to claim here. All the available evidence is that measurement devices, being ordinary physical objects, follow the laws of quantum mechanics, and that includes conditioning behaving as entanglement; if you've got evidence that that's not the case then a Nobel prize awaits. Non-uniqueness is a red herring, because choice of basis does not and cannot change experimental predictions; the basis exists only in the map, not the territory.
The device has to have its contextual observable algebra develop a non-trivial center, not just be entangled, as mentioned in section 5 of the paper I linked. It's well known that entanglement alone isn't enough, which again is why entanglement alone has been called "pre-measurement" since the 1980s.
Note how this involves hard mathematics, not vague talk about "obvious features of subjective experience". I'll also note that this is a general feature of discussions about this stuff among non-physicists online, especially in programming communities like this one: the knowledge is stuck in the late 1970s.
> The state's evolution will be completely equivalent to (a linear superposition of) the evolution of |x>|x> and |y>|y>. That's a physically observable fact that's independent of your choice of basis (it's less obvious in the |z>/|w> basis, but it's still true).
Of course the state can be written in the form |xx> + |yy>. I never denied that. The point is that it can be written in other forms. So it's equally correct to say we'd "experience"
|zz> + |ww> + |zw> + |wz> as to say we'd experience |xx> + |yy> so there's no reason to say we'd "obviously" experience only the latter. Your argument is just "that expansion is always available", but since other expansions are also always available I don't see what the force of this argument is.
Even worse, in QFT there isn't an expansion of the form |xx> + |yy> available, due to the Reeh-Schlieder theorem, so your whole construction is moot anyway. Again, where is this paper deriving the Born rule from unitarity and basic facts about subjective experience?
> I don't know what you're trying to claim here. All the available evidence is that measurement devices, being ordinary physical objects, follow the laws of quantum mechanics, and that includes conditioning behaving as entanglement.
I'm claiming a consequence of a well known theorem from Quantum Probability. See section 4.2 of this paper https://arxiv.org/abs/1310.1484
Quantum states without superselection (e.g. the entangled states of the form you are considering) leave Bayesian conditioning undefined. As the paper mentions this is a direct consequence of the Kochen-Specker theorem via non-unique orthogonal expansion. It's not a red herring but a rigorously proved theorem.
I don't know what the "Nobel prize" remark is about, as it is well known that entanglement doesn't give well-defined conditioning. That's why entanglement with the device alone is called "pre-measurement" in most papers in measurement theory, following terminology introduced by Zurek in the early 80s. A good example of the issues with pre-measurement alone is here: https://arxiv.org/abs/2003.07464. You can't just treat the device as simply entering some CHSH- or GHZ-style entangled state and think that solves everything about measurement. It doesn't, per the theorem in the paper I linked above (and other issues).
> Of course the state can be written in the form |xx> + |yy>. I never denied that. The point is that it can be written in other forms. So it's equally correct to say we'd "experience" |zz> + |ww> + |zw> + |wz> as to say we'd experience |xx> + |yy> so there's no reason to say we'd "obviously" experience only the latter. Your argument is just "that expansion is always available", but since other expansions are also always available I don't see what the force of this argument is.
If there's a simple description of the wavefunction that's valid then there should be a correspondingly simple description of our experiences that's valid. The fact that there's also a more complicated valid description of the wavefunction is neither here nor there. It's like looking at a basket of 4 apples and asking why your experience doesn't correspond to there being 6 - 2 apples.
> Quantum states without superselection (e.g. the entangled states of the form you are considering) leave Bayesian conditioning undefined. As the paper mentions this is a direct consequence of the Kochen-Specker theorem via non-unique orthogonal expansion. It's not a red herring but a rigorously proved theorem.
Ok, I take your point, saying that we can just condition is overly flippant: if there are cross terms (i.e. entanglement) then classical conditional probability doesn't always accurately describe the behaviour of a system, and of course that's true for a system that includes experimenters inside it. But if we treat an experimenter's conditioning as creating entanglement, like any other QM interaction, and treat the subsequent evolution of the system quantum-mechanically, then there's no problem.
> A good example of the issues with pre-measurement alone is here: https://arxiv.org/abs/2003.07464. You can't just treat the device as simply entering some CHSH- or GHZ-style entangled state and think that solves everything about measurement. It doesn't, per the theorem in the paper I linked above (and other issues).
That paper amounts to nothing more than redefining "outcome" as something that cannot be in a superposition, and then using this to argue that it makes their unfounded notion of decoherence physically meaningful. If we assume that experimenters are physical systems that can undergo superpositions like any other, then of course Bell-style "no hidden variables" results apply when those variables are the outcomes of experiments. Big whoop. (Would you find the following argument convincing: "Pre-measuring the polarisation of the photon might have one of two possible results, so it doesn't have an outcome according to any reasonable notion of "outcome". Therefore if any observer has measured a photon's polarisation, a physically meaningful process of decoherence must have occurred"? Put like that it's hopefully obvious that this is nothing more than asserting the primacy of the Copenhagen interpretation).
> Note how this involves hard mathematics, not vague talk about "obvious features of subjective experience". I'll also note that this is a general feature of discussions about this stuff among non-physicists online, especially programming communities like this one, the knowledge is stuck in the late 1970s.
Look, I'm not a big fan of credentialism, but I do have a master's in this from a reputable institution. If working physicists have found a compelling argument that there's something mysterious about measurement or experience, then that knowledge hasn't made its way into even taught postgrad courses, let alone the wider public, and the blame for that has to rest with the physicists. (I rather suspect that there's no such argument that has reached any significant consensus among working physicists, and that the "late 1970s" view in the public sphere reflects that.)
Those are the same states so I'm not sure what you mean.
The point is that there is no reason to select out any particular basis over another. You can't just retreat into "well, this is the only basis I can experience", because the human sensory apparatus would be able to select out a range of bases in a full unitary account, and also the ambiguity of basis decomposition means you can't perform conditioning, which we do all the time in experiments.
> Those are the same states so I'm not sure what you mean.
I mean that if you decompose along a different basis than experiencing x/experiencing y, you just get an ensemble of states each of which is a superposition of experiencing x and experiencing y. So you end up with the same thing.
It's like looking at an entangled state (because that's exactly what it is). If we have a two-particle state like 1/sqrt(2)(|x>|x> + |y>|y>), that behaves like the first particle being in |x> and experiencing the other particle in |x>, or being in |y> and experiencing the other particle in |y>. It might look like that's an artifact of this particular basis decomposition, but it actually isn't: the structure of the wavefunction is that it divides cleanly into those two branches, and that's true in any basis.
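(To make that concrete for at least one alternative basis: take w = (x+y)/sqrt(2), z = (x-y)/sqrt(2). Then

    |w>|w> + |z>|z> = (1/2)(|x>+|y>)(|x>+|y>) + (1/2)(|x>-|y>)(|x>-|y>) = |x>|x> + |y>|y>

so 1/sqrt(2)(|x>|x> + |y>|y>) has exactly the same two-branch form after this (real) rotation; the cross terms cancel.)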
> You can't just retreat into "well, this is the only basis I can experience", because the human sensory apparatus would be able to select out a range of bases in a full unitary account
A system that's freely interacting will become entangled; whatever we consider to be ourself is constantly interacting with the rest of ourself, almost by definition.
> also the ambiguity of basis decomposition means you can't perform conditioning, which we do all the time in experiments.
Of course you can, and it works exactly the way you'd expect - we already do experiments where some isolated apparatus inside the experiment does something if it detects one thing and something else if it detects something else. Choice of basis is a tool for understanding the wavefunction, not a physically real thing.
See my reply above. You're just declaring we only experience along Schmidt bases, for no particular reason. Where are you getting this "clear connection" between experience and the decomposition in one particular basis? Do you have a reference?
There are other alternatives, such as the derivation of quantum theory within the GPT (generalized probabilistic theories) framework, and many other axiomatic derivations.
I've never seen the Born rule derived from unitary evolution and axioms for how subjective experience should work, so I don't even see this as one of the ways.
To paraphrase you, how do we get from probability amplitude to observed frequencies if there is no collapse?
This is where we have to invoke philosophy. Specifically: how does consciousness interact with time? The common-sense thinking is that our soul is tied to our body and is traveling forward through time with it. Another way of thinking is that the soul is tied to a given position in space-time-probability. It does not travel. The you of today is not the same as the you of tomorrow or yesterday. The you that observes spin up is not the same you as the one that observes spin down. Your soul is perceiving reality from a randomly chosen vantage point among all the possibilities that contain a compatible body. If we condition on those bodies belonging to experimenters who have observed frequencies, then we get the distribution.
No, it can't. There have been many attempts, and they don't work. The Born rule is independent of unitary evolution. The closest one can get is to declare that the quantum state is fundamentally a statistical object (i.e. the only information in it is observation probabilities); then, with certain assumptions about the size of the state space, you can show that the Born rule is the only possible rule for connecting the state to statistics consistent with the unitary dynamics.
So under the assumptions that the state encodes probabilities, plus the state-space assumptions and consistency with unitary evolution, you get the Born rule. However, this is not the same as the Born rule arising dynamically from unitary evolution alone.
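(For reference, the rule in question, in the notation used upthread:

    P(i) = |<i|psi>|^2    e.g. P(down) = |sqrt(1/3)|^2 = 1/3

Unitary evolution moves |psi> around; this mapping from |psi> to observed statistics is the separate postulate.)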
Isn't your subjective experience just one probabilistic eigenvalue of a particular combination of operators corresponding to your observation? How does unitary evolution break down here?
It's not unitary evolution breaking down, just that the Born rule isn't a consequence of unitary evolution. They're separate independent hypotheses. In most derivations of QM from an axiomatic basis they're consequences of separate combinations of axioms.
Thanks. Do you by chance have a good source for a gentle introduction to axiomatic QM? Undergrad level is fine; I've taken basic QM and worked through Griffiths' intro book on my own, and I've had a lot of math.
I'd love to read more but my google results aren't turning up a good definitive introduction.