I understand the non-repeating patterns. I just don't see how a regular 3D lattice can produce such a pattern. Unless the light source creating this shadow is a point source rather than a parallel beam?
I guess I'm just looking for confirmation of this thought: parallel light shone through a repeating 3D lattice will always produce a repeating 2D lattice.
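To make the question concrete, here is a minimal sketch I put together (a simple cubic lattice and two projection directions I picked arbitrarily, nothing from the article): it parallel-projects the lattice points onto the plane perpendicular to the "light" direction and counts how many distinct shadow points come out.

    # Parallel-project a simple cubic lattice onto the plane perpendicular
    # to a chosen "light" direction; the directions, lattice size, and the
    # rounding tolerance are all illustrative choices.
    import numpy as np

    def shadow(direction, n=6):
        """2D coordinates of the integer lattice points of [-n, n]^3,
        parallel-projected onto the plane perpendicular to `direction`."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        # Build an orthonormal basis (e1, e2) spanning the projection plane.
        helper = np.array([1.0, 0, 0]) if abs(d[0]) < 0.9 else np.array([0, 1.0, 0])
        e1 = np.cross(d, helper); e1 /= np.linalg.norm(e1)
        e2 = np.cross(d, e1)
        pts = np.array([(i, j, k)
                        for i in range(-n, n + 1)
                        for j in range(-n, n + 1)
                        for k in range(-n, n + 1)], dtype=float)
        return np.column_stack([pts @ e1, pts @ e2])

    for name, direction in [("lattice direction", [0, 0, 1]),
                            ("irrational direction", [1, np.sqrt(2), np.pi])]:
        s = shadow(direction)
        distinct = {tuple(np.round(p, 6)) for p in s}
        print(name, "->", len(distinct), "distinct shadow points out of", len(s))

Along the lattice direction the shadow collapses onto an obvious square grid; along the irrational direction essentially every point lands somewhere new, which is exactly what makes me unsure whether "always" above is actually right.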
I took a class from one of the authors that used "Sets, Logic, Computation". I found this textbook to be incredibly good both for reading and as a reference. It presents things very straightforwardly (the language is simple) and clearly (it makes good use of tables and diagrams to give you the full picture needed to understand something). The chapters are short and digestible. I forget whether the text itself includes exercises, but the exercises in that class were very good as well. I was actually looking for this text again, so that I could use it as a reference while learning Homotopy Type Theory, and here it is!
- the brain predicts what its upcoming input will be,
- quantum biologists ask if the human eye is sensitive to quantum effects, and
- measuring quantum information under different bases results in a different quantum state not only of the measurer, but of the world being measured.
I wonder if it is possible that the brain uses its predictive power to change the basis under which the eye measures photons, resulting in different perceptions as well as a different reality. A bit of a crazy idea, but I don't see any reason why it couldn't be the case, unless it turns out that biological sensory organs are simply not that precise.
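For what it's worth, the basis-dependence I'm leaning on here is real at the level of textbook QM, independent of anything about eyes; a standard single-qubit illustration (my own, with Z and X the usual Pauli bases):

    |+\rangle = \tfrac{1}{\sqrt{2}}\,(|0\rangle + |1\rangle)

    Z-basis measurement: P(0) = P(1) = 1/2, post-measurement state |0\rangle or |1\rangle
    X-basis measurement: P(+) = 1, post-measurement state |+\rangle (unchanged)

So the choice of basis genuinely changes which state the measured system, and the measurer's record, end up in. Whether any such choice is under top-down biological control is the separate (and much bigger) question.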
I can think of a few reasons to think that's not the case. For one, rhodopsin isn't really under neurological control, so there's no way for top-down predictions to alter it. For another, what possible benefit would changing your measurement basis give you? The whole point is that no matter what basis you pick, everything works out the same!
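To spell out the "works out the same" part with a worked example (mine, not from the article): share a maximally entangled pair between system A (say, the receptor) and system B (the rest of the world), and ask what B can notice about A's basis choice.

    |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle)

    \rho_B = \mathrm{Tr}_A\,|\Phi^+\rangle\langle\Phi^+| = \tfrac{1}{2} I

    For any orthonormal basis \{|u_1\rangle, |u_2\rangle\} measured on A:
    \rho_B' = \sum_k (\langle u_k| \otimes I)\,|\Phi^+\rangle\langle\Phi^+|\,(|u_k\rangle \otimes I) = \tfrac{1}{2} I

B's statistics are identical no matter which basis A uses, so a different basis choice can't show up downstream as a different observed reality.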
The author argues that the 50-50 model of chess is less true and a dead end.
If this is a matter of which model to use, then fair enough: you may wish to use a more exact model than the 50-50 model. But the author is not arguing about which model to use; they are arguing about which model is worth building upon.
Building upon any model of interest is not a dead end (because it is being built upon!). Even if the underlying principles of the model of interest need to be changed to accomplish something else in the future, it is still useful to develop the model. Approximate truths can also have deep meaning, and are sometimes even more generalizable to multiple areas of reality than exact answers. Approximations are no less true than exact treatments; they are just saying a different thing. Neither is inferior to the other, or at least, if exactness really is better than approximation, this is not a good argument for it.
Another commenter pointed out that some models need to be thrown out in order to make room for the new (e.g. the Earth-centric view of the solar system had to go at some point), and I think that's valid and hard to argue against; it also seems to align with what the author is saying. But the work done upon the old models was certainly not worth nothing. For one thing, the work done upon the old models is what made the new work possible. I think perhaps the issue is that the author does not acknowledge that the 50-50 model of chess has value.
p.s. to the author if they read the comments: I actually enjoyed reading your thoughts even if I disagree with them.
Tbh, I didn't think he gave that message. If anything, he even identifies the 50-50 model as a local "maximum" (i.e. it has value). To me the main point was to have a mindset that is willing to ask whether such an optimum might actually just be a local one.
The thing I think the original commenter was trying to get at is that spacetime (the mathematical model) and quantum mechanics (the mathematical model) for the most part are in no way unified. Things that happen in quantum mechanics are not necessarily explainable using spacetime.
Although physicists generally carry a lot of assumptions from spacetime over to QM, like that information can't travel faster than light, none of that is actually baked into QM itself.
There are several textbooks[1] covering extensions of this approach to curved spacetimes generally, and one will encounter the https://en.wikipedia.org/wiki/Klein%E2%80%93Gordon_equation#... in graduate school settings. Quantum mechanics on some specific curved spacetimes is exactly Klein-Gordon solvable.
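For reference, the curved-spacetime form being pointed at is (in my conventions: metric signature (-,+,+,+), units with hbar = c = 1, and ignoring a possible curvature-coupling term \xi R \phi):

    \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\phi\right) - m^2\phi = 0

The metric g_{\mu\nu} enters explicitly, so the background geometry is fixed classically and the field is treated quantum mechanically on top of it; on backgrounds with enough symmetry the mode equations separate and can be solved in closed form, which is what "exactly Klein-Gordon solvable" means above.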
The Standard Model of Particle Physics incorporates the Poincaré group, which is the group of isometries of Minkowski spacetime ("flat spacetime"). This means that the Standard Model particles are the same independent of where and when they are in empty flat spacetime (3 spacelike and 1 timelike translation invariances) and of their orientation (rotational invariance about the three spacelike axes), and that they transform reliably under boosts (differences in constant velocity along the spacelike axes). So the Standard Model is defined on flat spacetime, and the causal structure of flat spacetime (in which the constant c plays a key role) is very much baked in. Studying modern quantum mechanics mostly means looking at patches of effectively flat spacetime in which particles of the Standard Model occupy different locations in the patch, boosted relative to each other, and interact via the other mechanisms captured in the Standard Model's formulation.
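Concretely, the Poincaré group referred to above is the set of coordinate transformations

    x^\mu \;\to\; \Lambda^\mu{}_\nu\,x^\nu + a^\mu, \qquad \eta_{\mu\nu}\,\Lambda^\mu{}_\rho\,\Lambda^\nu{}_\sigma = \eta_{\rho\sigma}

where the four components of a^\mu are the space and time translations, and the \Lambda's preserving the Minkowski metric \eta are the three rotations plus the three boosts: ten symmetries in total, matching the list of invariances in the paragraph above.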
We inhabit a type of curved spacetime in which General Relativity guarantees at least an infinitesimal patch of exactly flat spacetime around every point in the universe. In regions with only gentle curvature -- like laboratories on the surface of the Earth, or in space probes in the solar system -- any curvature corrections to the assumed exactly flat spacetime of the Standard Model are tiny, because the region of effective (rather than exact) flatness is very large compared to the systems of particles whose quantum behaviour is under experimental study. "Pretend it's flat" works exceedingly well in practice. When a researcher has to consider curvature (e.g. in relativistic massive objects like neutron stars) she or he can continue to work perturbatively against a flat-by-definition spacetime.
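"Pretend it's flat" can be made quantitative: in Riemann normal coordinates centred on any event, the metric looks like (a textbook expansion, restated in my notation)

    g_{\mu\nu}(x) \;=\; \eta_{\mu\nu} \;-\; \tfrac{1}{3}\,R_{\mu\alpha\nu\beta}\,x^\alpha x^\beta \;+\; \mathcal{O}(x^3)

so over a laboratory of size L the deviation from flatness is of order (L / L_curv)^2, with L_curv the local curvature radius; for Earth-surface gravity and metre-scale experiments that number is absurdly small, which is why ignoring it works so well.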
The problems arise in the gap between needing a small but finite region of flat spacetime and being guaranteed only an infinitesimal one: if the radius of curvature is small compared to the spatial extent of a particle, things get ugly quickly, especially as we take the wavelength of the particle smaller. A smaller wavelength means the energy of the particle climbs, and that is the sort of energy which creates spacetime curvature, so we get a nonlinear feedback; classical General Relativity and Quantum Field Theory in Curved Spacetime then make annoyingly different and incompatible predictions about what happens as one takes the limits of high particle energy and high curvature. Fortunately this incompatibility seems likely to occur only hidden inside event horizons, so it is a problem for the theories rather than a practical problem for all of us who are not actually rapidly approaching death within a black hole -- there may also be consequences for as-yet-undiscovered ancient tiny black holes, or for stellar black holes many trillions of trillions of years in our future, so again we can take our time to understand the theoretical conflict.
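The scale at which that feedback loop becomes unavoidable falls out of a one-line estimate: ask when a particle's Compton wavelength is comparable to the Schwarzschild radius of its own energy.

    \frac{\hbar}{m c} \;\sim\; \frac{2 G m}{c^2}
    \quad\Longrightarrow\quad
    m \;\sim\; \sqrt{\frac{\hbar c}{G}} \;\approx\; 2\times 10^{-8}\ \mathrm{kg} \;\approx\; 10^{19}\ \mathrm{GeV}/c^2

That is (up to factors of order one) the Planck mass: far below it, the curvature sourced by a single quantum is negligible; approaching it is exactly where things get ugly quickly.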
A bit more technically, the problem arises in perturbative approaches to QFT on curved spacetime: at low energies and low curvatures we have a fairly small number of correction terms which can be written out as a https://en.wikipedia.org/wiki/Taylor_series which we can truncate because the higher-order terms are demonstrably irrelevant. As we increase energies and curvature, irrelevant terms become marginal, then relevant; additionally, we start having to add more non-irrelevant terms. https://en.wikipedia.org/wiki/Renormalization allows us to squash some of these terms together, but eventually we get an overwhelming growth of non-ignorable corrective terms and lose the ability to make predictions using this approach.
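One standard way to write that tower of correction terms is the generic effective-field-theory expansion of gravity (my restatement, not something from the articles linked above; units with hbar = c = 1, M_Pl the reduced Planck mass, and c_1, c_2, d_1 just placeholder coefficients):

    S \;=\; \int d^4x\,\sqrt{-g}\,\Big[\tfrac{1}{2}M_{\mathrm{Pl}}^2\,(R - 2\Lambda) \;+\; c_1 R^2 \;+\; c_2 R_{\mu\nu}R^{\mu\nu} \;+\; \tfrac{d_1}{M_{\mathrm{Pl}}^2}\,(\text{curvature}^3\ \text{terms}) \;+\;\dots\Big]

At curvatures far below M_Pl^2 each extra term is suppressed by further powers of (curvature / M_Pl^2) and the series can be truncated; as energies and curvatures climb toward the Planck scale, the whole tower contributes at the same order, and with the truncation goes the predictivity.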
This breakdown in perturbative renormalization ("perturbative quantum gravity" [2][3]) gives a useful qualitative definition of "strong gravity": it's where the perturbative approach breaks down. In terms of Feynman diagrams, it's where loops of gravitons enter into the picture; a bit more colloquially, it's where "gravitation's self-gravitation starts becoming non-ignorable".
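The loop counting behind that definition is simple: in units with hbar = c = 1, each additional graviton loop costs roughly a factor of

    G\,E^2 \;\sim\; \left(\frac{E}{M_{\mathrm{Pl}}}\right)^{2}

which is minuscule at any energy we can probe (hence perturbation theory works beautifully), but becomes of order one as E approaches the Planck scale, which is precisely where gravitation's self-gravitation stops being ignorable.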
Although not a dead area of research, looking for ways to make renormalization work for strong gravity is less fashionable than looking for non-perturbative quantum gravity that (a) matches perturbative quantum gravity right up to the weak-side boundary of strong gravity, including classical General Relativity in weak gravity, (b) is calculable in practice, and (c) solves other non-gravitational problems that plague high energy particle physics that are amenable to testing, since we probably can't extract observational or experimental data from regions of strong gravity.
Additionally, classical General Relativity fairly generically predicts the presence of gravitational singularities in spacetimes with significant amounts of matter[4]. Such singularities destroy the total predictability of the entire spacetime from a complete sample of all the variables on an arbitrary slice across the whole space at a given time coordinate. In other words, there is an incompatibility between classical General Relativity and traditional initial-value-surface approaches to solving physics problems. This problem worsens in the presence of quantum fields because of Hawking radiation: instead of a literal singularity there is a trapping structure that evaporates (or mostly evaporates, in "remnant" proposals) in the far future of most black hole systems. But the matter released in the evaporation cannot be predicted from the matter that was thrown into the black hole before evaporation, and at late times we lose the ability to account for quantum entanglements that existed while the black hole was growing. Unfortunately there are numerous black hole candidates in our universe, which we also know is filled with matter representable by quantum fields. Although this is not strictly an incompatibility between General Relativity and Quantum Field Theory, quantum physicists are very keen on preserving https://en.wikipedia.org/wiki/Unitarity_(physics) which is lost in black hole evaporation as it is understood today. Preserving unitarity in the presence of strong gravity is a fourth goal of modern research into quantum gravity.
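For a sense of the time scales in "the far future of most black hole systems", the standard semiclassical results (which are all we currently have to go on) are

    T_H \;=\; \frac{\hbar c^3}{8\pi G M k_B} \;\approx\; 6\times 10^{-8}\ \mathrm{K}\,\left(\frac{M_\odot}{M}\right),
    \qquad
    t_{\mathrm{evap}} \;\sim\; \frac{5120\,\pi\,G^2 M^3}{\hbar c^4} \;\approx\; 10^{67}\ \mathrm{yr}\,\left(\frac{M}{M_\odot}\right)^{3}

so a stellar-mass black hole is far colder than today's cosmic microwave background and cannot even begin net evaporation until the universe cools below its Hawking temperature; the unitarity puzzle is in no danger of becoming a practical one.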
Perhaps the above commenter is talking about the perceived scale or speed of time? Order of events certainly seems to matter.
At the same time, an order of events is itself a particular way of modelling something. There may be alternative models that are meaningful without an ordering in time, but most (probably all) models are designed with the way we perceive time in mind.
I disagree that people should NOT be attached to their code.
Having a sense of ownership for what you write can lead to higher quality systems where people are willing to stand up and fight for what they believe is higher quality.
BUT this importantly depends on the ability to compromise, admit being wrong, and change based on new information from the people who have a sense of ownership over their parts of the codebase. Like you said.
People who get really attached to their code tend to be the people who aren't smart enough for you to want them fighting for their opinions.
Good engineers promote good ideas and general approaches to code base structure. They don’t give a shit if people go in and make changes as long as they don’t compromise the whole architecture.