Proof is a word that shouldn't be used lightly. I don't think anyone would see this as proof that quantum foam doesn't exist. The laws of physics allow for quantum foam to exist, which may lead us to look very differently at the structure of the universe. However, we now have some strong evidence, from measurements over a reasonably large distance, that quantum foam is not an immediately noticeable factor in the universe.
I kind of liked quantum foam as a possible way of explaining gravitational observations that aren't explained by the amount of visible matter and are thus attributed to 'dark matter', because 'matter' is the only thing we know of that would cause gravity. This tells me I shouldn't get too excited about that possibility.
The article is a bit dodgy in this regard. It just means that at a quantum level, the energy fluctuations (as observed in this experiment) are not big enough to cause significant space-time alterations.
So, it is foamy, but not as foamy as other people thought (the idea of virtual particles having no gravitational contribution sits uncomfortably in my mind).
The average would be unchanged, but the whole point of their measurement is to measure the resulting statistical broadening of the event.
That is, if the burst lasted 1 second (I made this number up!), you'd expect all photons to arrive within 1 second of each other. But if space-time foam had a strong effect, you would expect the same average time, but you might expect the spread of the data to be larger: maybe 5 seconds (also made up). (Essentially, randomness like this would be expected to increase the standard deviation of the arrival times.)
So by measuring the spread of the photon arrival times, you can put an upper bound on how big the space-time foam effects can be: the actual spread is (very roughly) the sum of the actual length of the event plus the spreading due to space-time foam. These folks are claiming that for certain models of space-time foam, the photons they observed arrived too close together to be consistent with the existence of that foam at the expected scale (assuming it's not a statistical fluke).
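For intuition, here's a toy Monte Carlo of that argument (everything in it is made up, like the numbers above): draw photon arrival times from a burst of some intrinsic width, add an independent random "foam" delay to each photon, and watch the mean stay put while the spread grows. Since variances of independent effects add, the observed spread should come out near sqrt(burst^2 + foam^2).

```c
/* Toy Monte Carlo: random per-photon delays broaden the arrival-time
 * spread but leave the mean alone. All numbers are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Uniform deviate in (0, 1]. */
static double uniform(void) { return (rand() + 1.0) / (RAND_MAX + 1.0); }

/* Box-Muller transform: standard normal deviate. */
static double gaussian(void)
{
    return sqrt(-2.0 * log(uniform())) * cos(2.0 * PI * uniform());
}

static double sample_stddev(const double *x, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
    return sqrt(var / (n - 1));
}

int main(void)
{
    enum { N = 100000 };
    static double t[N];
    const double burst = 1.0; /* intrinsic burst width, seconds (made up)    */
    const double foam  = 2.0; /* hypothetical foam jitter, seconds (made up) */

    srand(42);
    for (int i = 0; i < N; i++)
        t[i] = burst * gaussian();      /* emission time within the burst */
    printf("spread, no foam:   %.3f s\n", sample_stddev(t, N));

    for (int i = 0; i < N; i++)
        t[i] += foam * gaussian();      /* independent per-photon delay */
    printf("spread, with foam: %.3f s\n", sample_stddev(t, N));

    /* Variances add: expect roughly sqrt(1^2 + 2^2) ~ 2.24 s,
     * while the mean arrival time stays near 0 in both cases. */
    return 0;
}
```

Run it and the two printed spreads land near 1.0 s and 2.24 s; a real analysis inverts this, using the measured spread to put an upper bound on the foam term.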
If the situation were chaotic (which I would presume a system with many random perturbations to be), the expectation is that the repeated small changes would have a significant impact.
My guess is that this would be true only if the perturbations were coherent and the resonant frequency were some multiple of the Planck length. Otherwise, the many small random perturbations would largely cancel each other out, and the remainder would be insignificant: not energetic enough to meet the lowest energy requirement to nudge a photon by even a minuscule amount.
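To put a rough number on that cancellation intuition: a sum of n random ±ε kicks doesn't vanish, but its typical size grows like ε·√n rather than n·ε, so the accumulated effect of incoherent perturbations is vastly smaller than the count of kicks suggests. A quick sketch (toy units, nothing physical about it):

```c
/* Random-walk sketch: n random +/-eps kicks don't fully cancel; the
 * typical leftover grows like eps*sqrt(n), much slower than n*eps. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const double eps = 1.0;         /* single-kick size, arbitrary units */
    const int trials = 1000;

    srand(7);
    for (long n = 100; n <= 100000; n *= 10) {
        double sumsq = 0.0;
        for (int t = 0; t < trials; t++) {
            double x = 0.0;
            for (long i = 0; i < n; i++)
                x += (rand() & 1) ? eps : -eps;   /* one random kick */
            sumsq += x * x;
        }
        /* RMS displacement vs. the sqrt(n) prediction */
        printf("n=%6ld  rms=%7.1f  eps*sqrt(n)=%7.1f\n",
               n, sqrt(sumsq / trials), eps * sqrt((double)n));
    }
    return 0;
}
```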
The article has an updated section on that question.
>I asked Robert Ohsfeldt about this, who responded that the adjustment factor was based on fatal injury rates relative to the average. Hence, the adjusted numbers shouldn’t be seen as hard numerical estimates of life expectancy, but rather as a way of understanding the true relative ranking of the various countries on life expectancy excluding fatal injuries.
Last time I checked, in C all pointer types were the same storage size. There's no need to define the same data structure multiple times using templates; you just use casts.
... which is fine for creating data structures containing pointers to things created elsewhere, but it doesn't work if you want your data structures to contain more complex data types inline, without the overhead of another layer of pointer indirection.
Nope. There is no requirement in C that all pointers be the same size. An implementation is free (and some indeed exist) to make a char * a different size from an int *. You can safely cast any data pointer to void *, and a void * back to the original data pointer type. POSIX requires that function pointers can round-trip through void * as well; the C standard doesn't require it.
Now, that said, it is possible to write generic linked-list routines in C (I've done it), but the devil is in the details.
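For what it's worth, here's a minimal sketch of the void*-based approach under discussion (all names here are made up). Note that it has exactly the indirection overhead mentioned above: the list stores pointers to caller-owned data, not the data itself.

```c
/* Generic singly linked list using void* payloads. Minimal sketch;
 * names are illustrative, error handling is perfunctory. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    void        *data;   /* caller-owned payload */
    struct node *next;
};

/* Push a payload onto the front of the list; returns the new head. */
static struct node *list_push(struct node *head, void *data)
{
    struct node *n = malloc(sizeof *n);
    if (!n) return head;            /* real code should report failure */
    n->data = data;
    n->next = head;
    return n;
}

/* Apply fn to every payload; the callback casts void* back to the real type. */
static void list_foreach(struct node *head, void (*fn)(void *))
{
    for (; head; head = head->next)
        fn(head->data);
}

static void print_int(void *p)
{
    printf("%d\n", *(int *)p);      /* safe: we only ever stored int* here */
}

int main(void)
{
    int a = 1, b = 2, c = 3;
    struct node *list = NULL;

    list = list_push(list, &a);
    list = list_push(list, &b);
    list = list_push(list, &c);     /* list is now c -> b -> a */

    list_foreach(list, print_int);

    while (list) {                  /* free the nodes, not the payloads */
        struct node *next = list->next;
        free(list);
        list = next;
    }
    return 0;
}
```

Storing the data inline instead typically means macro tricks or an intrusive list that embeds the node inside the payload struct, which is where the devil-in-the-details part starts.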
For the same reason that if you put a hunk of chocolate in the oven and called it "hot chocolate," people would be confused that it's not a warm beverage of chocolate and milk.
That is, the phrase "virtual machine" is usually taken as the name for a piece of software that pretends to be some particular hardware. It is less commonly used in the compositional sense: not a noun unto itself, but the adjective "virtual" modifying the noun "machine".
The term "virtual machine" is already pretty overloaded. This isn't referring to virtualized hardware in the VMWare sense or a language/platform virtual machine in the JVM sense. Rather, it's talking about how C's abstraction of the hardware has the Von Neumann bottleneck baked into it, so it clashes with fundamentally different architectures like the Burroughs 5000's.
Often true, but not exclusively so. One can sometimes create a process-partitioning solution to a problem by exploiting speculative execution: you start workers that speculate that the preceding execution will result in their being run. If that speculation turns out to be false, you discard their results; if it turns out to be true, you continue with them. There was a talk at ISSCC about doing speculative branch prediction this way: two compute units, where one proceeds as if the branch isn't taken while the other proceeds as if it is, and when you finally get the result back that says which would have happened, you retire the other thread, making it available for the next branch.
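A toy version of the idea with two threads (purely illustrative; this is not the ISSCC design): launch both branch paths before the condition is known, resolve the condition concurrently, then retire the winner and throw the loser's work away.

```c
/* Toy speculative execution: start both branch paths before the branch
 * condition is known, then retire the winner and discard the loser.
 * Purely illustrative; compile with -lpthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static long result_taken, result_not_taken;

static void *work_taken(void *arg)        /* stand-in for the taken path */
{
    (void)arg;
    long s = 0;
    for (long i = 0; i < 50000000; i++) s = (s + i) % 1000003;
    result_taken = s;
    return NULL;
}

static void *work_not_taken(void *arg)    /* stand-in for the other path */
{
    (void)arg;
    long s = 1;
    for (long i = 0; i < 50000000; i++) s = (s * 31 + i) % 1000003;
    result_not_taken = s;
    return NULL;
}

/* The long-latency computation that eventually decides the branch. */
static int branch_condition(void)
{
    usleep(100000);                       /* pretend this takes a while */
    return 1;
}

int main(void)
{
    pthread_t t1, t2;

    /* Speculate: launch both paths before the condition resolves. */
    pthread_create(&t1, NULL, work_taken, NULL);
    pthread_create(&t2, NULL, work_not_taken, NULL);

    int taken = branch_condition();       /* resolves while workers run */

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Retire the winner; the loser's work is simply thrown away. */
    printf("result: %ld\n", taken ? result_taken : result_not_taken);
    return 0;
}
```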
Java is a language for people good at resisting temptation. It gives you all sorts of ways to make your code look nicer, more elegant, but the only way to be productive in Java is to avoid all of them.
In my experience, you can be more productive in Java than in any other language, because it has by far the best tools and libraries. However: don't use reflection, don't use cloning, avoid automatic (de)serialization, don't write Javadoc needlessly, don't separate interface and implementation when you don't have to, use refactoring and code generation, and don't feel bad about code duplication.
Most of all, always consider that coding is cheap. Changing a constant? Easy, not hard. Changing interfaces? Easy, not hard. Introducing an alternative implementation? Easy, not hard. Changing names? Easy, not hard. Adding new types? Easy, not hard. You don't need to think about easy things in advance; think about the problem instead.
I'd expect languages that require far less code for pretty much anything, and have far more compile-time safety to be more productive both at the prototyping stage, and at long term software maintenance.
I never quite understood fitness. I bought some weights 2 years back and found a light exercise routine I like enough to do it daily. Works like a charm.