I enjoyed cassettes. I still have a bunch, and a cassette player in my car. And I never use them over streaming audio from my phone. Why? Because this is 2017 and I'm not a hipster.
> Initial conclusions indicate an error in production that placed pressure on plates contained within battery cells. That in turn brought negative and positive poles into contact, triggering excessive heat. Samsung however stressed that it needed to carry out a more thorough analysis to determine “the exact cause” of battery damage.
I seem to recall an article not too long ago (perhaps here on HN?) that analyzed the Note 7 battery failures; its conclusion was roughly that the phone was designed too slim for the battery's manufacturing tolerances.
So in some percentage of the phones the battery is under constant mechanical pressure, because the battery is slightly bigger than the space allocated for it. This pressure may eventually force the plates inside the battery into contact with each other, or at least close enough to cause leakage currents and overheating.
> Does turning them off prevent them from catching on fire?
Not necessarily. Assuming the cause is as Samsung describes it, it can happen even with the phone turned off. But overheating to the point of thermal runaway is more likely at higher voltages (during charging, or with a fully charged battery), and probably more likely during high current drain (aka using the phone).
For more technical details, I suggest the Electrical Engineering Stackexchange.
Because Java was originally designed for set-top TV boxes and appliances, where it's kind of a big deal that you don't need to know or care what OS or processor each appliance is using internally.
When the appliance market didn't pan out, they went for web browsers and Java applets. Bytecode was a feature because browsers didn't execute native code, and because it allowed for sandboxing to limit the attack surface.
Even when Java became more popular on the server than in the browser, "write once, run everywhere" was considered a major feature: the same bytecode could be distributed everywhere, with no need to maintain a heap of different build environments for different CPU architecture and OS combinations.
I'd say the appliance market did pan out, actually. Blu-ray players all contain an embedded JVM, as do many other kinds of set-top box, as do, of course, all Android smart TVs.
Abstracting the CPU has worked out pretty well for the Java platform. Look at how easy the 64-bit transition was for the Java world vs the C++ world. Visual Studio is still not a 64-bit app, and yet Java IDEs hardly even noticed the change. The transition on Linux was just a disaster zone: every distro came up with its own way of handling the incompatible flavours of each binary.
In addition, a simple JIT-compiled instruction set makes on-the-fly code generation a lot easier in many cases, and it's a common feature of Java frameworks. For instance, the java.lang.reflect.Proxy feature is one I was using just the other day, and it works by generating and loading bytecode at runtime. On-the-fly code generation is considered a black art for native apps, and certainly extremely non-portable, but it is relatively commonplace and approachable in Java.
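Roughly, using it looks like the sketch below. The Greeter interface and the logging handler are made up for illustration; Proxy.newProxyInstance is the standard API call.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {
        interface Greeter {
            String greet(String name);
        }

        public static void main(String[] args) {
            // The JVM generates and loads a class implementing Greeter at runtime;
            // every call on the proxy instance is routed through this handler.
            InvocationHandler handler = (proxy, method, methodArgs) -> {
                System.out.println("intercepted " + method.getName());
                return "Hello, " + methodArgs[0];
            };

            Greeter greeter = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] { Greeter.class },
                    handler);

            System.out.println(greeter.greet("world")); // prints "Hello, world"
        }
    }

No hand-written Greeter implementation exists anywhere; the class behind the proxy is pure runtime-generated bytecode.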
I was commenting on "why did they design Java that way in the first place", as opposed to (say) Go.
I agree that once the primary use of Java moved outside the browser, there was no particular reason to not give the option of AOT too. I'm not sure why Sun was so adamantly opposed to the idea.
If I recall correctly, Sun really wanted to stick with JIT on Java Embedded too, they just couldn't get it to run fast enough on embedded hardware. For desktop and servers, they considered bytecode interpretation and JIT "fast enough".
Sure, and actually that is where mobile OSes are moving.
We now have bitcode on iDevices, DEX on Android and MSIL/MDIL on WinRT.
Still, both iDevices and the Windows Store take what I consider the best approach: doing AOT on the store side for each supported target.
As Google found out, using AOT on the device doesn't scale. I just don't get why they went back to an overly complicated architecture of Interpreter/JIT/PGO → AOT, instead of following the same path as the competition and serve freshly baked AOT binaries.
> imagine that the algorithms you write had to work (perhaps slightly differently, but essentially the same) when hit with bit-flips in the code. How would one do that?
Gray code [1] does it for integers: Any two consecutive values are one bit-flip apart.
Instructions could perhaps be encoded similarly: say an n-bit Gray code represents an instruction. Each instruction has an n-bit ID, the IDs are spread evenly over the space of n-bit values, and each instruction code resolves to the ID with the smallest Hamming distance [2].
Then most bit-flips would yield essentially the same program, as long as the number of instructions is small compared to the number of n-bit values.
Our CPUs wouldn't run that directly, but I think it should be possible to construct simple bytecode instruction sets where practically any bit pattern gives a syntactically valid program, and where bit sequences with a small Hamming distance in most cases resolve to similar programs.
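A rough sketch of that decoding idea (the 8-bit IDs and the brute-force nearest-ID lookup are made up for illustration, not an actual instruction set):

    public class GrayHamming {
        // Standard binary-to-Gray conversion: consecutive integers differ by one bit.
        static int toGray(int n) {
            return n ^ (n >>> 1);
        }

        static int hammingDistance(int a, int b) {
            return Integer.bitCount(a ^ b);
        }

        // Resolve a (possibly bit-flipped) instruction code to the nearest known ID.
        static int nearestId(int code, int[] ids) {
            int best = ids[0];
            for (int id : ids) {
                if (hammingDistance(code, id) < hammingDistance(code, best)) {
                    best = id;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Gray codes of consecutive integers differ in exactly one bit:
            System.out.println(hammingDistance(toGray(5), toGray(6)));   // 1

            // Hypothetical 8-bit instruction IDs, spread far apart in Hamming distance.
            int[] ids = { 0x00, 0x0F, 0xF0, 0xFF };
            int corrupted = 0x0F ^ 0x04;               // single bit-flip of the 0x0F instruction
            System.out.println(nearestId(corrupted, ids) == 0x0F);       // true
        }
    }

With only a few IDs spread over 256 values, a single bit-flip almost always decodes back to the original instruction.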
On the basis of having had a CFS/ME diagnosis for the past several years, and spending some time reading what I could find about it:
There's no consensus yet, but there is a bunch of interesting research going on. Most of the theories I've seen fall in two broad categories:
- An issue with the mitochondria, the cells' energy production, which for some reason produce just a fraction of the energy they normally would. So the fatigue is a result of insufficient energy production. I'm partial to this class of theories, since it seems to match so well with my experience. I particularly like the one I saw just a few days ago, explaining CFS/ME as a kind of evolutionary hibernation where the cells shut down in response to a real or imaginary threat, in the hope of outlasting the threat rather than fighting it [1].
- An issue with the immune system, which for some reason remains hyperactive even when there is nothing to fight as far as anyone can tell. So the fatigue is a result of the immune system consuming an inordinate amount of energy, just like if you were having the flu. According to these theories it might be an autoimmune disease where the immune system is effectively fighting ghosts [2], or the immune system might be busy with an actual threat that the standard tests don't pick up [3].
I don't know if both groups of theories can be true at the same time - e.g. if one is a cause and the other is an effect, or maybe they are both effects - or whether we're talking about different subsets of patients with different underlying causes.
Unlike my sibling post, I think it's a genuine condition. I agree that the current diagnostic criteria basically boil down to "unreasonably fatigued and we don't know why". But I've met a number of other people with the same diagnosis, and there are too many similarities for us to be just an arbitrary collection of tired people. Maybe there are two or three distinct subgroups with different underlying causes, maybe a handful of us have been misdiagnosed. But I'm sure there's something there.
For recovery, we don't know yet. A number of things have worked for a number of different people, and some people eventually get better on their own. Long term and in general, we will hopefully know more in a few years.
It can feel like "flu with a hangover", that's my experience too.
In my case that's triggered by too much activity. I've gradually learned to stay within my limits most of the time, so these days I don't have the flu feeling as often as I used to. Although I have to limit activity to a handful of hours per week in order to keep it at bay.
Can't agree with that. Betting, say, $1 on a highly improbable event can't really be considered "insane" by any metric I can think of - if you lose, so what, you're only out one dollar.
The risk of the event itself doesn't matter; what matters for your personal risk is how much of your personal fortune you put into it.
If losing a bet makes you homeless, it's hardly a good bet no matter how good the odds are.
Conversely, there's barely an "insane" bet in the world as long as the probability of a payoff is greater than zero (excluding e.g. Nigerian scam emails), and as long as the amounts involved are small relative to your disposable capital and to the expected payoff.
Think of it as "how many times do I need to place this bet before I win", vs "if I win, what's the payoff", vs "if I'm wrong, what's the most I could lose".
If you get those numbers right, I can't see that bitcoin margin bets are intrinsically insane.
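As a back-of-the-envelope sketch of those three numbers (every figure here is made up, purely to show the arithmetic):

    public class BetMath {
        public static void main(String[] args) {
            // Hypothetical long-shot bet: all numbers invented for illustration.
            double stake = 1.0;         // "if I'm wrong, what's the most I could lose"
            double payoff = 200.0;      // "if I win, what's the payoff"
            double probability = 0.01;  // chance of winning

            // "how many times do I need to place this bet before I win", on average
            double expectedTriesUntilWin = 1 / probability;              // 100
            double expectedValuePerBet = probability * payoff
                    - (1 - probability) * stake;                         // +1.01

            System.out.printf("Expected tries until a win: %.0f%n", expectedTriesUntilWin);
            System.out.printf("Expected value per bet: $%.2f%n", expectedValuePerBet);
        }
    }

With those numbers the expectation is positive and the worst case is losing a dollar; swap the stake for your rent money and the same arithmetic stops being comforting.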
It's different in a number of ways: Suppose it's not you hacking your car, it's an enemy who wants you dead. So they disable the brakes. Or perhaps it's possible for an attacker to disable the brakes only when you're braking hard at a speed above 100 km/h (60 mph).
Or suppose a neighborhood kid is angry at you, has figured out how to hack the system, but hasn't yet figured out the difference between "that'll teach them a lesson" and "this might actually kill them".
Or, hypothetically, if system hacks don't require a physical connection, it's wide open for anyone anywhere in the world to replicate something like the file encryption extortion scam[1]: Break into as many cars as you can. Send the owners an email saying that you hacked their car. They can either take the chance of figuring out what you did on their own, or pay you money to revert it. The scam might work just as well for cars you didn't break into, as long as the owners believe it's a credible threat.
The point isn't necessarily that these scenarios are more likely than in the physical world. The point is that many people have a fair idea how the physical world works, while they have only vague notions about "hacking" in the virtual world. We know that there are new threats, but we don't yet know what they are, so these new threats will be inherently scarier than the threats we already know about. (The devil you know, etc.)
Cutting the brake lines or using a kitchen knife (or a gun bought off Craigslist in the US) to kill the person you hate works just as well and is much simpler.
I personally would like to see various "hacks" adjusting the suspension, brakes, spark timings and other things for a better ride in certain conditions (racing, drifting, mountain roads, etc).
I haven't tried a standing desk for programming yet (it's on my to-do list).
But I did have a summer job as a student where I was standing in front of a computer all day, registering incoming packages in a warehouse.
My feet hurt for the first few days, but after some experimentation with different footwear (I ended up with a pair of orthopedic sandals as the most comfortable), a soft mat to stand on, and a 5-10 minute break each hour, I had no issues after the first couple of weeks.
I'm pretty sure that moving is better than either standing or sitting. Since walking around isn't very compatible with working on a PC, I expect that an adjustable desk is pretty much the best we can do. Along with switching between standing and sitting, frequent breaks, and suitable footwear.
Yes. Representing resolution by horizontal pixel count has been standard in digital cinema for years[1][2]. So they have 2k, 4k and a number of subvariants.
So I guess we're just seeing cinema, TV and computing converge, at least when it comes to displays and resolutions, and the marketing terminology is converging along with them.
Especially since 4k is an existing standard, I'm willing to give them a pass for keeping the naming convention. (Although the 4k TV standard, which most 4k monitors will be using, is slightly different: it's the cinema standard cropped to a 16:9 aspect ratio.)
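Concretely, assuming the usual published resolutions (DCI 4k at 4096x2160, UHD "4k" at 3840x2160), the crop works out like this:

    public class FourK {
        public static void main(String[] args) {
            int dciWidth = 4096, dciHeight = 2160;    // DCI 4k cinema container
            // Crop to 16:9: keep the height, trim the width.
            int uhdWidth = dciHeight * 16 / 9;        // = 3840, the "4k" TV/monitor standard

            System.out.printf("DCI 4k: %dx%d (%.2f:1)%n",
                    dciWidth, dciHeight, (double) dciWidth / dciHeight);   // 1.90:1
            System.out.printf("UHD 4k: %dx%d (%.2f:1)%n",
                    uhdWidth, dciHeight, (double) uhdWidth / dciHeight);   // 1.78:1
        }
    }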