There seem to be a lot of defensive comments, which I guess come from the focus on individual guilt. Focusing too much on individual choice examines one particular solution approach rather than the problem itself, which is what needs to be solved. Lack of awareness is not a valid excuse: even when some people speak out on the harmful consequences of technology (people have long warned about the surveillance state), they are ignored, since we don't have systems in place to do a cost-benefit analysis and coordinate around a solution. Sure, there are many problems with the individual approach (what if somebody else is hired? what if opponents develop a weapon technology?). But this only highlights the difficulty of the problem. This is why the 'we' becomes appropriate: individuals are weak (unless they are at some critical stage).
There are potentially many developments in areas like nanotech, bioengineering, and AI through which humanity might face big trouble. This calls for a strategy of identifying such potential problems and putting precautionary measures on R&D. Something like this happened when nuclear scientists hid their research before WW2. It is harder to replicate now, with the science and engineering community distributed more widely across competing nations. So a first step could be to achieve a shared understanding of common interests and of ways to coordinate with each other.
Alternatively, energy can be poured into defensive measures for defeating harmful technologies. We already have many engineers developing counter-technologies. Snowden himself is an example of someone exposing what he considered harmful.
One important thing to note is that the cyclone hit Orissa at low tide. Flooding, more than wind speed, seems to be the important factor. There was flooding in Andhra Pradesh some time after Phailin due to heavy rains, and the casualty count was much higher than from the cyclone itself. That said, the Orissa government did a good job. There was a terrible cyclone ten years ago with huge devastation, and this time they organized mass evacuations. I hope the disaster relief is strong and the Filipinos get all the help that they need.
Maybe we should have an international language reform to introduce a phonetic spelling system (WYSIWYHear), which would also lead to a somewhat standardized accent. The local ways of pronouncing words can still continue in each region. There are probably many aesthetic considerations here. But when two people from different regions speak, they can use this system.
This would require linguists to document the phonemes and to find a minimal set of alterations to either the spelling or the pronunciation of a word so that the system is phonetic. There are already variations in spelling (American/British English). This would be another variation, one which is phonetic.
Of course, even if this work is done, it would still be yet another standard needing popular adoption. The phonemes would also outnumber the current letters, which means either using diacritic symbols (requiring changes to keyboards, slower typing) or mapping a phoneme to multiple letters (longer spellings and possibly some ambiguity in detecting phoneme boundaries in spellings).
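To make the boundary-ambiguity worry concrete, here is a toy sketch (all the phoneme codes are hypothetical, chosen only for illustration): once phonemes map to multi-letter strings, a spelling can split into phonemes in more than one way unless the code is designed to be uniquely decodable.

```python
# Hypothetical multi-letter phoneme codes; "sh" is both a single code
# and the concatenation of the codes "s" and "h".
PHONEMES = {"sh", "s", "h", "th", "t"}

def segmentations(spelling):
    """Return every way to split `spelling` into known phoneme codes."""
    if spelling == "":
        return [[]]
    results = []
    for code in sorted(PHONEMES):  # sorted for a deterministic order
        if spelling.startswith(code):
            for rest in segmentations(spelling[len(code):]):
                results.append([code] + rest)
    return results

# "sh" could be the single phoneme /sh/ or the sequence /s/ /h/:
print(segmentations("sh"))  # → [['s', 'h'], ['sh']]
```

Diacritics avoid this entirely (one symbol per phoneme), which is the trade-off the paragraph above describes: unambiguous reading versus keyboard-friendly spellings.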
When I took an acting class on accents, I learned about the International Phonetic Alphabet (http://en.wikipedia.org/wiki/Ipa). It is surprisingly handy, although most Americans have no idea it exists, much less how to read it.
Interesting. This is a very detailed effort, with encyclopaedic ambitions across different languages. The question is whether there is a lightweight version of this: some way of choosing a reasonably small set of phonemes which can represent most English words accurately. The second part is to represent the phonemes either by using a small number of new letters and diacritics (which is what the IPA already does), or by mapping the phonemes directly to single letters or strings of two or three letters. With the second option, there would still be ambiguity in reading a spelling (where are the phoneme breaks?). But the accent situation would improve, and one can get used to the phoneme boundaries by looking them up for each new word.
The title seems apt, actually. I think 'universe' here refers to the concept of a set so large that it is closed under all set-theoretic operations. Since mathematical operations reduce to set-theoretic operations (in the usual foundation scheme), one can start with some elements of this universe set, perform any arbitrary manipulation, and the output will still be inside the set. http://en.wikipedia.org/wiki/Grothendieck_universe
Inter-universal presumably refers to geometrical statements which hold across different such universes. At least, that's my understanding. The paper is well beyond my knowledge.
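For reference, the closure conditions that make a set $U$ a Grothendieck universe (as stated on the linked Wikipedia page) can be written out explicitly; "closed under all set-theoretic operations" is shorthand for these:

```latex
% A set U is a Grothendieck universe if:
\begin{itemize}
  \item Transitivity: $x \in U$ and $y \in x$ imply $y \in U$.
  \item Pairing: $x \in U$ and $y \in U$ imply $\{x, y\} \in U$.
  \item Power sets: $x \in U$ implies $\mathcal{P}(x) \in U$.
  \item Unions of families: $I \in U$ and $x_i \in U$ for all $i \in I$
        imply $\bigcup_{i \in I} x_i \in U$.
\end{itemize}
```

Since pairing, power set, and indexed union generate the usual constructions (products, function sets, quotients, etc.), anything built from elements of $U$ stays in $U$, which is the closure property described above.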
To me, 'hack' and 'unintuitive' carry different meanings. 'Hack' indicates some kind of ad hoc patch to fix the model. Even if it fits the data, there could be a theoretical irregularity. I suspect programmers experience this quite often when making a change that keeps the program working. Sure, it is possible that ultimately the hack is indeed how nature works, but it's still worth exploring and making precise the nature of the irregularity.
This shouldn't be surprising given that the traditional maps of the Vipassana explicitly talk about negative stages. The problem is that somehow this information is not commonly known. It would benefit practitioners a great deal. In this vipassana community, for instance, it is considered important enough to be mentioned as a sticky post right at the top, http://dharmaoverground.org/web/guest/discussion/-/message_b...
Reasoning from basic principles is a valuable tool to evaluate conclusions. Sure, it has strong weaknesses (hidden assumptions which are wrong, insufficient imagination about what could happen, rationalizing one's biases). But it is useful when you don't have complete trust in the quality or the scope of the experiment. For instance, claims about quantum computing solving NP-complete problems are legitimately held in doubt for theoretical reasons. Also, whenever there are short-term positives hiding a long-term negative, as with unsustainable financial or ecological behavior, the negatives might be seen only by a chain of reasoning and not by direct experiments.
I agree in the sense that we see so many cases in the other direction: reasoning full of holes being trusted over empirics. The interesting question is, in any given situation, how much trust to give to the different tools we have for evaluating a claim.
We need a higher resolution in our vocabulary when talking about technological risks. Someone can be in favour of most forms of technology, and yet even when dealing with technologies with extinction risk, the word that comes up to describe the opposition is 'Luddite'. Which is not to say that I am opposed to AI research.
In twenty years or so, this could easily be widely available, just like machine guns are today. Even with an explicit ban on the end product itself, the basic technology could be understood well enough to allow underground manufacturing. If true, this would allow anonymous murders and a complete breakdown of law and order. More advanced miniature variants with face-recognition software might elude detection and jamming technologies.
Popular officials might find it too dangerous just to go outside. This would provoke even more intrusive surveillance systems.
There seems to be a broader trend where our offensive technologies are moving much faster than our protective systems. At a state level, this has been handled by deterrence. Fortunately, for nuclear weapons, the necessary materials were rare enough for regulation to be possible. But deterrence won't work when these technologies become popularly available. Hopefully, there is some non-brutal solution that we can find for this problem.
Now that is cool (in a horrible way). I'd never imagined anonymous assassinations carried out by drones built and controlled by garage hackers. I was more worried about government-controlled, weaponized drones monitoring citizens domestically. Though I'd say in less than 3 years the technology and economics will allow amateurs to build weaponized drones with face recognition for ~$1000.
Perhaps Anon will build their own drones to take down government spy drones?
Or imagine a worldwide, anonymous, crowd-funded assassination network funded by bitcoins; targets decided via user votes and hits performed by weaponized drones that leave no trace. They could even simply be suicide drones packed with explosives, controlled autonomously to fly at a target via face recognition.
...aaaaaand now I'm probably on a watchlist for typing that :)
I think you vastly overestimate the 'criminal underworld'. The DOD might be able to make miniature lethal drones using facial-recognition software in 20 years. But probably not. The idea that criminals will start mass-producing such things cheaply in your lifetime is just shy of ridiculous. Talking about building stuff is easy; actually building stuff is hard.
PS: Problem #1: people don't look up that often. Problem #2: there are a lot of people walking around out there. And it just gets worse from there.
Well, I'd love to be wrong about this and hope that you are right. What worries me is that key parts of the technology, like the processor, sensors and software, might be available cheaply due to their civilian uses. Manufacturing ability could well be scarce, but these weapons could pass through the illicit distribution networks which today distribute machine guns and rocket launchers. I didn't fully understand your postscript, but regarding visibility, I admit I don't know much here - for instance, how small a drone would realistically be possible for a non-state producer. But I don't see a clear knockdown argument, given that drones are used effectively today. Again, I hope you are right and that there are more technical obstacles out there.
It is not about making it yourself from scratch and doing research and experiments; I think it is about buying the parts from China and assembling them.
IED builders in Afghanistan, say, probably do not have PhDs in physics and chemistry, but they can build effective shaped charges that pierce thick tank armor.
So at some point you could just order parts from some place, assemble them, upload a picture of the face it needs to recognize along with approximate coordinates, and press the "take-off" button.
I don't know, I think this sounds disturbingly doable in the near future. Both your problems are essentially "opportunities to strike are few." But the point of drones is that they're cheaper than humans. If you can afford to tail someone with 2 goons, maybe you can afford 20 drones all along their typical route. And the drones can recognize license plates and other non-facial features to help them home in.
As for facial recognition software, there's probably already an Android app for that. Sure, there's a lot of work involved - it has to recognize the person, it has to drive around autonomously, it has to aim and fire. But most of those may be pluggable components of hardware and software soon, coming from perfectly innocent projects.
I think drones may well be a big problem in the near future.
There's money in that. There's less (steady) money in assassinating the president. In fact, I imagine assassinating a president creates more problems than it prevents.
You don't have to think of criminal organizations, since they have little to gain from attacking public officials -- that is just likely to attract too much attention.
Think of China sending a drone and using it to assassinate the president, then making it fly away and destroy itself over water or some other place where it won't be recovered.
Later on, it might be possible for people like McVeigh to use a drone to kill the president rather than blow up a federal building.
In your metamorphosis example, would the finished product essentially be a robot? Would you have the same problem when a software agent is transferred from one computer to another?
Assume a negative answer. Then the interesting implicit belief is that there is something special about bodily composition, whereas in physical law there is nothing unique in principle about the body. So the laws would have to be wrong or incomplete (failing to include consciousness) in an important way.