
“In the perfect world, the drone should take off, fly, find the target, strike it, and report back on the task,” Burukin says. “That’s where the development is heading.”

That's the problem in a nutshell. A few years back, few would have argued against keeping a human in the kill/no-kill decision chain. It took just one war for pop tech authors to write about it without even a mention of the ethical considerations of autonomous killing machines.



It highlights an awkward apparent fact: 'ethics' and 'honor' are luxuries maintained by those on top, and they get thrown out the door as soon as an actual threat appears. I'm not saying that's right, only that it appears to be the predictable response.

I suppose it axiomatically highlights how hollow ethics become when they must be defined in a might-makes-right manner. These are all very high-minded and complex questions, but they leave the awkward one unanswered: what are we supposed to do?


I don’t really want to argue this side, but is it that different from a smart bomb or guided missile? A human is in the loop; the human issues the coordinates of the target to the delivery vehicle.

That kind of operation seems extremely different from a stationary turret or patrol robot with standing orders to fire on arbitrary targets whenever it decides to.


I mean, I think it's relevant that these machines weren't actually used to kill.



