I'm trying to say that our means of keeping the moral high ground are subjective and based on heuristics. It seems to me that you don't notice this; let me show you.
> If you cut power to an AI and then come back in a year, it's all still there.
Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs that store their weights in DRAM?
> Most of what causes us to have sympathy for living things or treat them with compassion just doesn't apply.
I believe slaves didn't trigger sympathy in slave-owners, but that doesn't stop us from believing that slavery is bad. I admire that you are not like this and that your sympathy extends to all living things, but it is your subjective way of deciding what is moral and what is not. Other people may feel differently; what should they do to be no less moral than you? Or can you and I become even better and hold even higher moral standards?
> If you do the equivalent to a computer program and the result is undesirable, it can roll back to a previous snapshot.
If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
Questions like this may be impractical (because we cannot restore a human burned to ashes), but our hesitation to answer them shows the limitations of our ways of thinking about such problems.
Humanity could benefit a lot from an objective way of dealing with moral dilemmas, based not on heuristics but on universal laws, as physics is. It might help people understand each other and find ways to live together without fighting. I'm not sure that morality can be objective and based on a universal law, but that is no reason to stop thinking. When you think about it, you find new corner cases and specific solutions to them. At the very least, it makes your heuristics better.
> Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs that store their weights in DRAM?
Is this supposed to be hard to distinguish? Destroying something is clearly different. But you could still "shut down" an AI that normally stores its state in volatile memory by saving that state to non-volatile memory first. We don't know how to do that with humans.
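The distinction can be made concrete with a toy sketch (the state dictionary, filename, and fields here are purely illustrative, not any real system's format): "shutting down" an in-memory program just means persisting its volatile state before cutting power, after which nothing is lost.

```python
import pickle

# Hypothetical in-memory "state" of a running program (contents are illustrative).
state = {"weights": [0.1, 0.2, 0.3], "step": 42}

# "Shutting down" without destroying anything: persist volatile state to disk...
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(state, f)

# ...and later, "power back on" by restoring it exactly as it was.
with open("checkpoint.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == state  # nothing was destroyed
```

Turning off a DRAM-resident AI only destroys something if you skip the checkpoint step; with it, powering off is morally indistinguishable from powering off one that already lives on disk.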
AIs are also different because they're often minor variants of each other. The value of information is largely in how much it diverges from what continues to exist. For copyable data, minor forks can't be as valued as major ones. We don't have the resources to permanently store everything that is ever temporarily stored in memory. So the answer to "can you destroy a minor variant" has to be yes as a matter of practicality.
Notice that this is already what happens with humans continuously. You're not the same person you were yesterday; that person is gone forever.
> I believe slaves didn't trigger sympathy in slave-owners, but that doesn't stop us from believing that slavery is bad.
I don't think the people in whom it didn't trigger sympathy thought it was bad. Some people are sociopaths. And some people at the time it was happening did have sympathy and did think it was bad.
> If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
If we could roll people back to a previous snapshot, then what you would be burning is meat. There are reasons you might want to prohibit that, e.g. because the meat is someone else's property, but it's no longer the same thing at all as murdering someone.