IMHO ethics will effectively be the defining characteristic of AI in the future. The companies building AI, however, will probably never mention it except in marketing copy.
That leaves literally everyone else holding the bag with regard to judging the industry's actions and holding it accountable for transgressions. I'm not sure how that will work out, but I'm not terribly hopeful...
Ethics may be a focus for people who are not actively working on the machines, but among those actively innovating in the AI space, the most powerful machines will be created by the people spending the least amount of time stressing over the ethics of their decisions. More powerful AI means a more competitive business, so the industry will self-select for those with lesser concern for the ethical implications of AI.
We already know that we are unable to hold industry accountable for unethical actions. How many times did we scream about Facebook's escalating privacy abuses? How many times do people talk about being uneasy because Google knows where their flights are, where their home is, where their favorite restaurants are, even when they never told it explicitly? And despite all the complaints, the industry giants committing the worst offenses remain the biggest giants. It's because violating the rights of your users makes you competitive, and the users can't tear themselves away from the increased power it lends the features.
We need to accept that when AI arrives, we're not going to have very many controls over what it does. Regardless of how terrible the implications might be, we're going to be about as effective at stopping the arrival of AI as we have been at stopping the arrival of global warming. We need to prepare for a post-AI future and just accept that it's going to be invasive and have relatively little regard for human ethical concerns, and instead be focused almost entirely on the things that make it competitive.
A major book about ethics, "After Virtue", takes as half of its subject matter what the author calls "the interminable nature of ethical debates" and the failure of post-Enlightenment ethical thinking.
I don't see how AI is going to suddenly make us capable of ethical reasoning on a large scale... unless... maybe AI could do the reasoning for us...
I'm probably going to be downvoted to oblivion, but I would actually be more at ease in general if most people deferred critical judgments to a reliable, open-sourced AI. In terms of driving, for example, my discomfort about how an AI would handle being Kobayashi Maru'd is far less significant than my discomfort about encountering a teenage driver.