Don't Fear a Superintelligent AI

I have just watched the TED talk "Don't Fear a Superintelligent AI". In it, the speaker, Grady Booch, argues that a sentient AI is not an existential threat to humanity but rather a tool for good, one that will enhance the human experience and extend what humanity can achieve. Grady notes that many major technological developments were met with significant anxiety, yet each ultimately enhanced the human experience in some way.

Grady asserts that an AI can be trained to be ethical by presenting it with a series of ethical situations and teaching it to make sound ethical decisions. He therefore believes the fear of a sentient AI threatening humanity is unfounded: an AI can be trained in ethics, and, if truly necessary, it can be shut down.

I believe his first point is correct: an AI could be trained to match the ethics of its creators. I don't see why that would necessarily assuage any fears, though, since ethical standards vary so much between people, and most groups of people tend to develop ethics that permit attacking outsiders. Ethical training therefore doesn't prevent a future AI from being a threat to some people. The only thing it renders unlikely is an AI that hates all humans universally, as seen in science fiction.

His second point, that an AI can be shut down, is also correct. Yet this doesn't help, because as long as the AI benefits its creators, they have no reason to shut it down regardless of the harm it does more broadly. In any realistic scenario in which an AI is harmful to humanity, it would still likely benefit its creators even while it disadvantaged people in general.

Yet I agree that we shouldn't live in fear of sentient AI development, though for a different reason. A sentient AI is inevitable, and casting such research as immoral does nothing to change that outcome; it only makes harmful AIs more likely. If those who care most about humanity's benefit refuse to study artificial intelligence, they may marginally slow the field's development, but they also withdraw their positive influence from it. The best thing a person who fears the effects of sentient AI can do is get involved in AI development and act as a positive agent.