Scientific limitations of the non-scientific idea that superintelligence will come (to exterminate humans)

Ernest Davis, Ethical guidelines for a superintelligence, Artificial Intelligence, Volume 220, March 2015, Pages 121-124, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.12.003.

Nick Bostrom, in his new book Superintelligence, argues that the creation of an artificial intelligence with human-level intelligence will be followed fairly soon by the emergence of an almost omnipotent superintelligence, with consequences that may well be disastrous for humanity. He therefore considers it a top priority for mankind to figure out how to imbue such a superintelligence with a sense of morality, while regarding this task as very difficult. I discuss a number of flaws in his analysis, particularly the viewpoint that implementing ethical behavior is an especially difficult problem in AI research.
