Must be something in the water

Monday, December 26, 2011

Machine Morality

I read this article recently in the New York Times. The topic is machine morality. Allen's fundamental argument is that, assuming machines do reach a human level of intelligence, we will have to find a way of instilling a sense of morality in them. What is interesting, of course, is that this means the morality of the machine will inevitably be the morality of its programmer. Yet the catch-22 is that for a machine to be genuinely moral, it has to be capable of deciding whether to obey its creator. In other words, it has to realize that it may ultimately have to choose to disobey its creator. But then, how does one program a machine that may specifically refuse to do as it is told? And is that something we want to build?