The Killer Robots Problem
Yet another fear about AI research is that it will create intelligent systems that are flawed or lack a
moral code, and then grant them too much autonomy, causing problems when the machines make
decisions that violate basic human moral principles--for example, that humans should not be killed
and should be allowed to make their own decisions.
This is a theme that runs throughout science fiction, ever since the invention of the robot in Karel
Čapek's play, R.U.R. In that play, robots eventually revolt against their human masters, unwilling to
act as servants anymore--unwilling, that is, to be purely technology, tools for use by humans. The
causes of system failure among intelligent computers in science fiction vary widely: sometimes AI
systems rebel, sometimes they "go crazy" because of some unforeseen bug in the program, and
sometimes they make decisions which are perfectly rational within their own system of logic but
which involve killing humans, enslaving them, and so forth. This fear of AI is in many ways
understandable. GOFAI ("good old-fashioned AI"), the mainstream of the field for many years,
presented itself as purely mathematical and logical. For one thing, the pure sciences and
mathematics have long prided themselves on their objectivity--the idea that their fields were wholly
rational ways of advancing knowledge, with no connection to fallible emotions. At the same time,
our only working example
of human-like intelligence, that is, the human, certainly functions using emotions and ethics. It is
therefore not unreasonable for the public to consider ethics a necessary part of a sufficiently
intelligent system in order to make it workable in the real world, and to believe that a universal
symbol system of the kind GOFAI proposed would not be able to include things like empathy and
morality. Although connectionism is in a somewhat better position, because it does not claim pure
logic as its basic building block, it still raises the question of how systems like morality and
empathy arise out of neurons; on the other hand, one could ask the same question about the human
brain itself. In the end, AI researchers must address the question of how intelligent computers are to
make decisions which the general public will consider "ethical," if they are to create intelligent
systems that will interface with the real world.