The Strangelove Effect
I've named this section after the movie Dr. Strangelove, in which nuclear catastrophe results when a simple miscommunication sets a Soviet "Doomsday Device" on course to go off, and the very security measures designed to protect the system end up ensuring that it cannot be stopped. This throws into sharp relief the question: how much power should autonomous agents be given?
The answer to this question is clearly, "Enough power to maintain a useful autonomy, and little enough power to prevent damage." But it is not at all clear that the specifics of this answer can be systematized in any way. Not only does each autonomous agent require a different level of autonomy and have a different potential to do damage, but as autonomous agents become more and more advanced, their capacity for both autonomy and damage will increase. It is not clear at this point how best to limit their power, although those who are optimistic about advances in AI would likely claim that eventually systems will have enough autonomy for us to ascribe intentionality to them, at which point we could simply apply human laws to them. But that point seems distant at best, and it in fact reveals a problem with judging even the expert systems in operation today. They are not "intelligent" enough to warrant worry about how to judge liability; liability clearly falls to either the user or the designer in almost all cases. Yet there are as many potential futures for AI as there are people who think about AI, and so it is very difficult to construct an ethics ahead of time, however helpful it would be to have laws and ethics in place to handle the sort of AI that makes us question our own ideas of consciousness, if it ever arrives.
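To make the principle above slightly more concrete, here is a minimal, hypothetical sketch (nothing of the kind appears in the original argument) of one way software can bound an agent's power: the agent may only perform actions from an explicit allowlist granted by its operator, so its capacity for damage is limited by its permissions rather than by its own judgment. The class and action names are invented purely for illustration.

# A minimal, hypothetical sketch of bounding an agent's power with an
# explicit allowlist of granted actions. Names are illustrative only.

from dataclasses import dataclass, field


@dataclass
class ConstrainedAgent:
    """An agent that may only perform actions it has been explicitly granted."""
    allowed_actions: set[str] = field(default_factory=set)

    def act(self, action: str) -> str:
        # Refuse anything outside the granted allowlist rather than
        # trusting the agent to judge the consequences itself.
        if action not in self.allowed_actions:
            return f"refused: '{action}' is not a granted capability"
        return f"performed: {action}"


if __name__ == "__main__":
    agent = ConstrainedAgent(allowed_actions={"read_sensor", "send_report"})
    print(agent.act("read_sensor"))     # performed: read_sensor
    print(agent.act("launch_missile"))  # refused: 'launch_missile' is not a granted capability

Of course, such a sketch only restates the problem at another level: someone must still decide which actions belong on the list, and that is exactly the judgment this section argues cannot yet be systematized.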