Product liability
The question of liability is a major legal issue for AI: systems that act on their own initiative can make it very difficult to draw a distinct line between human fault and system fault. The clearest case is an AI system deliberately designed to cause harm; liability for the results of such a system would fairly clearly fall to the system's designer. Liability for negligence in the system's design would likewise fall to the designer, although negligence would be harder to prove. If a smart bomb were designed to seek out military installations and weapons factories, but not designed to avoid hospitals, liability for violating international military law would clearly rest with the designer of the smart bomb, not the smart bomb itself.
The fact that the idea of a smart bomb taking responsibility seems a bit ridiculous exposes the current state of most expert systems: they are still stupid enough that their behavior is obviously the result of their designers' programming. The question of responsibility becomes much murkier in the case of experimental, more autonomous agents. Autonomy, as Carey Heckman and Jacob Wobbrock have pointed out, raises problems of causality; with most software, damage caused by the program is the direct fault of the user's input or the system's programming, whereas with autonomous agents, the system's behavior may differ from the behavior the user expected. This effectively means that the designers of such systems will be implicated in damage caused by an autonomous agent far more often than the users of those agents will.
Further problems with autonomous agents arise from the indeterminacy of their behavior, their mobility, and the unpredictability of their use and their environment. In the end, though, the general conclusion seems to be that the designer should be held responsible in most cases, and that the designer should only release a product that has been thoroughly tested and prevented from engaging in damaging behavior. The problem there is that rigorous software testing is extremely difficult and never 100% complete, and the more advanced the AI, the less predictable and testable the system's behavior becomes. This raises another interesting question, to be explored in the next section: if the potential for damage by an autonomous agent cannot be fully eliminated, the agent's power must be limited. How much power should an autonomous system be given so that it can both maintain useful autonomy and be prevented from causing damage in the event of bugs?