Intro to Expert Systems
This section is concerned largely with AI systems that have been fully developed and deployed
in industry; most of these are expert systems. We will first discuss
Product liability
which covers both expert systems that provide human operators with knowledge, for instance a
medical diagnostic program, and systems that perform some sort of complex behavior
autonomously, such as assembly-line robots. We are mostly concerned with legal liability, that is,
the question of who must take the blame when an autonomous agent fails. Second, we will address
what I have chosen to call
The Strangelove effect,
in which we raise the question of how much autonomy an autonomous system can and should be
given, perhaps in order to prevent liability issues from arising in the first place. Finally, there is the
ethical question of
Feigned intelligence
or how much intelligence an expert system actually possesses compared with how much it appears
or claims to possess. This last issue ties most closely into the question of AI's representation of
itself as far more advanced than it actually is.
It is worth noting that from here on out, this project will ask questions rather than attempt to
answer them. Its purpose is to explore the ethical questions raised by AI, not to propose a plan of
action.