Feigned Intelligence
One question about future AI systems which can be addressed at the level of today's systems is
that of "feigned intelligence"--AI systems whose user interface causes them to appear more
intelligent than they actually are. A common example of this is the anthropomorphic on-screen
character; Heckman and Wobbrock point out that "[p]eople attribute intelligence and personality
to media, especially on-screen characters. While competence and trust are two characteristics that
must be built into agents, graphical agents can be imputed with greater confidence than they
deserve" (Heckman and Wobbrock 7). Here, as with the rest of the paper, the authors recommend
that the designer of the system be held responsible for misconceptions of the agents' ability due to
a misleading graphical interface. They quote Peter Neumann that "designers of human interfaces
should spend much more time anticipating human foibles" (Heckman and Wobbrock 8).
Forester and Morrison point out that expert systems may represent knowledge in any number of
structural ways, but all expert systems operate by applying deductive or inductive methods to a
body of knowledge. There are two potential ethical difficulties here. The first is that a certain
amount of liability will have to be assumed by the experts who supply the system's expert
knowledge. If a company is publishing a medical diagnostic program, it must be sure that the
doctors it uses for its diagnostic information have good diagnostic skills and are unlikely to
misrepresent diseases; there is also, for instance, the question of what to do if expert doctors
disagree on a diagnosis. Include both opinions in the system? Pick the more conservative one to
prevent panic? Pick the more severe one to prevent a user from thinking that a condition is milder
or more stable than it actually is? The second problem is that the path an expert system takes to
make its decision must be clear and available to the user. Again, in the case of a medical diagnostic
system, if a user tells the software that he has a stuffy head and a cough, and the software claims he
has pneumonia, it must be clear why the software has made this decision so that the user can
evaluate the system's answer and decide whether he agrees or not. The more intelligent a system
appears, the clearer the decision-making process must be, or the user may ascribe credence to the
system's output that it does not deserve.
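To make the explanation requirement concrete, the following is a minimal, hypothetical sketch (in
Python) of a rule-based diagnostic system; the rules and symptom names are invented for
illustration and are not drawn from the sources cited above. The point is only that the program
reports the findings that triggered its conclusion alongside the conclusion itself, so the user can
inspect the reasoning:

    # A minimal, hypothetical forward-chaining diagnostic sketch.
    # The rules and symptom names are invented for illustration only.

    RULES = [
        # (conclusion, findings required before the rule fires)
        ("common cold", {"stuffy head", "cough"}),
        ("pneumonia", {"cough", "fever", "chest pain"}),
    ]

    def diagnose(findings):
        """Return each diagnosis whose conditions are satisfied, paired with
        the findings that justified it, so the reasoning is inspectable."""
        conclusions = []
        for conclusion, required in RULES:
            if required <= findings:
                conclusions.append((conclusion, sorted(required)))
        return conclusions

    if __name__ == "__main__":
        reported = {"stuffy head", "cough"}
        for conclusion, because in diagnose(reported):
            print(conclusion + ": supported by " + ", ".join(because))

Because each conclusion is reported together with the findings that produced it, a user who enters
only a stuffy head and a cough can see at once that the system has no grounds for a pneumonia
diagnosis, which is exactly the kind of transparency described above.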