A More General Critique of AI
There is, however, a critique that applies even to the connectionist paradigm, set forth recently by Terry
Winograd, the same researcher who helped propel the GOFAI revolution of the late 1960s with his
SHRDLU program. Winograd argues that the danger of the rationalist tradition is its
assumption that reality has an objective existence independent of the observer, and that cognition
consists of manipulating mental representations of this world. Instead, he argues, cognition is a
dynamic interaction with the world in which our brain helps determine how the world is perceived, rather
than simply grasping an objective world. Forester and Morrison give the example of a stick illuminated
from one side with white light and from the other with red light: the resulting shadow appears green,
even though no light in the range of the spectrum we normally call green is shining on the stick.
The internal pattern of our retinal states determines our perception, says Winograd, and
not the other way around.
The problem with "building" a cognitive system is therefore that cognition becomes a series of
provocations of the nervous system by the environment. Both the range of stimuli that can
successfully provoke our nervous system and the range of possible effects of these provocations
are determined by human evolution, in the case of innate cognitive properties, and in the case of
learning, a smaller-scale evolutionary and historical process of interacting with the world and
selecting certain stimuli and responses. That is, we have evolved to perceive green in certain
circumstances, and there is no one-to-one mapping between "green in the world" and "green in
our minds."
This raises several major questions for AI. It suggests, for instance, that language might not be
the decidable logical construct most of AI assumes it to be, but rather a historically constructed
phenomenon tied closely to the nature of human experience. This would mean that for computers
to develop human-like intelligence, they would need to develop interactively with their environment,
and while this is not outside the realm of possibility, it is not clear how such interaction would take
place. Moreover, it raises the question of whether computers can interact with their environment
in a human-like way without having human-like bodies. If computers cannot see the
world or touch objects the way humans do, could they
develop human-like intelligence? Or, to take a simpler example, would developing a computer's
intelligence to the level of a five-year-old require leaving the computer turned on for five straight
years, an unreasonable request of almost any computer system today? As our conception of the
world around us changes, our conception of the project of artificial intelligence must change as well.