Connectionism
Connectionism is that approach to AI which takes as its basis the biological workings of the human
brain. The general idea behind connectionism is simply that the human brain is the best example of
intelligence we have, so why not try to model it directly? Connectionism started slowly, largely
because researchers were nervous about trying to duplicate the complex and frightening human
brain, and because at the time of its invention, GOFAI systems seemed like they were heading for
success. Newell and Simon euphorically crowed:
Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed
computer can be programmed to exhibit them also (Dreyfus and Dreyfus 19).
Quietly, in a corner away from the mainstream of AI, Frank Rosenblatt was trying to model the
brain rather than the mind. The question of how neurons, which clearly have no intelligence individually, could demonstrate intelligence collectively was addressed by D. O. Hebb in 1949. Hebb suggested that neuronal learning takes place when, in a system of neurons, two neurons A and B are simultaneously excited, and the strength of the connection between them increases as a result.
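Hebb's suggestion can be sketched as a simple weight-update rule, in which co-activation of two connected units strengthens the connection between them. The learning rate and activation values below are illustrative assumptions, not part of Hebb's original formulation:

```python
# Minimal sketch of Hebbian learning: when two connected units are
# active at the same time, the weight between them increases.

def hebbian_update(weight, activation_a, activation_b, learning_rate=0.1):
    """Increase the connection weight in proportion to co-activation."""
    return weight + learning_rate * activation_a * activation_b

w = 0.5
w = hebbian_update(w, 1.0, 1.0)  # both units active: weight grows
w = hebbian_update(w, 1.0, 0.0)  # one unit inactive: weight unchanged
```

Note that the weight only changes when both activations are nonzero, which is exactly Hebb's condition of simultaneous excitation.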
Rosenblatt created the perceptron, a simple two-layer artificial neural network, which could be
shown to classify certain patterns as similar and other patterns as dissimilar. This was a
revolutionary moment: Rosenblatt had not explicitly told the perceptron what defined similar
patterns, but rather the perceptron had "discovered for itself" what constituted a pattern.
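The learning procedure behind this "discovery" can be sketched in a few lines. The example below trains a perceptron on the logical OR function; the training data, learning rate, and epoch count are illustrative assumptions, but the error-driven weight update is the core of Rosenblatt's rule:

```python
# Sketch of perceptron learning: weights are nudged toward reducing the
# error on each labelled example, with no explicit description of the
# pattern ever given to the network.

def train_perceptron(samples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Step activation: fire if the weighted sum crosses the threshold.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            # Adjust each weight in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# The network is never told what "OR" means; it settles on weights that
# reproduce the pattern from the labelled examples alone.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, inputs)` reproduces OR on all four input pairs.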
Nevertheless, GOFAI researchers pointed out that "the machines usually work quite well on very
simple problems but deteriorate very rapidly as the tasks assigned to them get harder" (Dreyfus
and Dreyfus 20). This is, of course, a major problem with GOFAI systems as well. As neural network research continued, it became clear that three-layer networks, unlike two-layer ones, seemed able to solve any problem a Turing machine could. One major advantage of connectionist systems is that, because they are distributed and massively parallel, their intelligence seems to increase with size: rather than redesigning the system from scratch every time one wants to solve a new problem, one can simply adjust the architecture of the network.
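The extra power of a third layer can be seen with XOR, a classic pattern that a two-layer perceptron provably cannot classify. The hand-wired weights below are an illustrative assumption, chosen so that the hidden layer computes OR and AND, from which the output layer recovers XOR:

```python
# A hand-wired three-layer network computing XOR, which no two-layer
# perceptron can represent. Weights and thresholds here are chosen by
# hand for illustration, not learned.

def step(x):
    """Threshold activation: fire (1) when the input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR of the inputs
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: AND of the inputs
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e. XOR
```

The hidden layer re-represents the inputs so that the output unit faces a linearly separable problem, which is precisely what the two-layer perceptron lacked.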
There are two problems with the connectionist approach. The first is its unpredictability, which matters for pragmatic purposes. Sometimes a neural network will come up with a solution to the problem that is empirically correct, but not what its human designers were hoping for. Examples abound of image processors that learned to get the right answers by recognizing the wrong patterns in images. Also, connectionist architectures are subconceptual: no meaning can be derived from an individual neuron, and it is frequently very difficult, if not impossible, to analyze the way a neural network gets its answer. This combination means that industry is wary of connectionist systems because of their unpredictability, and those who want to model human intelligence aren't happy with systems that cannot be analyzed. If one can't show how an artificial neural network creates intelligent behavior, then proving that it can does not necessarily offer any conclusions about human cognition, other than that the network resembles the human brain at some level of resolution. As a result, though connectionism is much more in vogue these days than GOFAI, its applications are still unclear.