GOFAI
GOFAI (Good Old-Fashioned Artificial Intelligence) is also known as "symbolicism," for its attempt to describe intelligence in symbolic terms.
Its basis is what has been termed the "symbol system hypothesis," which states that it is possible
to construct a universal symbol system that is intelligent. Since a computer is nothing more than a
universal symbol system, this amounts to the claim that computers are the right kind of machine for thinking.
To explain: for all computers can do, they are in essence just manipulators of symbols, where the
symbols are collections of 1's and 0's and are organized in words and lines and pages. A computer
can copy symbols, move them, delete them, write a sequence of symbols in a specific place, compare
symbols in two different places, and search for symbols that match certain patterns. All the more
complicated behavior exhibited by computers is simply the result of combinations of these basic
manipulations. The revolutionary move made by Turing was the creation of the universal symbol
manipulator, on which one can represent any other symbol manipulator. This is how computers can
run programs that do anything, and how computers can be made to act like other computers.
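To make this concrete, a minimal sketch of these basic manipulations in Python might look like the following; the memory layout, function names, and example strings are purely illustrative, not those of any particular machine:

```python
# A minimal sketch of the primitive symbol manipulations described above,
# applied to a "memory" that is nothing more than a list of characters.

memory = list("HELLO WORLD")

def write(mem, pos, symbols):
    """Write a sequence of symbols at a specific place."""
    mem[pos:pos + len(symbols)] = list(symbols)

def copy(mem, src, dst, length):
    """Copy symbols from one place to another."""
    write(mem, dst, mem[src:src + length])

def same(mem, a, b, length):
    """Compare symbols in two different places."""
    return mem[a:a + length] == mem[b:b + length]

def find(mem, pattern):
    """Search for symbols that match a certain pattern."""
    return "".join(mem).find(pattern)   # -1 when there is no match

write(memory, 6, "THERE")               # memory now reads "HELLO THERE"
copy(memory, 0, 6, 5)                   # ... and now "HELLO HELLO"
print("".join(memory), same(memory, 0, 6, 5), find(memory, "LO"))
```

On this view, everything a computer does is some composition of operations no richer than these.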
The recipe for building an intelligent machine out of a computer is (a toy sketch of the whole recipe follows the list):
- Use a sufficiently complex code to represent real-world objects, events, relationships, etc.
- Create a "knowledge base," a vastly interconnected system of symbols inside a universal symbol
system, e.g. a digital computer.
- Use input devices to create symbolic representations of changing stimuli in the machine's
proximity.
- Apply complex sequences of the symbol system's manipulations to the symbol structures so that
new symbol structures result, which are designated as output.
- Interpret the symbolic output into appropriate behavioral responses.
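A toy sketch of the whole recipe, assuming nothing more than Python strings for symbols, a set of triples for the knowledge base, a function argument standing in for an input device, and a returned string standing in for behavior, might look like this:

```python
# A toy sketch of the recipe above, under heavy simplifying assumptions:
# symbols are Python strings, the knowledge base is a set of triples,
# and a returned string stands in for a behavioral response.

knowledge_base = {
    ("fire", "is", "hot"),       # stored real-world relationships
    ("hot", "causes", "harm"),
}

def perceive(stimulus):
    """Input device: turn a raw stimulus into a symbol structure."""
    return ("observed", "is", stimulus)

def infer(kb, percept):
    """Symbol manipulation: derive new symbol structures from old ones."""
    _, _, thing = percept
    return {(thing, "classified-as", prop)
            for (subj, rel, prop) in kb
            if subj == thing and rel == "is"}

def act(kb, conclusions):
    """Interpret the symbolic output as a behavioral response."""
    for (_, _, category) in conclusions:
        if (category, "causes", "harm") in kb:
            return "withdraw"
    return "do nothing"

new_symbols = infer(knowledge_base, perceive("fire"))
print(act(knowledge_base, new_symbols))   # -> withdraw
```

Nothing in this sketch is intelligent, of course; the GOFAI claim is that the same recipe, scaled up to a sufficiently complex code and a sufficiently rich knowledge base, would be.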
The hypothesis of GOFAI is simply that this is the correct recipe and that therefore this is how one
should proceed to create intelligent agents. There is also a "strong symbol system hypothesis"
which states that only universal symbol systems are capable of thinking, and this is how the idea of
modelling human intelligence is applied to GOFAI: human beings, and in fact all living things
which show intelligent behavior, are, on close inspection, computers made out of different materials.
Newell and Simon agree with this hypothesis when they claim that "[s]ymbols lie at the root of
intelligent action," and the thesis that the brain is primarily a manipulator of symbols has become a
popular one among many AI researchers and philosophers. The hypothesis has been argued against
vehemently by two camps. Such researchers as John Searle and Karl Lashley have pointed out that
scientists have always modelled the brain on the most fashionable technology of the day; Descartes
developed a theory comparing the brain to a water clock, and Leibniz claimed that the brain could be
likened to a mill. The response to this is typical of AI's future-looking premise: just because the
brain has been shown not to be a mill or a telephone switchboard does not mean that the brain is
not a computer. This is a perfectly reasonable response, since the symbol hypothesis is an empirical
theory, although the lack of definitive empirical evidence after fifty years might make some
researchers think twice about it.
The other criticism of GOFAI in relation to human intelligence is that there is no evidence in the
physiology of the brain that the symbol is the atomic unit of cognition in the same way it is in a
computer. Generally such critics argue that connectionism offers a much closer model of human
intelligence by using the artificial neuron as its atomic unit.
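To make the contrast concrete, a minimal sketch of that atomic unit, a single artificial neuron computing a weighted sum and a threshold, is given below; the weights are chosen by hand purely for illustration:

```python
# A minimal sketch of the connectionist atomic unit: an artificial neuron that
# computes a weighted sum of its inputs and passes it through a threshold.
# No symbol structures appear anywhere in the computation.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if activation > 0 else 0.0   # simple step activation

# Hand-picked weights that make this single neuron behave like logical AND.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```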