Hidden Technology
This project is not the place to wax philosophical about the nature of technology, but one thing
about which everyone agrees is that technology is human-made. As a result, it is within the domain
of human understanding. Even though you may have no idea how the internal combustion
engine under the hood of your car works, it is clearly technology, clearly out in the open, and there
are clearly people around you who understand how it works, who can credibly assure you that it will not
suddenly explode while you are driving, and who can fix it if it does break. While this is true of personal
computers, it is less true of smaller, intelligent computer systems that cannot be easily seen. There is
a chance today that if your toaster is burning all your toast, it is not a mechanical flaw but a problem
with its computer, a computer that you cannot see and maybe did not even know was there. People
gradually become unable to understand the subtle workings of their own environments.
Add to this the possibility that these hidden computers possess intelligence and power to act, and
the fear is obvious. Somehow, it seems reasonable that a computer might possess the intelligence
of a human without the ethics or morality of one, and might therefore do damage.
In many ways, this fear rises out of two ethical concerns over AI. The first is the way AI portrays
itself to the world. I have said earlier, perhaps exaggeratedly, that the hype around AI makes no
distinction between science and science fiction; science fiction is rather simply science-to-be-soon.
Although this breeds excitement about scientific progress, it also breeds fear. It is not clear
that the sort of hype that accompanies AI is appropriate if it encourages this level of technological
paranoia. The second ethical concern is that AI researchers must take their own ethics into account
when they create agents they intend to be intelligent. They must not allow themselves to be caught
up in the purity of pure science, but rather must weigh the ethical ramifications of their actions
before they act. The problem, as we can see, is determining what those ethical ramifications might
be when there is no clear plan for future action in AI.