This paper is due Thursday, May 16 (the final day of exams) at 5:00 PM. You may turn it in earlier. Email it to both of us (lmeeden1 and kthomas2) as a PDF attachment. Feel free to come discuss ideas during office hours.
-
In the Blueprint for an AI Bill of Rights, the final
provision states: “You should be able to opt out, where appropriate,
and have access to a person who can quickly consider and remedy
problems you encounter.” The claim in the blueprint seems to be a
moral/ethical one: morally, humans have a right to human
oversight. For this paper, construct an argument that tries to explain
this right. Why would a right to human oversight be morally valuable
or important? One obvious objection to think about here is the
following claim: automated decision-making is supposed to correct for
human flaws and biases. Why would someone want human oversight if
humans are flawed or biased?
-
The Nguyen and Llanera pieces are about how people can get
sucked into extreme views or echo chambers online. In this paper do
two things. First, explain why it is bad to be in an echo chamber or
to hold extreme views. Second, imagine you are writing a guidebook
that would help people avoid getting trapped in echo chambers or
extreme views. What kind of guidance would you give? Remember to
anticipate objections.
-
Facial recognition technology is one of the more ethically
fraught technological developments. We’ve read pieces that advance
many different arguments about its ethical implications. For this
paper, answer the following question: what is the most serious ethical
problem with facial recognition technology, and why? To get started, it
would be helpful for you to sort through the readings that discussed
it to find the ethical problems identified there (the movie Coded
Bias and Crawford’s Chapters 4, 5, and 6 are likely candidates, but
there are others). Your opponent will be someone who thinks there is
some other problem that is more serious than the one you chose, or
someone who thinks facial recognition technology is not in fact
morally fraught at all.
-
In Emily Bender's talk “Chat GP-Why: When, if ever, is
synthetic text safe, appropriate and desirable?” she stated that
LLMs "are nothing more than ungrounded text synthesis machines" that
"only model the distribution of words." With more and more training,
these models can now create very coherent and fluent responses that
seem plausibly correct, yet Bender would argue that they don't really
understand what they are producing. Can a completely disembodied
algorithm, one that never interacts with the real world, ever gain a deep
understanding of language and meaning? The Dreyfus chapters might be a
good resource in helping you answer this question. Remember to
anticipate an objection.