This course covers a broad range of topics related to parallel and
distributed computing, including parallel and distributed architectures
and systems, parallel and distributed programming paradigms, parallel
algorithms, and scientific and other applications of parallel and distributed
computing. In lecture/discussion sections, students examine both
classic results as well as recent research in the field. The lab portion
of the course includes programming projects using different programming
paradigms, and students will have the opportunity to examine one course
topic in depth through an open-ended project of their own choosing.
Course topics may include: multi-core, SMP, MPP, client-server,
clusters, clouds, grids, peer-to-peer systems, GPU computing,
scheduling, scalability, resource discovery and allocation,
fault tolerance, security, parallel I/O, sockets, threads, message
passing, MPI, RPC, distributed shared memory, data parallel languages,
MapReduce, parallel debugging, and applications of parallel and distributed
computing.
Prereqs: CS31 and CS35, plus at least one prior upper-level CS course.
Designations: NSE, W (Writing Course), CS Group 2 Course
Course Goals
- Analyze and critically discuss research papers both in writing and
in class
- Formulate and evaluate a hypothesis by proposing, implementing
and testing a project
- Relate one's project to prior research via a review of related literature
- Write a coherent, complete paper describing and evaluating a project
- Orally present a clear and accessible summary of a research work
- Understand the fundamental questions in parallel and distributed
computing and analyze different solutions to these questions
- Understand different parallel and distributed programming paradigms
and algorithms, and gain practice implementing and testing solutions
using them
Course Webpages:
Current or most recent: CS87: Fall 2023
Older: CS87: Fall 2021,
CS87: Spring 2020 (half semester online),
CS87: Spring '18, Spring '16, Spring '12, Spring '10