CS 730 Parallel Processing

Course Description
Covers a variety of paradigms and languages for programming parallel computers. Several tools for debugging and measuring the performance of parallel programs will be introduced. Issues related to writing correct and efficient parallel programs will be emphasized. Students will have ample opportunity to write and experiment with parallel programs using a variety of parallel programming environments.
Course Objective
To be able to design, implement, and analyze correct and efficient parallel programs. To become familiar with existing parallel computers and programming environments for parallel computation. All students will complete an extensive programming project which will involve parallelizing an application of interest.
Prerequisites
CS 557 (Data Structures and Algorithms I) and CS 680 (Unix Programming Environment), or equivalent courses.
Instructor
Jeremy Johnson
Office: 271 Korman Center
Phone: 895-2893
E-mail: jjohnson@mcs.drexel.edu
Office hours: MW 4:30 - 6 and Th 4-5. Additional hours by appointment.
Meeting Time
W 6:00-9:00 in Korman 259
Textbook
There is no required text for the course. We will rely on references and lecture notes provided by the instructor and available from the course web page. The first three reference books listed in the course resources section of this web page would, when combined, serve as an appropriate text. The first of these books is out of print (selected material from it will be distributed); the second and third are available for purchase from the links provided or through other online sources.


Grading
  1. Programming Assignments (three, 10% each): 30%
  2. Midterm Exam: 30%
  3. Project: 40%

Final grades will be determined by your total points weighted according to this distribution. Grades will be curved based on relative student performance. Students who successfully complete all of the homework and do reasonably well on the exam and project should receive a B. Students with high exam and project scores and who do well on the assignments will receive an A.

All assignments must be completed alone unless otherwise stated. No late assignments will be accepted without prior approval.


Reference Books
  1. Nicholas Carriero and David Gelernter, How to Write Parallel Programs: A First Course, MIT Press, 1990.
  2. Rohit Chandra, Ramesh Menon, Leo Dagum, David Kohr, Dror Maydan, and Jeff McDonald, Parallel Programming in OpenMP, Morgan Kaufmann Publishers, 2000.
  3. Peter Pacheco, Parallel Programming with MPI, Morgan Kaufmann Publishers, 1996.
  4. Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra, MPI: The Complete Reference (Vol. 1 - The MPI Core), 2nd Ed., 1998. The first edition is available online at MPI: The Complete Reference.
  5. Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra, MPI: The Complete Reference (Vol. 2 - The MPI-2 Extensions), 1998.
  6. F. Thomas Leighton, Introduction to Parallel Algorithms and Architectures: Arrays, Trees & Hypercubes, Morgan Kaufmann Publishers, 1992.

Announcements (Mar. 8 @ 12:30 am)


Lectures
This list is tentative and may be modified at the instructor's discretion.
  1. Jan. 3, 2001 (Introduction to Parallel Programming with Linda)
  2. Jan. 10, 2001 (Designing and Implementing Parallel Programs with Linda)
  3. Jan. 17, 2001 (Performance Analysis and Optimization of Linda Programs)
  4. Jan. 24, 2001 (Shared Memory Programming with Threads)
  5. Jan. 31, 2001 (Shared Memory Programming with OpenMP)
  6. Feb. 7, 2001 (Midterm Exam)
  7. Feb. 14, 2001 (Shared Memory FFTs)
  8. Feb. 21, 2001 (Message Passing using MPI)
  9. Feb. 28, 2001 (MPI Performance Analysis and Modeling)
  10. Mar. 7, 2001 (Distributed Memory FFTs)
  11. Mar. 14, 2001 (Project Presentations/Final Exam)



Exam Study Guide


Created: 1/1/01 by jjohnson@mcs.drexel.edu