Jeremy Johnson is a Professor in the Departments of Computer
Science and Electrical and Computer
Engineering. He just completed a ten-year term as Department Head of the
Computer Science Department. He received a B.A. in Mathematics from
the University of Wisconsin-Madison in
1985, an M.S. in Computer Science from the University
of Delaware in 1988, and a Ph.D. in Computer Science from The
Ohio State University in 1991.
Dr. Johnson's research interests include algebraic algorithms, computer
algebra systems, problem solving environments, programming languages and
compilers, high performance computing, hardware generation, and automated
performance tuning. He is currently working on SPIRAL,
a joint research project with Carnegie Mellon
University, the University of Illinois at
Urbana-Champaign, and ETH Zurich,
to develop techniques to automatically implement and optimize
signal processing algorithms. He is Director of the Applied Symbolic Computing Lab (ASYM), with projects in signal
processing, communications, scientific computing, computer algebra and
power systems funded by DARPA, NSF, DOE,
and Intel. He currently serves as chair of
ACM SIGSAM, the special interest group
in symbolic and algebraic manipulation, and of the Franklin Institute's
Computer and Cognitive Science cluster in the
Committee on Science and the Arts.
- CS 270 Math Foundations of CS - sec. 1 (MW 12:00-2:00 in 3020 Market Room 445) - See BBLearn for syllabus and course materials.
Office hours: W 10-12 (in UC 100C). Additional hours, including online, can be arranged by appointment.
I can be reached via email as well: johnsojr AT drexel DOT edu
Senior Design Projects
- Gavin Harrison (co-advisor Dave Saunders), High-Performance Exact Linear Algebra.
- Mark Boady (co-advisor Pavel Grinfeld), Symbolic Tensor Analysis, 2016.
- Lingchuan Meng, Automatic Library Generation and
Performance Tuning for Polynomial Multiplication, 2015.
- Petya Vachranukunkiet (co-advisor Prawat Nagvajara),
Power Flow Computation Using Field Programmable Gate Arrays.
- Anthony Breitzman, Automatic Derivation and
Implementation of Fast Convolution Algorithms, 2003.
- Anatole Ruslanov (co-advisor Werner Krandick),
Architecture Aware Taylor Shift by 1, 2006. Assistant
Professor, Department of Computer and Information
Sciences, SUNY Fredonia, Fredonia, NY.
- Pinit Kumhom (co-advisor Prawat Nagvajara), Design,
Optimization, and Implementation of a Universal FFT
- Kevin Cunningham (co-advisor Prawat Nagvajara) -
High-Performance Architectures for Accelerating Sparse
LU Computation, 2011.
- Michael Andrews (MS) - Performance Models
- Doug Jones - Data Pump Architecture Simulator and
Performance Model, 2010.
- Gavin Harrison (co-advisor Prawat Nagvajara) -
Hardware for Sparse Matrix-Vector Multiplication, 2010.
- Timothy Chagnon - Architectural Support for Direct
Sparse LU Algorithms, 2010.
- Anupuma Kurpad (co-advisor Prawat Nagvajara) -
Comparative Performance Analysis of Phase Recovery
Algorithm for Microstructure Reconstruction, 2009.
- Pranab Shanoy, Universal FFT Core Generator, 2007.
- Mihai Furis, Cache Miss Analysis of Walsh-Hadamard
Transform Algorithms, 2003.
- Xu Xu, A Recursive Implementation of the
Dimensionless FFT, 2003.
- Michael Balog (co-advisor Prawat Nagvajara), A
Flexible Framework for Implementing FFT Processors,
- Kang Chen, A Prototypical Self-Optimizing Package
for Parallel Implementation of Fast Signal Transforms,
- Hung-Jen Huang, Performance Analysis of an Adaptive
Algorithm for the Walsh-Hadamard Transform, 2002.
- Peter Becker, A High Speed VLSI Architecture for the
Discrete Haar Wavelet Transform, 2001.
- Rich Pedersen, A Simple Model for the Runtime
Performance of Finite Fourier Transform Algorithms,
- Tim Chagnon
- Aliaksei Sandryhaila
- Yevgen Voronenko
Research Labs and Projects
- Application Specific Computing Research Group
- Applied Symbolic Computation
Created: 7/18/96 (last revised 1/2/02) by email@example.com