Sparse Coding Lab
SPiking And Recurrent SoftwarE Coding Lab @ Drexel University
Research inspired by breakthroughs in computational and theoretical neuroscience that incorporate ideas not explored by current feed-forward deep learning architectures. Applications in Generative AI, Cryptography, and Distributed Systems.
128 dictionary elements learned after viewing 10,000 ImageNet images.
We are exploring AI frameworks that mimic how the mammalian brain senses and understands the world. Our goal is to develop an AI system that learns much like an infant does: by simply observing the world. Over time, the model should learn the structure of the world and the associations within it, and make accurate predictions. We are using neuromorphic software and hardware concepts such as sparse coding, top-down feedback, spiking neural networks, and neuronal dynamics to create a machine intelligence that has a better understanding of the world in which we live.
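The kind of observational learning described above can be illustrated with a minimal sparse coding sketch: infer a sparse code for each input, then nudge the dictionary toward reconstructing it. This NumPy toy uses random vectors as stand-ins for image patches; the dimensions, step sizes, and ISTA inference are illustrative assumptions, not the lab's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(x, D, lam=0.1, n_iter=100):
    """Infer a sparse code: min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant (largest singular value squared)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a -= D.T @ (D @ a - x) / L             # gradient step on the reconstruction term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold (sparsity)
    return a

# Toy dictionary learning: alternate sparse inference with a Hebbian-style update.
n_pix, n_atoms = 64, 128                       # e.g. 8x8 patches, 128 atoms as in the figure above
D = rng.standard_normal((n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary elements

for _ in range(50):                            # each iteration consumes one "patch"
    x = rng.standard_normal(n_pix)             # stand-in for a whitened image patch
    a = ista(x, D, lam=0.2, n_iter=50)
    D += 0.01 * np.outer(x - D @ a, a)         # move active atoms toward the residual
    D /= np.linalg.norm(D, axis=0)             # renormalize after each update
```

Run on real image patches instead of noise, the same loop yields the Gabor-like dictionary elements shown in the figure.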
E.Kim, C.Onweller, A.O'Brien, K.McCoy, "The Interpretable Dictionary in Sparse Coding", Explainable Agency in Artificial Intelligence Workshop, XAI AAAI 2021.
D.Schwartz, Y.Alparaslan, E.Kim, "Regularization and Sparsity for Adversarial Robustness and Stable Attribution", International Symposium on Visual Computing, ISVC 2020.
J.Carter, J.Rego, D.Schwartz, V.Bhandawat, E.Kim, "Learning Spiking Neural Network Models of Drosophila Olfaction", International Conference on Neuromorphic Systems, ICONS 2020.
Y.Watkins, E.Kim, A. Sornborger, G.T. Kenyon, "Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model", IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR-W 2020.
E.Kim, J.Rego, Y.Watkins, G.T. Kenyon, "Modeling Biological Immunity to Adversarial Examples", IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020.
E.Kim, J.Yarnall, P.Shah, G.Kenyon, "A Neuromorphic Sparse Coding Defense to Adversarial Images", International Conference on Neuromorphic Systems, ICONS, 2019.
E.Kim, E.Lawson, K.Sullivan, G.Kenyon, "Spatiotemporal Sequence Memory for Prediction using Deep Sparse Coding", Neuro-inspired Computational Elements Workshop, NICE, 2019.
J.Springer, C.Strauss, A.Thresher, E.Kim, G.Kenyon, "Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples", arXiv:1811.07211, 2018.
E.Kim, K.McCoy, "Multimodal Deep Learning using Images and Text for Information Graphic Classification", ACM SIGACCESS Conference on Computers and Accessibility, ASSETS, 2018 (Best Paper Nominee).
E.Kim, D.Hannan, G.Kenyon, "Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons", IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
N.Barari, E.Kim, "Linking Sparse Coding Dictionaries for Representation Learning", International Conference on Neuromorphic Systems, ICONS 2021.
M.Daniali, E.Kim, "Object Detection: Conservative Correct Humans vs. Overconfident Incorrect Machines", Grace Hopper Conference, GHC 2021.
M.Daniali, E.Kim, "Deep Learning is Confidently Incorrect Over Time", IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021.
J.Yarnall, P.Shah, E.Kim, "A Neuromorphic Sparse Coding Defense to Adversarial Images", Sigma Xi Student Research Poster Symposium, Villanova, 2019.
5th Year PhD Student
3rd Year PhD Student
3rd Year PhD Student
2nd Year PhD Student
2nd Year PhD Student
1st Year PhD Student
Jane Ha, CS, Research Assistant 2022, Coop 2021 - Neuro-inspired Machine Learning
Past Undergraduate Collaborators
Daniel Smith, ME, Summer Research 2021
Brian Nguyen, CS, STAR Scholar 2021
Jasmine Ben-Whyte, CS, STAR Scholar 2021
Isamu Isozaki, CS, Coop 2021 - Multimodal Machine Learning
David Schmidt, CS, Coop 2021 - Neuro-inspired Machine Learning
Hung Do, CS, STAR Scholar 2021
Jeff Winchell, CS, Senior Thesis, 2021 - Temporal Smoothing with Sparse Coding
David Linskens, CS, Coop 2021 - Neuro-inspired Machine Learning
Raisha Begum, CompE, Coop 2020 - Bias in Machine Learning
Zachary Nguyen, CS, STAR Scholar 2020
Muneera Cadersa, CS, STAR Scholar 2020
Salamata Bah, CS, STAR Scholar 2020
Stefan Wagner, CS, Summer Research 2020
Frequent External Collaborators
Garrett T. Kenyon, Ph.D. - Los Alamos National Laboratory
Yijing Watkins, Ph.D. - Pacific Northwest National Laboratory
Vikas Bhandawat, Ph.D. - Biomedical Engineering, Drexel University
Kathleen McCoy, Ph.D. - University of Delaware
Connor Onweller, Ph.D. student - University of Delaware
Joseph Toscano, Ph.D. - Villanova University
While deep learning continues to permeate all fields of signal processing and machine learning, a critical exploit of these frameworks exists and remains unsolved. These exploits, called adversarial examples, are a type of signal attack that can change the output class of a classifier by perturbing the stimulus signal by an imperceptible amount. The attack takes advantage of statistical irregularities within the training data, where the added perturbations can "move" the image across deep learning decision boundaries. Even more alarming is the transferability of these attacks across deep learning models and architectures: a successful attack on one model has adversarial effects on other, unrelated models. In a general sense, adversarial attack through perturbation is not unique to machine learning. Human and biological vision can also be fooled by various methods, e.g. mixing high- and low-frequency images together, altering semantically related signals, or sufficiently distorting the input signal. However, the magnitude of distortion required to alter biological perception is much larger. In this work, we explored this gap through the lens of biology and neuroscience in order to understand the robustness exhibited in human perception. Our experiments show that by leveraging sparsity and modeling biological mechanisms at a cellular level, we are able to mitigate the effect of adversarial alterations to the signal that have no perceptible meaning. Furthermore, we present and illustrate the effects of top-down functional processes that contribute to the inherent immunity of human perception, and we exploit these properties to build a more robust machine vision system.
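The core defense intuition above can be sketched in a few lines: project a perturbed input onto a sparse generative model before classification, so off-manifold perturbations are shrunk away. This NumPy toy uses a random dictionary and a synthetic one-atom signal as illustrative assumptions; the papers above use learned dictionaries, real images, and richer biological mechanisms.

```python
import numpy as np

def sparsify(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code for x via ISTA, then return the reconstruction D @ a.
    The reconstruction keeps only structure expressible by the dictionary."""
    L = np.linalg.norm(D, ord=2) ** 2              # step-size constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a -= D.T @ (D @ a - x) / L                 # gradient step on reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # sparsify
    return D @ a, a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                     # unit-norm dictionary atoms

clean = 1.5 * D[:, 3]                              # signal that lies on the dictionary
attacked = clean + 0.05 * rng.standard_normal(64)  # small additive perturbation
recon, code = sparsify(attacked, D, lam=0.1)       # "cleaned" input for the classifier
```

Because the perturbation has little support in the sparse code, the reconstruction stays close to the clean signal while discarding much of the added noise.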
PetaVision is an open-source, object-oriented neural simulation toolbox optimized for high-performance multi-core, multi-node computer architectures. It is intended for computational neuroscientists who seek to apply neuromorphic models to hard signal processing problems, both to improve on the performance of existing algorithms and to gain insight into the computational mechanisms underlying biological neural processing.
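One neuron model family widely used in such sparse coding simulations is the Locally Competitive Algorithm (LCA): leaky-integrator units driven by the input and inhibited laterally by neighbors with overlapping receptive fields. The NumPy sketch below is a toy illustration of those dynamics under assumed parameters, not PetaVision code or its API.

```python
import numpy as np

def lca(x, D, lam=0.1, tau=10.0, n_steps=300):
    """Sparse inference via LCA dynamics: membrane potentials u integrate
    feed-forward drive minus lateral inhibition; activations a are the
    soft-thresholded potentials."""
    G = D.T @ D - np.eye(D.shape[1])       # lateral inhibition (atom overlap minus self)
    b = D.T @ x                            # feed-forward input drive
    u = np.zeros(D.shape[1])               # membrane potentials
    a = np.zeros(D.shape[1])               # thresholded activations
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (b - u - G @ a) / tau         # Euler step of the leaky dynamics
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary
x = 2.0 * D[:, 5]                          # signal built from a single atom
a = lca(x, D)                              # sparse code converges onto that atom
```

At equilibrium these dynamics settle on the same sparse codes a gradient solver would find, but the formulation maps naturally onto parallel, neuron-level hardware and simulators.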
Neuroscientists are rapidly converging on answers to once seemingly intractable questions: How do we learn? How are memories stored? How is information represented across ensembles of cells? How do neural cells compute? This talk will survey recent progress in experimental and theoretical brain science, including contributions from Los Alamos researchers, and present some novel hypotheses as to how biological information processing differs fundamentally from conventional digital computation.
Spatiotemporal Sequence Memory for Prediction using Deep Sparse Coding. For our project, we sought to create a predictive vision model using spatiotemporal sequence memories learned from deep sparse coding. This model is implemented using a biologically inspired architecture: one that utilizes sequence memories, lateral inhibition, and top-down feedback in a generative framework. Presented at the NICE 2019 conference.
This material is based upon work supported by the National Science Foundation under Grant Nos. 1846023 and 1954364.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the National Science Foundation.