scANNA: Boosting Single-Cell RNA Sequencing Analysis with Simple Neural Attention
I formulated, designed, and co-first-authored scANNA (single-cell Analysis using Neural-Attention), an interpretable deep learning model leveraging neural attention that achieves state-of-the-art performance on a suite of challenging single-cell analyses, all with a single model!
Here is a very short abstract for our manuscript Boosting Single-Cell RNA Sequencing Analysis with Simple Neural Attention:
A significant drawback of current Deep-Learning (DL) approaches for Single-Cell RNA sequencing (scRNAseq) analysis is the lack of interpretability. Moreover, current pipelines utilize disjoint models for the various stages of analysis. We present a novel interpretable DL model for scRNAseq studies, called scANNA, that learns important biological concepts based on desired downstream tasks. We demonstrate that scANNA’s simple interpretability can be used to perform many challenging scRNAseq tasks.
N-ACT: Interpretable Deep Learning for Single-Cell RNA-Seq
I formulated, designed, and co-first-authored N-ACT (Neural-Attention for Automatic Cell Type identification), a first-of-its-kind interpretable deep learning framework for fully unsupervised identification of cell types present in scRNAseq, without any prior knowledge of the system.
Here is the abstract for our manuscript N-ACT: An Interpretable Deep Learning Model for Automatic Cell Type and Salient Gene Identification:
Single-cell RNA sequencing (scRNAseq) is rapidly advancing our understanding of cellular composition within complex tissues and organisms. A major limitation in most scRNAseq analysis pipelines is the reliance on manual annotations to determine cell identities, which are time-consuming, subjective, and require expertise. Given the surge in cell sequencing, supervised methods, especially deep learning models, have been developed for automatic cell type identification (ACTI), achieving high accuracy and scalability. However, all existing deep learning frameworks for ACTI lack interpretability and are used as "black-box" models. We present N-ACT (Neural-Attention for Cell Type identification): the first-of-its-kind interpretable deep neural network for ACTI utilizing neural attention to detect salient genes for use in cell-type identification. We compare N-ACT to conventional annotation methods on two previously manually annotated data sets, demonstrating that N-ACT accurately identifies marker genes and cell types in an unsupervised manner, while performing comparably on multiple data sets to the current state-of-the-art model in traditional supervised ACTI.
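To give a flavor of the attention mechanism described above, here is a minimal toy sketch (in NumPy, not N-ACT's actual architecture or parameters): per-gene scores are passed through a softmax to obtain attention weights per cell, and the genes receiving the highest average attention are read off as "salient" candidates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 4 cells x 6 genes (hypothetical data)
X = rng.random((4, 6))

# A single, illustrative attention scoring vector (random here; in a real
# model this would be learned end-to-end for the downstream task)
W = rng.normal(size=(6,))
scores = X * W  # per-cell, per-gene scores

# Softmax over genes -> attention weights that sum to 1 for each cell
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Attention-weighted expression, as could feed a downstream classifier head
pooled = (attn * X).sum(axis=1)

# "Salient genes" = genes receiving the highest average attention
salient = np.argsort(attn.mean(axis=0))[::-1][:2]
```

The key point is that the same weights `attn` used for prediction are directly inspectable, which is what makes attention-based models interpretable in this setting.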
Deep Learning + Spatial Transcriptomics
I first authored a mathematical/computational review paper on the applications of Deep Learning in Spatial Transcriptomics. The review targets a mathematical or computational audience, surveying current Deep Learning techniques and architectures used in Spatial Transcriptomics (single-cell omics). It can serve as a great starting point for mathematicians, physicists, or biologists with extensive mathematical backgrounds who aim to model complex biological systems, specifically single-cell omics. Our review paper has been submitted to the Biophysics Review Journal and is currently available on BioRxiv as a pre-print (linked below).
Here is the abstract for our review paper Deep Learning in Spatial Transcriptomics: Learning from the Next Next-Generation Sequencing:
Spatial transcriptomics (ST) technologies are rapidly becoming the extension of single-cell RNA sequencing (scRNAseq), holding the potential of profiling gene expression at a single-cell resolution while maintaining cellular compositions within a tissue. Having both expression profiles and tissue organization enables researchers to better understand cellular interactions and heterogeneity, providing insight into complex biological processes that would not be possible with traditional sequencing technologies. The data generated by ST technologies are inherently noisy, high-dimensional, sparse, and multi-modal (including histological images, count matrices, etc.), thus requiring specialized computational tools for accurate and robust analysis. However, many ST studies currently utilize traditional scRNAseq tools, which are inadequate for analyzing complex ST datasets. On the other hand, many of the existing ST-specific methods are built upon traditional statistical or machine learning frameworks, which have been shown to be sub-optimal in many applications due to the scale, multi-modality, and limitations of spatially-resolved data (such as spatial resolution, sensitivity, and gene coverage). Given these intricacies, researchers have developed deep learning (DL)-based models to alleviate ST-specific challenges. These methods include new state-of-the-art models in alignment, spatial reconstruction, and spatial clustering, among others. However, deep-learning models for ST analysis are nascent and remain largely underexplored. In this review, we provide an overview of existing state-of-the-art tools for analyzing spatially-resolved transcriptomics, while delving deeper into the DL-based approaches. We discuss the new frontiers and the open questions in this field and highlight the domains in which we anticipate transformational DL applications.
Deep Learning + Single Cell Omics
I co-first authored a multi-institutional, multi-national review paper, "Deep Learning Applications in Single-Cell Omics Data Analysis". This review, which targets a non-mathematical audience, provides a brief overview of common Deep Learning techniques and architectures, particularly those used for single-cell omics; it can be a great starting point for early practitioners who want to learn more about the specific architectures. Our review paper has been submitted to Wiley's Interdisciplinary Reviews (WIREs) Data Mining and Knowledge Discovery (DMKD) and is currently available on BioRxiv as a pre-print (linked below).
Here is the abstract for our review paper Deep learning applications in single-cell omics data analysis:
Traditional bulk sequencing methods are limited to measuring the average signal in a group of cells, potentially masking heterogeneity and rare populations. Single-cell resolution, however, enhances our understanding of complex biological systems and diseases, such as cancer, the immune system, and chronic diseases. At the same time, single-cell technologies generate massive amounts of data that are often high-dimensional, sparse, and complex, making analysis with traditional computational approaches difficult or infeasible. To tackle these challenges, many are turning to deep learning (DL) methods as potential alternatives to conventional machine learning (ML) algorithms for single-cell studies. DL is a branch of ML capable of extracting high-level features from raw inputs in multiple stages. Compared to traditional ML, DL models have provided significant improvements across many domains and applications. In this work, we examine DL applications in genomics, transcriptomics, spatial transcriptomics, and multi-omics integration, and address whether DL techniques will prove advantageous or whether the single-cell omics domain poses unique challenges. Through a systematic literature review, we find that DL has not yet revolutionized or addressed the most pressing challenges of the single-cell omics field. However, using DL models for single-cell omics has shown promising results (in many cases outperforming previous state-of-the-art models) in data preprocessing and downstream analysis, although many DL models still lack the needed biological interpretability. While the development of DL algorithms for single-cell omics has generally been gradual, recent advances reveal that DL can offer valuable resources in fast-tracking and advancing single-cell research.
UCM Deep Learning Group
I started UCM Applied Math's Deep Learning Group (DLG) in Fall 2020 (sponsored by UCM's SIAM chapter) with the main objective of diving deeper into the fundamentals of Machine Learning (Deep Learning in particular) and approaching the topics through the lens of mathematics. The DLG aims to provide a forum where faculty and students interested in Deep Learning can learn about the fundamentals and advances in the field, fostering broader discussions and collaborations.
A complete list of our readings throughout the semester, along with references and resources, is available in a dedicated GitHub repository, linked below.
Logo design by M. Powell
I designed and first authored ACTIVA (Automated Cell-Type-informed Introspective Variational Autoencoder), a novel framework for generating realistic synthetic data using a single-stream adversarial variational autoencoder conditioned with cell-type information. ACTIVA achieves state-of-the-art performance in generating and augmenting single-cell RNA sequencing data while training up to 17 times faster than the GAN-based model (scGAN).
Here is the abstract for our manuscript ACTIVA: realistic single-cell RNA-seq generation with automatic cell-type identification using introspective variational autoencoders:
Single-cell RNA sequencing (scRNAseq) technologies allow for measurements of gene expression at a single-cell resolution. This provides researchers with a tremendous advantage for detecting heterogeneity, delineating cellular maps, or identifying rare subpopulations. However, a critical challenge remains: the low number of single-cell observations due to cost limitations or the rarity of the subpopulation. This absence of sufficient data may cause inaccuracy or irreproducibility of downstream analysis. In this work, we present ACTIVA (Automated Cell-Type-informed Introspective Variational Autoencoder): a novel framework for generating realistic synthetic data using a single-stream adversarial variational autoencoder conditioned with cell-type information. We train and evaluate ACTIVA, and competing models, on multiple public scRNAseq datasets. Under the same conditions, ACTIVA trains up to 17 times faster than the GAN-based state-of-the-art model, scGAN (2.2 hours compared to 39.5 hours on Brain Small) while performing better or comparably in our quantitative and qualitative evaluations. We show that augmenting rare populations with ACTIVA significantly increases the classification accuracy of the rare population (more than 45% improvement in our rarest test case). Data generation and augmentation with ACTIVA can enhance scRNAseq pipelines and analysis, such as benchmarking new algorithms, studying the accuracy of classifiers, and detecting marker genes. ACTIVA will facilitate the analysis of smaller datasets, potentially reducing the number of patients and animals necessary in initial studies.
I developed SoftAdapt, a family of techniques for adaptive loss weighting of neural networks with multi-part loss functions and multi-task neural networks. This package was tested and deployed during my internship at the U.S. Air Force Research Laboratory (AFRL), and our novel algorithm, SoftAdapt, is currently under review. Please contact me directly to request a preprint of SoftAdapt.
Here is the abstract of our paper (first author) SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions
Adaptive loss function formulation is an active area of research and has gained a great deal of popularity in recent years, following the success of deep learning. However, existing frameworks of adaptive loss functions often suffer from slow convergence and poor choice of weights for the loss components. Traditionally, the elements of a multi-part loss function are weighted equally or their weights are determined through heuristic approaches that yield near-optimal (or sub-optimal) results. To address this problem, we propose a family of methods, called SoftAdapt, that dynamically change function weights for multi-part loss functions based on live performance statistics of the component losses. SoftAdapt is mathematically intuitive, computationally efficient and straightforward to implement. In this paper, we present the mathematical formulation and pseudocode for SoftAdapt, along with results from applying our methods to image reconstruction (Sparse Autoencoders) and synthetic data generation (Introspective Variational Autoencoders). Our implementation of SoftAdapt is available on the authors' GitHub.
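The core idea can be sketched in a few lines of NumPy. This is a simplified illustration, not the published algorithm verbatim: the finite-difference slope estimate, the normalization, and the choice of `beta` are assumptions made for clarity. Each loss component's recent rate of change is pushed through a softmax, so components that are improving more slowly (or worsening) receive larger weights when `beta > 0`.

```python
import numpy as np

def softadapt_weights(loss_histories, beta=0.1, eps=1e-8):
    """Compute per-component loss weights from recent loss values.

    loss_histories: array-like of shape (k, n) -- the n most recent
    values for each of the k loss components.
    Returns a length-k weight vector that sums to (approximately) 1.
    """
    losses = np.asarray(loss_histories, dtype=float)
    # Approximate each component's rate of change with a finite difference
    slopes = losses[:, -1] - losses[:, -2]
    # Normalize slopes so the weighting is insensitive to loss scale
    slopes = slopes / (np.sum(np.abs(slopes)) + eps)
    # Softmax (shifted for numerical stability): with beta > 0, the
    # slower-improving component gets the larger weight
    exp_s = np.exp(beta * (slopes - np.max(slopes)))
    return exp_s / (np.sum(exp_s) + eps)

# Example: component 1 is stagnating while component 2 is still
# improving, so component 1 receives the larger weight.
hist = [[1.00, 0.99], [1.00, 0.80]]
w = softadapt_weights(hist, beta=5.0)
```

In training, the resulting weights would multiply the loss components at each step (or every few steps), replacing hand-tuned static weights.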
Single Image Super Resolution
During my internship at the U.S. Air Force Research Laboratory, I worked on Single Image Super-Resolution (SISR) using a novel method that takes advantage of Introspective Variational Autoencoders.
Here is the abstract of our paper SRVAE: Super-Resolution Using Variational Autoencoders (2020):
The emergence of Generative Adversarial Network (GAN)-based single-image super-resolution (SISR) has allowed for finer textures in the super-resolved images, thus making them seem realistic to humans. However, GAN-based models may depend on extensive high-quality data and are known to be very costly and unstable to train. On the other hand, Variational Autoencoders (VAEs) have inherent mathematical properties, and they are relatively cheap and stable to train; but VAEs produce blurry images that prevent them from being used for super-resolution. In this paper, we propose a first-of-its-kind SISR method that takes advantage of a self-evaluating Variational Autoencoder (IntroVAE). Our network, called SRVAE, judges the quality of generated high-resolution (HR) images against the target images in an adversarial manner, which allows for high perceptual image generation. First, the encoder and the decoder of our IntroVAE-based method learn the manifold of HR images. In parallel, another encoder and decoder simultaneously learn the reconstruction of the low-resolution (LR) images. Next, reconstructed LR images are fed to the encoder of the HR network to learn a mapping from LR images to the corresponding HR versions. Using the encoder as a discriminator allows SRVAE to be a fast single-stream framework that performs super-resolution through generating photo-realistic images. Moreover, SRVAE has the same training stability and “nice” latent manifold structure as VAEs, while playing a max-min adversarial game between the generator and the encoder like GANs. Our experiments show that our super-resolved images are comparable to those of state-of-the-art GAN-based super-resolution methods.
Predicting Breast Cancer Patients' Response
As part of my National Science Foundation Research Traineeship (NRT) fellowship, I worked within an interdisciplinary team of graduate students on developing machine learning models for predicting breast cancer patients' responsiveness to treatment from very limited data. Our group conducted this research under the guidance and mentorship of Dr. James Ben Brown (UC Berkeley + LBNL) and Dr. Petrus Zwart (LBNL). Although many machine learning models have aimed at predicting cancer, very few have investigated machine learning for cancer prognosis, and even fewer have used patients' mutation profiles alone. Our aim is to build machine learning models that are not only highly accurate but also interpretable, so that the learned information can be transferred to other cancer types. Despite our progress, we are far from finished and our work remains unpublished; however, I have compiled an informal report of the preliminary results, which is linked below. Please note that we are working on publishing this work, and all of our code and results remain our intellectual property. Please feel free to contact me with any questions or concerns about our methods or results.
VAEs and GANs: The Mathematics
A tutorial-like summary of the mathematics and the theoretical aspects of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).
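For reference, the two central objectives such a tutorial builds on are the VAE evidence lower bound (ELBO) and the GAN minimax game (standard formulations, stated here for orientation):

```latex
% VAE: maximize the evidence lower bound (ELBO)
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)

% GAN: minimax game between generator G and discriminator D
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[ \log D(x) \right]
  + \mathbb{E}_{z \sim p(z)}\!\left[ \log\!\left( 1 - D(G(z)) \right) \right]
```

The first term of the ELBO rewards faithful reconstruction, while the KL term regularizes the latent space toward the prior; the GAN objective instead pits a generator against a discriminator with no explicit likelihood.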
Useful Datasets and Containers
Below are some well-known computer vision datasets that may be hard to find for direct download. The links below point directly to the compressed data. Please be sure to cite the original authors/organizations.
Below is a reduced-quality (low-resolution) version of CelebA. The LR images were produced by me for our super-resolution framework, SRVAE. The photos are in .png format with some white padding.
A simple PyTorch container with GPU (CUDA) support.