ACTIVA (Automated Cell-Type-informed Introspective Variational Autoencoder) is a novel framework for generating realistic synthetic data using a single-stream adversarial variational autoencoder conditioned with cell-type information. ACTIVA achieves state-of-the-art performance in generating and augmenting single-cell RNA sequencing data while training up to 17 times faster than the GAN-based model (scGAN). 

Here is the abstract of our manuscript, "ACTIVA: realistic single-cell RNA-seq generation with automatic cell-type identification using introspective variational autoencoders":

Single-cell RNA sequencing (scRNAseq) technologies allow for measurements of gene expression at single-cell resolution. This provides researchers with a tremendous advantage for detecting heterogeneity, delineating cellular maps, and identifying rare subpopulations. However, a critical challenge remains: the low number of single-cell observations, due to cost limitations or the rarity of the subpopulation. This scarcity of data may cause inaccurate or irreproducible downstream analysis. In this work, we present ACTIVA (Automated Cell-Type-informed Introspective Variational Autoencoder): a novel framework for generating realistic synthetic data using a single-stream adversarial variational autoencoder conditioned with cell-type information. We train and evaluate ACTIVA, and competing models, on multiple public scRNAseq datasets. Under the same conditions, ACTIVA trains up to 17 times faster than the GAN-based state-of-the-art model, scGAN (2.2 hours compared to 39.5 hours on Brain Small), while performing better than or comparably to it in our quantitative and qualitative evaluations. We show that augmenting rare populations with ACTIVA significantly increases the classification accuracy of the rare population (more than 45% improvement in our rarest test case). Data generation and augmentation with ACTIVA can enhance scRNAseq pipelines and analyses, such as benchmarking new algorithms, studying the accuracy of classifiers, and detecting marker genes. ACTIVA will facilitate the analysis of smaller datasets, potentially reducing the number of patients and animals necessary in initial studies.
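As a concrete illustration of the cell-type conditioning, here is a minimal NumPy sketch of drawing a latent sample conditioned on a cell-type label. The function names and the one-hot concatenation scheme are illustrative assumptions, not ACTIVA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(cell_type, n_types):
    """Encode an integer cell-type label as a one-hot vector."""
    v = np.zeros(n_types)
    v[cell_type] = 1.0
    return v

def sample_conditioned_latent(latent_dim, cell_type, n_types):
    """Draw z ~ N(0, I) and append a one-hot cell-type code.

    A common way to condition a generator on a label (assumed here for
    illustration) is to concatenate the label encoding with the latent
    sample before it is passed to the decoder.
    """
    z = rng.standard_normal(latent_dim)
    return np.concatenate([z, one_hot(cell_type, n_types)])

zc = sample_conditioned_latent(latent_dim=32, cell_type=2, n_types=5)
# zc has shape (37,): 32 latent dims followed by a 5-way cell-type code
```

The decoder then sees the cell-type code alongside the noise, so sampling with a fixed label generates cells of that type.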



SoftAdapt is a family of techniques for adaptive loss weighting in neural networks with multi-part loss functions and multi-task neural nets. This package was tested and deployed during my internship at the U.S. Air Force Research Laboratory (AFRL), and our novel algorithm SoftAdapt is currently under review. Please contact me directly to request a preprint of SoftAdapt.


Here is the abstract of our paper, "SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions":

Adaptive loss function formulation is an active area of research and has gained a great deal of popularity in recent years, following the success of deep learning. However, existing frameworks of adaptive loss functions often suffer from slow convergence and poor choice of weights for the loss components. Traditionally, the elements of a multi-part loss function are weighted equally, or their weights are determined through heuristic approaches that yield near-optimal (or sub-optimal) results. To address this problem, we propose a family of methods, called SoftAdapt, that dynamically change function weights for multi-part loss functions based on live performance statistics of the component losses. SoftAdapt is mathematically intuitive, computationally efficient, and straightforward to implement. In this paper, we present the mathematical formulation and pseudocode for SoftAdapt, along with results from applying our methods to image reconstruction (Sparse Autoencoders) and synthetic data generation (Introspective Variational Autoencoders). Our implementation of SoftAdapt is available on the authors' GitHub.
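As a rough illustration of the idea, the sketch below weights loss components by a softmax over the recent rate of change of each component, so components that are improving slowly (or worsening) receive larger weights. This is a simplification for illustration only; the paper's exact formulation and its variants may differ:

```python
import numpy as np

def softadapt_weights(loss_history, beta=1.0):
    """Compute per-component weights for a multi-part loss.

    loss_history: array of shape (n_components, n_steps) holding the most
    recent loss values for each component (n_steps >= 2).
    Returns weights that sum to 1, computed as a softmax over the
    beta-scaled finite-difference slope of each component loss.
    """
    loss_history = np.asarray(loss_history, dtype=float)
    # Approximate each component's rate of change with a finite difference.
    slopes = loss_history[:, -1] - loss_history[:, -2]
    # Numerically stable softmax over the scaled slopes.
    z = beta * (slopes - slopes.max())
    e = np.exp(z)
    return e / e.sum()

# Example: component 0 is worsening, component 1 is improving,
# so component 0 gets the larger weight.
history = [[1.0, 1.2],   # slope +0.2
           [1.0, 0.7]]   # slope -0.3
w = softadapt_weights(history, beta=1.0)
```

In training, the weighted total loss would then be `w[0] * loss_0 + w[1] * loss_1`, with the weights refreshed every few iterations from the running loss statistics.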

Single Image Super Resolution

During my internship at the U.S. Air Force Research Laboratory, I also worked on Single Image Super-Resolution (SISR) using a novel method that takes advantage of Introspective Variational Autoencoders.


Here is the abstract of our paper SRVAE: Super-Resolution Using Variational Autoencoders (2020):

The emergence of Generative Adversarial Network (GAN)-based single-image super-resolution (SISR) has allowed for finer textures in super-resolved images, making them seem realistic to humans. However, GAN-based models may depend on extensive high-quality data and are known to be costly and unstable to train. On the other hand, Variational Autoencoders (VAEs) have desirable mathematical properties and are relatively cheap and stable to train, but VAEs produce blurry images that prevent them from being used for super-resolution. In this paper, we propose a first-of-its-kind SISR method that takes advantage of a self-evaluating Variational Autoencoder (IntroVAE). Our network, called SRVAE, judges the quality of generated high-resolution (HR) images against the target images in an adversarial manner, which allows for high perceptual image generation. First, the encoder and the decoder of our IntroVAE-based method learn the manifold of HR images. In parallel, another encoder and decoder simultaneously learn the reconstruction of the low-resolution (LR) images. Next, reconstructed LR images are fed to the encoder of the HR network to learn a mapping from LR images to their corresponding HR versions. Using the encoder as a discriminator allows SRVAE to be a fast single-stream framework that performs super-resolution by generating photo-realistic images. Moreover, SRVAE has the same training stability and "nice" latent manifold structure as VAEs, while playing a max-min adversarial game between the generator and the encoder like GANs. Our experiments show that our super-resolved images are comparable to those of state-of-the-art GAN-based super-resolution methods.
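For reference, the introspective (encoder-as-discriminator) objective that this line of work builds on is, in the original IntroVAE formulation (SRVAE's exact losses may differ), roughly:

```latex
\begin{aligned}
\mathcal{L}_E &= \mathrm{KL}\!\left(q(z\mid x)\,\|\,p(z)\right)
  + \alpha \left[\,m - \mathrm{KL}\!\left(q(z\mid x_r)\,\|\,p(z)\right)\right]^{+}
  + \beta\, \mathcal{L}_{\mathrm{rec}} \\
\mathcal{L}_G &= \alpha\, \mathrm{KL}\!\left(q(z\mid x_r)\,\|\,p(z)\right)
  + \beta\, \mathcal{L}_{\mathrm{rec}}
\end{aligned}
```

where $x_r$ are generated (or reconstructed) samples, $m$ is a margin, $[\cdot]^{+} = \max(0,\cdot)$, and $\alpha, \beta$ balance the adversarial and reconstruction terms. The encoder pushes generated samples' posteriors away from the prior (up to the margin) while the generator pulls them back, which is the max-min game mentioned in the abstract.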


Predicting Breast Cancer Patients' Response 

As a part of my National Science Foundation Research Traineeship (NRT) fellowship, I worked on developing machine learning models that predict breast cancer patients' responsiveness to treatment from very limited data. On this project, I have been collaborating with an interdisciplinary team of graduate students and our advisors, Dr. James Ben Brown (University of Birmingham, UK + UC Berkeley + LBNL) and Dr. Petrus Zwart (LBNL). Although many machine learning models have aimed at predicting cancer, very few have explored machine learning for cancer prognosis, and even fewer have used only the patients' mutation profiles. Our aim is to build machine learning models that are not only highly accurate but also interpretable, so that the learned information can be transferred to other cancer types. Despite our progress, we are far from finished and this work remains unpublished, but I have compiled an informal report of the preliminary results, which is linked below. Please note that we are working on publishing this work, and all of our code and results are still our intellectual property. Please feel free to contact me with any questions or concerns about our methods or results.

VAEs and GANs: The Mathematics

A tutorial-like summary of the mathematics and the theoretical aspects of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).  
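The two objectives at the heart of that summary are standard. A VAE maximizes the evidence lower bound (ELBO), and a GAN plays a minimax game between a generator $G$ and a discriminator $D$:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)

\min_G \max_D \;\; \mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z\sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

The blurriness of VAEs and the instability of GANs, discussed in the projects above, both trace back to these objectives: the ELBO's reconstruction term averages over plausible outputs, while the minimax game has no guaranteed convergence.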

Useful Datasets and Containers

Below are some well-known computer vision datasets that can be hard to find for direct download. The links below point directly to the compressed data. Please be sure to cite the original authors/organizations.





Below is a reduced-quality (low-resolution) version of CelebA. I produced the LR images for our super-resolution framework, SRVAE. The photos are in .png format with some white padding.

CelebA (Low Resolution) 
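For readers who want to regenerate a set like this themselves, here is a minimal NumPy-only sketch of downsampling an image and adding white padding. The pooling factor, padding width, and function name are illustrative assumptions; the actual CelebA-LR preprocessing may have used different tools and settings:

```python
import numpy as np

def make_lr_with_padding(img, factor=4, pad=4, white=255):
    """Downsample an image array by average pooling and add a white border.

    img: (H, W, C) uint8 array with H and W divisible by `factor`.
    Returns a uint8 array of shape (H/factor + 2*pad, W/factor + 2*pad, C).
    """
    h, w, c = img.shape
    # Block-average pooling: group pixels into factor x factor tiles.
    lr = img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
    lr = lr.astype(np.uint8)
    # Place the LR image on a white canvas to create the padding.
    out = np.full((lr.shape[0] + 2 * pad, lr.shape[1] + 2 * pad, c), white,
                  dtype=np.uint8)
    out[pad:-pad, pad:-pad] = lr
    return out

hr = np.zeros((32, 32, 3), dtype=np.uint8)   # dummy all-black "photo"
lr = make_lr_with_padding(hr, factor=4, pad=2)
# lr has shape (12, 12, 3): an 8x8 downsampled core with a white border
```

In practice one would loop this over the CelebA .png files with an image library and save the results, but the pooling-plus-padding step above is the core of it.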


HPC Containers


A simple PyTorch container with GPU (CUDA) support.

PyTorch Container
Keras TF Container