Adaptive Learning Integration
This is the page for the Adaptive Learning Integration (A.L.I.) package, which implements our novel algorithm SoftAdapt. SoftAdapt is a family of techniques for adaptive loss weighting in neural networks with multi-part loss functions and in multi-task neural networks. This package was tested and deployed during my internship at the U.S. Air Force Research Laboratory (AFRL), and SoftAdapt is currently under review for the 2020 Conference on Computer Vision and Pattern Recognition (CVPR). Please contact me directly to request a preprint of SoftAdapt.
Here is the abstract of our paper, SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions:
Adaptive loss function formulation is an active area of research and has gained a great deal of popularity in recent years, following the success of deep learning. However, existing frameworks of adaptive loss functions often suffer from slow convergence and poor choice of weights for the loss components. Traditionally, the elements of a multi-part loss function are weighted equally, or their weights are determined through heuristic approaches that yield near-optimal (or sub-optimal) results. To address this problem, we propose a family of methods, called SoftAdapt, that dynamically change function weights for multi-part loss functions based on live performance statistics of the component losses. SoftAdapt is mathematically intuitive, computationally efficient, and straightforward to implement. In this paper, we present the mathematical formulation and pseudocode for SoftAdapt, along with results from applying our methods to image reconstruction (Sparse Autoencoders) and synthetic data generation (Introspective Variational Autoencoders). Our implementation of SoftAdapt is available on the authors' GitHub.
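The full formulation is in the paper under review; as a rough sketch of the idea only (the function name, the `beta` parameter, and the finite-difference rate estimate below are my illustrative assumptions, not the paper's notation), component weights can be drawn from a softmax over each loss's recent rate of change, so that slowly improving components receive more attention:

```python
import numpy as np

def softadapt_weights(loss_history, beta=0.1, eps=1e-8):
    """Hypothetical sketch: weight each loss component via a softmax
    over its recent rate of change, so components that are improving
    slowly (or worsening) receive larger weights.

    loss_history -- array of shape (k, n): the n most recent values
    of each of the k component losses.
    """
    # Finite-difference estimate of each component's rate of change
    rates = np.array([np.mean(np.diff(h)) for h in loss_history])
    # Softmax over scaled rates; subtracting the max keeps exp() stable
    scaled = beta * (rates - rates.max())
    w = np.exp(scaled)
    return w / (w.sum() + eps)

# The combined loss at each step would then be sum(w[i] * loss[i]).
```

Here `beta` would control how sharply the weights react to differences in convergence rates between the component losses.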
Single Image Super Resolution
During my internship at the U.S. Air Force Research Laboratory, I also worked on Single Image Super-Resolution (SISR) using a novel method that takes advantage of Introspective Variational Autoencoders.
Here is the abstract of our paper SRVAE: Super-Resolution Using Variational Autoencoders (accepted and in-press for SPIE 2020):
The emergence of Generative Adversarial Network (GAN)-based single-image super-resolution (SISR) has allowed for finer textures in the super-resolved images, thus making them seem realistic to humans. However, GAN-based models may depend on very large amounts of high-quality data and are known to be very costly and unstable to train. On the other hand, Variational Autoencoders (VAEs) have intuitive mathematical properties and are relatively cheap and stable to train, but VAEs produce blurry images that prevent them from being used for super-resolution. In this paper, we propose a novel, first-of-its-kind SISR method that takes advantage of a self-evaluating VAE (IntroVAE) to judge the quality of the generated high-resolution (HR) images against the target images in an adversarial manner, which allows for high perceptual image generation. First, the encoder and the decoder of the IntroVAE learn the manifold of HR images, while the encoder and decoder simultaneously learn the reconstruction of the low-resolution (LR) images. Second, LR images are fed to the generator, and its output is fed to the encoder to compare the encoded LR to the encoded HR. This allows SRVAE to be a fast single-stream framework that generates photo-realistic images without requiring an additional discriminator. SRVAE thus retains the "nice" latent manifold structure and stable training of VAEs while playing a max-min adversarial game between the generator and the encoder, like GANs. Our experiments show that our super-resolved images are comparable to those of state-of-the-art GAN-based super-resolution methods.
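As a toy illustration of the single-stream comparison described above (every function below is a made-up placeholder, not the actual SRVAE architecture): the generator super-resolves an LR batch, and the encoder's codes for that output are compared against the codes of the target HR images.

```python
import numpy as np

# Placeholder stand-ins for the trained networks (not the real SRVAE models).
def encoder(images):
    # Toy "latent code": the mean intensity of each image in the batch
    return images.mean(axis=(1, 2))

def generator(lr_images):
    # Toy 2x upsampler: nearest-neighbor repetition along both image axes
    return lr_images.repeat(2, axis=1).repeat(2, axis=2)

def latent_matching_loss(lr_batch, hr_batch):
    """Encode the super-resolved LR images and the target HR images,
    then compare the two sets of latent codes."""
    z_sr = encoder(generator(lr_batch))
    z_hr = encoder(hr_batch)
    return float(np.mean((z_sr - z_hr) ** 2))
```

The real framework would train the encoder adversarially against the generator on such a comparison, rather than using a fixed distance.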
Predicting Breast Cancer Patients' Response
As part of my National Science Foundation Research Traineeship (NRT) fellowship, I worked on developing machine learning models to predict breast cancer patients' responsiveness to treatment from very limited data. On this project, I have been collaborating with an interdisciplinary team of graduate students and our advisors, Dr. James Ben Brown (University of Birmingham, UK; UC Berkeley; LBNL) and Dr. Petrus Zwart (LBNL). Although many machine learning models have aimed at predicting cancer, very few have applied machine learning to cancer prognosis, and even fewer have used only the patients' mutation profiles. Our aim is to build machine learning models that are not only highly accurate but also interpretable, so that we can transfer the learned information to other cancer types. Despite our progress, we are far from finished and our work remains unpublished; I have compiled an informal report of the preliminary results, which is linked below. Please note that we are working on publishing this work, and all of our code and results remain our intellectual property. Please feel free to contact me with any questions or concerns about our methods or results.
VAEs and GANs: The Mathematics
A tutorial-like summary of the mathematics and theoretical aspects of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).
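For reference, the two core objectives such a summary covers are the VAE evidence lower bound (ELBO) and the GAN minimax game:

```latex
% VAE evidence lower bound (ELBO): maximized over encoder q_phi and decoder p_theta
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)

% GAN objective: a minimax game between generator G and discriminator D
\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
\;+\; \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```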
Useful Datasets and Containers
Below are some well-known computer vision datasets that may be hard to find for direct download. The links below point directly to the compressed data. Please be sure to cite the original authors/organizations.
Below is a reduced-quality (low-resolution) version of CelebA. I produced the LR images for our super-resolution framework SRVAE. The photos are in .png format with some white padding.
CelebA (Low Resolution)
A simple PyTorch container with GPU (CUDA) support.