Sharad Chitlangia

I am a Research Scientist at Amazon where I work on Machine Learning and Optimization Algorithms that operate at scale.

Previously, I was an undergraduate at BITS Pilani, Goa, where I studied Electronics and (unofficially) specialized in Artificial Intelligence.

While at BITS, I was also the President of the Society for Artificial Intelligence and Deep Learning and a part of the APP Centre for AI Research, where I regularly collaborated on projects, assisted in courses, and helped incubate AI research.

Scholar / Feedback

  • B.E. in Electronics and Instrumentation, 2021

    BITS Pilani

Multilingual Spoken Words Corpus

NeurIPS 2021

Mark Mazumder, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, Keith Achorn, Daniel Galvez, Mark Sabini, Peter Mattson, David Kanter, Greg Diamos, Pete Warden, Josh Meyer, Vijay Janapa Reddi

Multilingual Spoken Words Corpus is a speech dataset of over 340,000 spoken words in 50 languages, comprising more than 23.7 million examples.

Using Program Synthesis and Inductive Logic Programming to solve Bongard Problems

10th International Workshop on Approaches and Applications of Inductive Programming

Sharad Chitlangia, Atharv Sonwane, Tirtharaj Dash, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan

We use graphical program synthesis to synthesize programs that serve as generative representations of Bongard problems. We show that these representations support learning interpretable discriminative theories with Inductive Logic Programming to solve Bongard problems.

Arxiv Poster BibTeX
Widening Access to Applied Machine Learning with tinyML

Under Review

Vijay Janapa Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Anant Agarwal, Colby Banbury, Massimo Banzi, Matthew Bennett, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Sarah Grafman, Rupert Jaeger, Srivatsan Krishnan, Maximilian Lam, Daniel Leiker, Cara Mann, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart, Dustin Tingley

What went into building a massive tinyML community comprising leading academic and industry figures working at the intersection of Machine Learning and Systems? A whitepaper on the tinyML edX Professional Certificate course and the much broader tinyMLx community, which garnered over 35,000 learners from across the world in less than 6 months.

PDF Website BibTeX
ActorQ: Quantization for Actor-Learner Distributed Reinforcement Learning

Hardware Aware Efficient Training Workshop at ICLR, 2021

Maximilian Lam, Sharad Chitlangia, Srivatsan Krishnan, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, Vijay Janapa Reddi

Speeding up reinforcement learning training is not straightforward because of its continuous environment-interaction loop. We demonstrate that by running parallel actors at lower precision while keeping the learner at full precision, training can be sped up by 1.5-2.5x with no loss in performance (and at times a gain, due to the noise induced by quantization)!
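The core idea can be sketched with simple uniform int8 quantization (an illustrative sketch, not the ActorQ implementation; the helper names here are my own): the learner trains in fp32 and periodically broadcasts quantized weights to the actors.

```python
import numpy as np

def quantize_int8(w):
    """Uniform affine quantization of a float array to int8."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float array from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# The learner keeps fp32 weights; actors receive the int8 version.
w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
assert np.abs(w - w_hat).max() <= 1.5 * s  # error within ~one quantization step
```

In the distributed setting, the quantized weights are cheaper both to communicate and to run inference with on the actors, which is where the speedup comes from.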

PDF Poster Code BibTeX
Improving Perception via Sensor Placement: Designing Multi-LiDAR Systems for Autonomous Vehicles

CVPR, Autonomous Driving: Perception, Prediction and Planning Workshop

Sharad Chitlangia, Zuxin Liu, Akhil Agnihotri, Ding Zhao

A surrogate cost function is proposed to optimize the placement of LiDAR sensors so as to improve 3D object detection performance. We validate our approach by building a data collection framework in a realistic open-source autonomous vehicle simulator.

PDF Talk BibTeX
How to tell Deep Neural Networks what we know

Under Review

Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan

For the advancement of AI for science, it's important that we can tell machine learning models what we know about a particular problem domain in a concise form. We survey current techniques for incorporating domain knowledge into neural networks, list the major open problems, and argue how incorporating domain knowledge can help from the perspectives of explainability, ethics, and more.

Psychological and Neural Evidence for Reinforcement Learning

Elsevier Neural Networks

Ajay Subramanian, Sharad Chitlangia, Veeky Baths

We review findings from the neuroscience and psychology literature that establish evidence for key elements of the RL problem, and how those elements are represented in regions of the brain.

Quantized Reinforcement Learning (QuaRL)

MLSys ReCoML workshop, 2020
Under Review

Srivatsan Krishnan, Sharad Chitlangia, Maximilian Lam, Zishen Wan, Aleksandra Faust, Vijay Janapa Reddi

Does quantization work for reinforcement learning? We discuss the benefits of applying quantization to RL. Motivated by a few results on post-training quantization, we introduce an algorithm, ActorQ, and show how quantization can be used in the actor-learner distributed setting for speedups of up to 5x!

Arxiv Code BibTeX Poster
Work Experience

Research Scientist
Edge Computing Lab, Harvard University

Research Intern

Data Engineering for Zero/Few Shot Multilingual Keyword Spotting

Anuradha and Prashanth Palakurthi Centre for Artificial Intelligence Research (APPCAIR)

Undergraduate Researcher

Inductive Programming and its applications to Bongard Problems

SafeAI Lab, Carnegie Mellon University

Remote Research Intern

Worked on optimal LiDAR placement.

India Machine Learning Group, Amazon Research

Research Engineering Intern

Worked on Query Disambiguation and Intent Mining from Search Queries for improving Catalog Quality.

Microsoft Research

Independent Research Developer

VowpalWabbit is known for its ability to solve complex machine learning problems extremely fast. Through this project, we aim to push this ability even further by introducing FlatBuffers, an efficient cross-platform serialization library known for its memory-access efficiency and speed. We develop FlatBuffer schemas for input examples so that they can be stored as binary buffers, and show a performance increase of up to 30% compared to traditional formats.
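As an illustration of what such a schema might look like (a hypothetical sketch; the table and field names are mine, not the actual VowpalWabbit schema):

```
// Hypothetical FlatBuffers schema for a labelled example.
// Table and field names are illustrative only.
table Feature {
  name: string;
  value: float;
}

table Example {
  label: float;
  features: [Feature];
}

root_type Example;
```

Compiling a schema like this with `flatc` generates accessor code that reads fields directly out of the binary buffer, avoiding a parse step entirely.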

Multimodal Digital Media Analysis Lab - MIDAS, IIIT Delhi

Remote Research Development Intern

Worked on the use of online Bayesian reinforcement learning for meta-learning and automatic text simplification.

Anuradha and Prashanth Palakurthi Centre for Artificial Intelligence Research (APPCAIR)

Undergraduate Researcher

Worked on interpretable and explainable AI for Deep Relational Machines using causal machine learning. Showed that features with high causal attribution preserve learning, and that when these features are traced back to learned rules, the rules cover more examples than others. Part of the TCS-Datalab.

Edge Computing Lab, Harvard University, Cambridge

Research Intern

Worked at the intersection of deep reinforcement learning and energy efficiency for drones, with extensive use of TensorFlow and TFLite. Ran more than 350 experiments showing the effects of quantization in RL, and that quantization during training acts as a better regularizer than traditional techniques, enabling greater exploration and generalization.

Google, CERN-HSF

Google Summer of Code Intern

Particle track reconstruction using machine learning. Ported top solutions from the TrackML challenge to the ACTS framework, in an end-to-end and thread-safe fashion to enable rapid testing of models and massively parallel processing. Also did some testing with GNNs.

In the final product, it was possible to run a simulation producing more than 10,000 particles and 100K trajectories and perform reconstruction with over 93% accuracy in less than 10 seconds. Among other things, I also added an example of running a PyTorch model in ACTS using PyTorch's C++ frontend, libtorch.

Machine Learning Intern

Revamped the existing information retrieval system to focus more on distributional semantics. Developed embeddings with ELMo, a deep learning model that captures semantic, syntactic, and contextual information. Trained and deployed stance detection models based on ESIM.


I've worked in various sub-fields of AI, ranging from computer vision to speech synthesis and from particle physics to reinforcement learning. Please take a look at my projects below to learn more.


A PyTorch reinforcement learning library centered around reproducible and generalizable algorithm implementations.

Real time interfacing with Spiking Neural Networks

With the third generation of neural networks, Spiking Neural Networks, on the rise, we explore real-time interfacing with conductance-based neurons in SpineCreator, an up-and-coming software package.

Neural Voice Cloning with Few Samples

Implementation of Neural Voice Cloning with Few Samples Paper by Baidu Research.

Audios Code
Autonomous Drone Navigation using Deep Reinforcement Learning

A drone performing imitation learning on the IDSIA dataset, using a ResNet for image classification.

Particle Track Reconstruction using Machine Learning

Reconstruction of particle tracks using ML. Explored random forests, XGBoost, and feedforward neural networks for pair classification, as well as graph neural networks for edge classification.

Code Project Report
Epileptic Seizure Detection using Deep Learning

An open source implementation of ChronoNet.

Code Project Report
Image Segmentation for Pneumonia Detection

Research project in unofficial collaboration with TCS Research. Applied state-of-the-art models for pneumonia detection to the RSNA pneumonia detection dataset. Tested InceptionNet-v3 and DenseNet121, which achieved 83.8% and 77.9% classification accuracy respectively, and explored the applicability of Mask R-CNN to the dataset.


Original Template

This page has been accessed at least several times since 24th Nov 2020.