Sharad Chitlangia

I am a senior-year undergraduate student at BITS Pilani Goa, where I study Electronics and am specializing in Artificial Intelligence.

In particular, I'm interested in Real World Machine Learning.

I have been extremely fortunate to be advised by established researchers, including Professor Vijay Janapa Reddi on Machine Learning and Systems, Professor Ashwin Srinivasan on using methods from Causal Inference to make Neural Networks interpretable, and Professor Ding Zhao on Safe RL and Perception Planning for Autonomous Vehicles.

I was the President of the Society for Artificial Intelligence and Deep Learning. I am also a part of the APP Centre for AI Research, where I regularly collaborate on projects, assist with courses, and work on initiatives to improve exposure to AI research.

CV/Resume (Shoot me an email) / Resources / Notes / Blog / Feedback

profile photo

  • B.E. in Electronics and Instrumentation, 2021 (expected)

    BITS Pilani

  • AI Summer School 2020

    Google Research

  • Deep Learning & Reinforcement Learning Summer School 2020

    Montreal Institute of Learning Algorithms

Widening Access to Applied Machine Learning

Under Review

Vijay Janapa Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Anant Agarwal, Colby Banbury, Massimo Banzi, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Rupert Jaeger, Srivatsan Krishnan, Daniel Leiker, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart, Dustin Tingley

What went into creating a massive tinyML community comprising leading academics and industry practitioners working at the intersection of Machine Learning and Systems? A whitepaper on the tinyML edX Professional Certificate course and the much broader tinyMLx community, which garnered over 35,000 learners from across the world in less than six months.

PDF (Available Soon) Website BibTeX
ActorQ: Quantization for Actor-Learner Distributed Reinforcement Learning

Hardware Aware Efficient Training Workshop at ICLR, 2021

Maximilian Lam, Sharad Chitlangia, Srivatsan Krishnan, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, Vijay Janapa Reddi

Speeding up reinforcement learning training is not straightforward because of the continuous environment-interaction loop. We demonstrate that by running parallel actors at lower precision and the learner at full precision, training can be sped up by 1.5-2.5x without any loss in performance (and at times with better performance, due to the noise induced by quantization)!

PDF Code BibTeX
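The actor-side low-precision idea can be illustrated with simple uniform post-training quantization of policy weights. This is a minimal sketch of the general technique, not the paper's actual implementation:

```python
import numpy as np

def quantize(w, num_bits=8):
    """Uniformly quantize a float array to num_bits unsigned integers."""
    qmax = 2 ** num_bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover an approximate float array from the quantized one."""
    return q.astype(np.float32) * scale + lo

# Quantize a hypothetical policy weight matrix before shipping it to an actor.
weights = np.random.randn(256, 64).astype(np.float32)
q, scale, lo = quantize(weights)
error = np.abs(dequantize(q, scale, lo) - weights).max()
# Round-to-nearest bounds the reconstruction error by half a quantization step.
assert error <= scale / 2 + 1e-6
```

In a distributed setting, the learner would broadcast these 8-bit buffers to the actors, cutting weight-transfer bandwidth roughly 4x versus float32, while the small rounding noise acts much like the exploration-inducing noise described above.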
Improving Perception via Sensor Placement: Designing Multi-LiDAR Systems for Autonomous Vehicles

Under Review. (Available Soon!)

Sharad Chitlangia, Zuxin Liu, Akhil Agnihotri, Ding Zhao

A surrogate cost function is proposed to optimize the placement of LiDAR sensors so as to improve 3D object detection performance. We validate our approach by building a data collection framework in a realistic open-source autonomous vehicle simulator.

Incorporating Domain Knowledge into Neural Networks

Under Review at IJCAI

Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan

For the advancement of AI for Science, it's important that we can tell machine learning models what we know about a particular problem domain in a concise form. We survey current techniques for incorporating domain knowledge into neural networks, list the major open problems, and discuss how incorporating domain knowledge can help from the perspective of explainability, ethics, etc.

Psychological and Neural Evidence for Reinforcement Learning

Under Review at Elsevier Neural Networks

Ajay Subramanian, Sharad Chitlangia, Veeky Baths

We review a number of findings from the neuroscience and psychology literature that establish evidence for key elements of the RL problem, and how these elements are represented in regions of the brain.

Quantized Reinforcement Learning (QuaRL)

MLSys ReCoML workshop, 2020
Under Review

Srivatsan Krishnan, Sharad Chitlangia, Maximilian Lam, Zishen Wan, Aleksandra Faust, Vijay Janapa Reddi

Does quantization work for reinforcement learning? We discuss the benefits of applying quantization to RL. Motivated by early results on post-training quantization (PTQ), we introduce the ActorQ algorithm and show how quantization can be used in the actor-learner distributed setting for speedups of up to 5x!

Arxiv Code BibTeX Poster
Work Experience
SafeAI Lab, Carnegie Mellon University

Remote Research Intern

Working on optimal LiDAR placement and Safe RL.

India Machine Learning Group, Amazon Research

Research Engineering Intern

Worked on Query Disambiguation and Intent Mining from Search Queries for improving Catalog Quality.

Microsoft Research

Independent Research Developer

VowpalWabbit is known for its ability to solve complex machine learning problems extremely fast. Through this project, we aim to take this ability even further by introducing FlatBuffers, an efficient cross-platform serialization library known for its memory-access efficiency and speed. We develop FlatBuffers schemas for input examples so that they can be stored as binary buffers, and show a performance increase of up to 30% compared to traditional formats.
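A FlatBuffers schema of the kind described above might look like the following. This is a hypothetical sketch for illustration; the actual schema used in VowpalWabbit differs:

```
// Hypothetical schema for storing labeled, namespaced feature examples
// as binary buffers (compiled with flatc to generate accessor code).
namespace ExampleSchema;

table Feature {
  name: string;
  value: float;
}

table Namespace {
  name: string;
  features: [Feature];
}

table Example {
  label: float;
  namespaces: [Namespace];
}

root_type Example;
```

Because FlatBuffers reads fields directly out of the serialized buffer without a parsing step, loading examples stored this way avoids the text-parsing cost of traditional input formats.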

Multimodal Digital Media Analysis Lab - MIDAS, IIIT Delhi

Remote Research Development Intern

Working on the use of online Bayesian reinforcement learning for meta-learning, and on automatic text simplification.

Anuradha and Prashanth Palakurthi Centre for Artificial Intelligence Research (APPCAIR)

Undergraduate Researcher

Work on interpretable and explainable AI for Deep Relational Machines using causal machine learning. Showed that features with high causal attribution preserve learning, and that, when traced back to learned rules, they cover more example cases than other features. Part of the TCS-Datalab.

Edge Computing Lab, Harvard University, Cambridge

Research Intern

Worked at the intersection of deep reinforcement learning and energy efficiency for drones, with extensive use of TensorFlow and TFLite. Performed over 350 experiments showing the effects of quantization in RL, and that quantization during training is a better regularizer than traditional techniques, enabling higher exploration and generalization.

Google, CERN-HSF

Google Summer of Code Intern

Particle track reconstruction using machine learning. Ported top solutions from the TrackML challenge to the ACTS framework. Added an end-to-end example of running a PyTorch model in ACTS using PyTorch's C++ frontend, libtorch, in a thread-safe fashion, enabling rapid testing of models and massively parallel processing. Also did some testing with GNNs.

In the final product, it was possible to run a simulation producing more than 10,000 particles and 100K trajectories and perform reconstruction with over 93% accuracy in less than 10 seconds.

Machine Learning Intern

Revamped the existing information retrieval system to focus more on distributional semantics. Developed embeddings with ELMo, a deep learning model that captures semantic, syntactic, and contextual information. Trained and deployed stance detection models (ESIM).


I've worked in various sub-fields of AI, ranging from computer vision to speech synthesis, and particle physics to reinforcement learning. Please take a look at my projects below to learn more.


A PyTorch reinforcement learning library centered around reproducible and generalizable algorithm implementations.

Real time interfacing with Spiking Neural Networks

With the third generation of neural networks, i.e., spiking neural networks, coming up, we explore the real-time interfacing capabilities of conductance-based neurons in SpineCreator, an up-and-coming software package.

Neural Voice Cloning with Few Samples

Implementation of Neural Voice Cloning with Few Samples Paper by Baidu Research.

Audios Code
Autonomous Drone Navigation using Deep Reinforcement Learning

A drone performing imitation learning on the IDSIA dataset, with a ResNet for image classification.

Particle Track Reconstruction using Machine Learning

Reconstruction of particle tracks using ML. Explored several models: random forests, XGBoost, feedforward neural networks for pair classification, and graph neural networks for edge classification.

Code Project Report
Epileptic Seizure Detection using Deep Learning

An open source implementation of ChronoNet.

Code Project Report
Image Segmentation for Pneumonia Detection

Research project in unofficial collaboration with TCS Research: applying state-of-the-art models for pneumonia detection to the RSNA Pneumonia Detection dataset. Tested InceptionNet-v3 and DenseNet121 (83.8% and 77.9% classification accuracy, respectively) and explored the applicability of Mask R-CNN to the dataset.


Original Template

This page has been accessed at least several times since 24th Nov 2020.