Sharad Chitlangia

I am a Founding Scientist at Kily.ai where I lead the AI charter. We're building AI for brands and vendors to optimize and grow their sales across online channels (including e-commerce and q-commerce platforms). My work primarily focuses on research and development of small language models, online learning, reinforcement learning, and multi-agent LLM systems.

Previously, I was an Applied Scientist at Amazon, where I led ML initiatives for traffic quality and supply quality programs, pioneering work on scaling GPT-style models for user action prediction and on web content and data quality assessment for ad adjacency optimization.


News

  • 2025 Our paper Semi-Supervised Graph Learning for Low-Quality Publisher Detection in Programmatic Advertising was accepted for an oral presentation at the Graph Learning Workshop at Amazon Machine Learning Conference 2025 (top 15% of submissions).
  • 2025 Our paper Made for Advertising Website Detection using Self-Supervised Learning was accepted for a poster presentation at Amazon Machine Learning Conference 2025 (~20% acceptance rate).
  • Aug 2023 Our paper on Learning explainable network request signatures for robot detection was accepted for an oral presentation at the 10th edition of Amazon Machine Learning Conference. Read more →
  • Jun 2023 Our paper on Scaling Generative Pre-training on User Ad Activity Sequences was accepted at AdKDD, held in conjunction with KDD. Read more →
  • Jun 2023 Our paper on explainable network request signatures for robot detection was accepted at the AI for Cyber Security Workshop at KDD.
  • Feb 2023 We published a blog on our paper Real-time detection of robotic traffic in online advertising at AAAI. Read the blog →
  • Nov 2022 Our paper on Real-time detection of robotic traffic in online advertising was accepted in the Innovative Applications of AI track at AAAI 2023.
  • Oct 2022 Our paper on Self Supervised Pre-training for Large Scale Tabular Data was accepted at the Table Representation Learning Workshop at NeurIPS 2022.
  • Sep 2022 Check out the Google AI blog describing our work QuaRL: Quantization for Fast and Environmentally Sustainable Reinforcement Learning.
  • Jun 2022 Our paper QuaRL: Quantization for Fast and Environmentally Sustainable Reinforcement Learning was accepted at TMLR. Read the paper →
  • Mar 2022 Our paper on Investigating the Impact of Multi-LiDAR Placement on Object Detection for Autonomous Driving was accepted at CVPR 2022.
  • Dec 2021 Our paper A Review of Some Techniques for Inclusion of Domain-Knowledge into Deep Neural Networks was accepted to Nature Scientific Reports.
  • Dec 2021 Multilingual Spoken Words Corpus was covered in Harvard News.
  • Oct 2021 Our paper on Multilingual Spoken Words was accepted at NeurIPS 2021.
  • Sep 2021 Big Move! Joined Amazon as a Research Scientist.
  • Aug 2021 Our review paper on Reinforcement Learning and its connections with Neuroscience and Psychology was accepted in Elsevier's Neural Networks Journal.
  • Mar 2021 Our work ActorQ: Quantization for Actor-Learner Distributed RL was accepted at Hardware Aware Efficient Training Workshop, ICLR 2021.
  • Jan 2021 I'll be a Teaching Assistant for a graduate-level course on Meta Learning.
  • Dec 2020 Watch my presentation to Microsoft Research on "Pushing the limits of VowpalWabbit with Flatbuffers".
  • Nov 2020 Launch of the TinyML Courses! I'm contributing to Course 3: Deploying TinyML.
  • Sep 2020 Featured on Microsoft Research's website.
  • Aug 2020 Featured on my university's website for receiving USD 10,000 in funding as Principal Investigator of a project in the RL Open Source Fest.
  • Jul 2020 Organised a Summer Symposium on AI Research. Over 3000 registrations!
  • Mar 2020 Selected to work as an Independent Developer as part of the RL Open Source Fest at Microsoft Research NYC.
  • Jan 2020 Our paper Quantized Reinforcement Learning was accepted at the workshop on Resource Constrained ML at MLSys.

Research

Scaling GPT

Scaling Generative Pre-training for User Ad Activity Sequences

17th AdKDD (KDD 2023 Workshop)

Sharad Chitlangia, Krishna Reddy Kesari, Rajat Agarwal

We demonstrate scaling properties across model size, data, and compute for generative pre-training on user ad activity sequences. Larger models show better downstream performance on response prediction and bot detection.
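As a rough illustration (not the paper's exact setup), the sketch below fits a power law to hypothetical (parameter count, validation loss) points in log-log space, which is the usual way such scaling trends are summarized; all numbers are made up.

```python
import numpy as np

# Hypothetical (parameter count, validation loss) points; the real curves are in the paper.
params = np.array([1e6, 1e7, 1e8, 1e9])
loss = np.array([3.2, 2.7, 2.3, 2.0])

# Fit loss ~= a * params**(-b) by linear regression in log-log space.
slope, log_a = np.polyfit(np.log(params), np.log(loss), 1)
print(f"power-law exponent b = {-slope:.3f}, prefactor a = {np.exp(log_a):.2f}")
```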

Network Signatures

Learning Explainable Network Request Signatures for Robot Detection

KDD 2023 Workshop on AI-Enabled Cybersecurity Analytics

Rajat Agarwal, Sharad Chitlangia, Anand Muralidhar, Adithya Niranjan, Abheesht Sharma, Koustav Sadhukan, Suraj Sheth

We introduce a three-tiered framework for learning and generating network request signatures that explain black-box robot detection decisions.

Bot Detection

Real-time Detection of Robotic Traffic in Online Advertising

Innovative Applications of AI, AAAI 2023

Anand Muralidhar, Sharad Chitlangia, Rajat Agarwal, Muneeb Ahmed

An approach for detecting bot traffic in real time that protects advertisers from online fraud, with an optimization framework that maintains performance across business slices.

SSL Tabular

Self Supervised Pre-training for Large Scale Tabular Data

NeurIPS 2022 Workshop on Table Representation Learning

Sharad Chitlangia, Anand Muralidhar, Rajat Agarwal

A method for self-supervised learning on large-scale tabular data, showing efficacy on bot detection with very high-cardinality categorical and wide-range continuous features.

LiDAR

Investigating the Impact of Multi-LiDAR Placement on Object Detection for Autonomous Driving

CVPR 2022

Hanjiang Hu*, Zuxin Liu*, Sharad Chitlangia, Akhil Agnihotri, Ding Zhao

A surrogate cost function for optimizing LiDAR sensor placement for 3D object detection performance, validated in a realistic open-source autonomous vehicle simulator.

MSWC

Multilingual Spoken Words Corpus

NeurIPS 2021

Mark Mazumder, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, et al.

A speech dataset of over 340,000 spoken words in 50 languages, with over 23.7 million examples.

NeuroRL

Psychological and Neural Evidence for Reinforcement Learning

Elsevier Neural Networks

Ajay Subramanian, Sharad Chitlangia, Veeky Baths

We review findings from the neuroscience and psychology literature that provide evidence for key elements of the RL problem and how they are represented in regions of the brain.

Domain Knowledge

How to Tell Deep Neural Networks What We Know

Nature Scientific Reports

Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan

We survey techniques for incorporating domain knowledge into deep neural networks, outlining major problems and arguing how domain knowledge can help from the perspectives of explainability and ethics.

TinyML

Widening Access to Applied Machine Learning with TinyML

Harvard Data Science Review

Vijay Janapa Reddi, Brian Plancher, Susan Kennedy, ..., Sharad Chitlangia, et al.

A whitepaper on the TinyML edX Professional Certificate course and the broader tinyMLx community, which has garnered over 35,000 learners from across the world.

ActorQ

ActorQ: Quantization for Actor-Learner Distributed Reinforcement Learning

Hardware Aware Efficient Training Workshop at ICLR, 2021

Maximilian Lam, Sharad Chitlangia, Srivatsan Krishnan, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, Vijay Janapa Reddi

By running parallel actors at lower precision while keeping the learner at full precision, training can be sped up by 1.5-2.5x without hurting performance!
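A minimal sketch of the core idea, assuming a simple symmetric int8 weight quantizer (the paper's exact quantization scheme and RL stack may differ): the learner keeps full-precision parameters, and each broadcast sends actors a quantized copy for rollouts.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Uniform symmetric int8 quantization: returns quantized weights and a scale factor."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# The learner trains in full precision; actors receive a quantized (and much smaller)
# copy of the policy weights and run their rollouts at lower precision.
learner_params = {"policy_w": np.random.randn(256, 64).astype(np.float32)}  # hypothetical shapes
actor_params = {name: dequantize(*quantize_int8(w)) for name, w in learner_params.items()}
```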

QuaRL

Quantized Reinforcement Learning (QuaRL)

MLSys ReCoML Workshop, 2020

Srivatsan Krishnan, Sharad Chitlangia, Maximilian Lam, Zishen Wan, Aleksandra Faust, Vijay Janapa Reddi

Does quantization work for Reinforcement Learning? We discuss the benefits of applying quantization to RL and show speedups of up to 5x!

Bongard

Using Program Synthesis and Inductive Logic Programming to Solve Bongard Problems

10th International Workshop on Approaches and Applications of Inductive Programming

Sharad Chitlangia, Atharv Sonwane, Tirtharaj Dash, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan

We use program synthesis to generate graphical programs that represent Bongard problems, which serve as good representations for learning interpretable discriminative theories using ILP.

Experience

Kily.ai

Founding Scientist

Building agentic AI systems for e-commerce optimization: online learning, RL, and multi-agent systems.

Amazon

Applied Scientist

Led ML initiatives for traffic quality and supply quality programs, building fraud detection systems at scale.

Microsoft Research

Pushed the limits of VowpalWabbit with Flatbuffers, achieving a 30% performance increase.

Google, CERN-HSF

Particle Track Reconstruction using Machine Learning. 10,000+ particles reconstructed with 93% accuracy in under 10 seconds.

Projects

GenRL

A PyTorch reinforcement learning library centered around reproducible and generalizable algorithm implementations.

Neural Voice Cloning

Implementation of the Neural Voice Cloning with Few Samples paper by Baidu Research.

Autonomous Drone Navigation

A drone performing imitation learning on the IDSIA dataset using deep reinforcement learning.

Particle Track Reconstruction

ML-based reconstruction using Random Forests, XGBoost, Neural Networks, and Graph Neural Networks.

Spiking Neural Networks

Real-time interfacing capabilities for conductance-based neuron models in SpineCreator.

Epileptic Seizure Detection

An open source implementation of ChronoNet for seizure detection using EEG data.