Letian (Zac) Chen

PhD Candidate at Georgia Institute of Technology

ABOUT

Hi! I am Letian (Zac) Chen, a final-year PhD candidate at Georgia Tech. I received my Bachelor's degrees in computer science and psychology from Peking University in 2018 and my Master's degree in computer science from Georgia Tech in 2020. Prior to joining the CORE Robotics lab (led by Professor Matthew Gombolay) at Georgia Tech, I was a member of CVDA (led by Professor Yizhou Wang) and CDLab (led by Professor Hang Zhang) at Peking University (Beijing, China).

I love researching and implementing cool stuff that works in the real world. I am interested in all kinds of intelligence problems, spanning both artificial and human intelligence. Specifically, I look into the intelligence problem through the lens of reinforcement learning. My research enables robots to infer humans' intent in Learning from Demonstration settings, teasing out heterogeneity and suboptimality from the demonstrations. I envision that humans and machines share certain sources of intelligence, including but not limited to reinforcement learning (the dopamine system), hierarchical learning (the hippocampus), and meta-learning (the prefrontal cortex). I wish to unravel the mystery of intelligence through my work and make robot assistance accessible to everyone!

If you want to learn more about me, please feel free to contact me.

EDUCATION

Doctor of Philosophy: Computer Science
Georgia Institute of Technology
GPA: 4.0/4.0

EXPECTED TO GRADUATE IN DEC 2024
IN PROGRESS

Master of Science: Computer Science
Georgia Institute of Technology
GPA: 4.0/4.0

GRADUATED IN MAY 2020

Bachelor of Science: Computer Science and Technology
Peking University
GPA: 3.80/4.0

GRADUATED IN JUN 2018

Bachelor of Science: Psychology
Peking University
GPA: 3.78/4.0

GRADUATED IN JUN 2018



WORK

Research Intern
Waymo

  • Designed input-output representations for fine-tuning large Vision-Language Models (VLMs) for the vehicle planning task.
  • Proposed a novel Reinforcement Learning algorithm that fine-tunes VLMs towards planning metrics, replacing the usual token-matching objectives in LM training.
  • Developed training and evaluation pipeline infrastructure for the large VLM; experiments show significant improvements in target behavior metrics via the proposed method.

MAY 2024 - DEC 2024

Research Intern
Toyota Research Institute

  • Implemented DIAYN to generate diverse driving policies for autonomous racing.
  • Proposed a novel algorithm, Learn Thy Enemy, to model and leverage opponent information in multi-car racing.
  • Deployed DIAYN and LTE policies on motion-simulator hardware and demonstrated strong qualitative and quantitative performance.

MAY 2023 - AUG 2023

Research Assistant
Georgia Institute of Technology

  • Assisted Professor Matthew Gombolay

SEP 2021 - NOW

Reinforcement Learning Intern
iRobot Corporation

  • Identified real-world challenges of Offline Policy Evaluation (OPE) methods.
  • Created an easy-to-use benchmark dataset in which these real-world challenges are present.
  • Proposed an ad-hoc OPE algorithm selection method via validation mechanisms.

MAY 2021 - AUG 2021

Teaching Assistant
Georgia Institute of Technology

  • Assisted CS 7648 Interactive Robot Learning

JAN 2021 - MAY 2021

Teaching Assistant
Georgia Institute of Technology

  • Assisted OMSCS 7641 Machine Learning

AUG 2020 - DEC 2020

Research Assistant
Georgia Institute of Technology

  • Assisted Professor Matthew Gombolay

MAY 2019 - MAY 2020

Teaching Assistant
Georgia Institute of Technology

  • Assisted OMSCS 7641 Machine Learning

JAN 2019 - MAY 2019

iOS Engineer
Peking University PKU Helper Team

  • Developed and maintained the iOS app “PKU Helper” for Peking University campus life (10k+ users)
  • Information and download link: PKU Helper

SEP 2015 - AUG 2018

Teaching Assistant
Peking University

  • Assisted Professor Jun Sun in Introduction to Computation

SEP 2016 - JAN 2017



RESEARCH

Safe Learning from Demonstration

  • Created a new modality for users to specify safe vs. unsafe states for robots via demonstrations.
  • Proposed a novel shielding algorithm, SECURE, that can be applied to policies to enforce customized, user-defined safety bounds via a combination of a data-driven control-barrier function and task-aware safe action search.
  • Tested SECURE on two simulated robotic control tasks and a real robot kitchen cutting task in which the robot is equipped with a knife; showed SECURE successfully prevents all unsafe executions, such as a human hand entering the robot's cutting space.

2022-2023

Paper

Learning Interpretable Tree-based Control Policies for Autonomous Driving

  • Developed interpretable, tree-based continuous-control models that allow gradient updates.
  • Demonstrated the strong qualitative and quantitative performance of the proposed model in comparison with black-box neural networks in 10+ driving scenarios.
  • Verified interpretability with user studies showing the proposed model is easier and faster to interpret than neural networks and other interpretable models.

2022-2023

Paper

Learning from Offline Heterogeneous Demonstrations

  • Analyzed real Mars rover driving data and identified heterogeneity among rover drivers.
  • Proposed a novel IRL framework, DROID, to accommodate the offline learning required by the application while enabling learning from heterogeneous demonstrations via dual reward and policy distillation.
  • Applied DROID to two simulated robotic control tasks and the real Mars rover path-planning problem; achieved better learning and generalization to unseen conditions in all three domains.

2022-2023

Paper

Fast Lifelong Adaptive Learning from Demonstrations

  • Analyzed the personalization problem in the lifelong learning from demonstration process, in which a large number of heterogeneous demonstrations arrive sequentially from a federation of users.
  • Proposed a novel IRL framework, FLAIR, to provide efficient personalization and scalability by constructing policy mixtures from a concise set of prototypical strategy policies.
  • Applied FLAIR to three virtual robotic control tasks and a real robot table-tennis task; achieved better personalization with significantly higher sample efficiency.

2021

Paper

Learning from Suboptimal Demonstration via Self-Supervised Reward Regression

  • Characterized policy performance degradation from noise injection with a sigmoid function.
  • Proposed a novel IRL framework, SSRR, to learn policies that outperform suboptimal demonstrations by inferring the idealized reward function (i.e., the latent intent of the demonstrator).
  • Applied the algorithm to three virtual robotic tasks and a real robot table-tennis task; achieved accurate recovery of the demonstrator's intention and a better-than-best-demonstration policy.

2020

Paper

Learning from Heterogeneous Demonstrations

  • Modeled humans' latent objective via shared task reward and individual strategy reward.
  • Proposed a novel IRL framework, MSRD, to jointly infer task reward and strategy reward to gain a better estimation of both.
  • Applied the algorithm to two virtual robot control tasks and one real robot table-tennis task; achieved better task-reward learning than SOTA AIRL, extracted precise strategic rewards, and optimized versatile policies that resemble the heterogeneous demonstrations.


AWARDS

Amazon Science Scholarship for AAAI 2022

Best Paper Finalist at the Conference on Robot Learning (CoRL 2020)

First place in Brainhack ATL 2019 Track 2

Graduate of merit in Beijing

Excellent Graduate of Peking University

Zhang Wenjin Scholarship

Scholarship for Undergraduate Research

First Prize in the National Olympiad in Informatics in Provinces (Advanced Group)



SKILLS

Python


TensorFlow


MATLAB


C/C++


Linux


Java


iOS Development (Swift)


Data Analysis (SQL, R, SAS)


Web Front End (HTML, CSS, JavaScript)



CONTACT

Email
zac.letian.chen@gmail.com

Office
266 Ferst Dr NW, Room 1306
Atlanta, GA, 30332, United States

SOCIAL LINKS

Created based on BLACKTIE.CO