Letian (Zac) Chen

PhD Student at Georgia Institute of Technology | zac.letian.chen@gmail.com

Welcome to my GitHub!

ABOUT

Hi! I am Letian (Zac) Chen, a fourth-year PhD student at Georgia Tech. I received my Bachelor's degrees in psychology and computer science from Peking University in 2018 and my Master's degree in computer science from Georgia Tech in 2020. Prior to joining the CORE Robotics Lab (led by Professor Matthew Gombolay) at Georgia Tech, I worked in CVDA (led by Professor Yizhou Wang) and CDLab (led by Professor Hang Zhang) at Peking University (Beijing, China).

I love researching and implementing cool stuff that works in the real world. I am interested in all kinds of intelligence problems, spanning both artificial and human intelligence. Specifically, I study intelligence through the lens of reinforcement learning. My research enables robots to infer human intent in Learning from Demonstration settings, teasing out heterogeneity and suboptimality from the demonstrations. I envision that humans and machines share certain mechanisms of intelligence, including but not limited to reinforcement learning (the dopamine system), hierarchical learning (the hippocampus), and meta-learning (the prefrontal cortex). I hope to unravel the mystery of intelligence through my work and make robot assistance accessible to everyone!

If you want to learn more about me, please feel free to reach out using the contact info below.

EDUCATION

Doctor of Philosophy: Computer Science
Georgia Institute of Technology
GPA: 4.0/4.0

EXPECTED TO GRADUATE IN DEC 2024
IN PROGRESS

Master of Science: Computer Science
Georgia Institute of Technology
GPA: 4.0/4.0

GRADUATED IN MAY 2020

Bachelor of Science: Psychology
Peking University
GPA: 3.78/4.0

GRADUATED IN JUN 2018

Bachelor of Science: Computer Science and Technology
Peking University
GPA: 3.80/4.0

GRADUATED IN JUN 2018



RESEARCH

Fast Lifelong Personalized Learning from Crowdsourced Demonstration
Graduate Research Assistant, Advisor: Matthew Gombolay, Georgia Tech

  • Analyzed the personalization problem in lifelong learning from demonstration, where a large number of heterogeneous demonstrations arrive sequentially through federation among users.
  • Proposed a novel IRL framework, FLAIR, that provides efficient personalization and scalability by constructing policy mixtures over a concise set of prototypical strategy policies (sketched below).
  • Applied FLAIR to three virtual robotic control tasks and a real-robot table-tennis task; achieved better personalization with significantly higher sample efficiency.
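
A minimal sketch of the policy-mixture idea, assuming a discrete action space; prototype_policies and weights are illustrative names, not the paper's implementation:

    import numpy as np

    def mixture_policy(state, prototype_policies, weights):
        # Personalized policy: convex combination of prototypical strategy policies,
        # where each prototype maps a state to a probability vector over actions.
        probs = sum(w * pi(state) for w, pi in zip(weights, prototype_policies))
        return probs / probs.sum()  # renormalize against numerical drift

    def sample_action(state, prototype_policies, weights, rng=np.random.default_rng()):
        probs = mixture_policy(state, prototype_policies, weights)
        return rng.choice(len(probs), p=probs)

In this sketch, only the per-demonstrator mixture weights change from user to user, which is what makes the shared set of prototype policies reusable across a growing pool of demonstrators.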

2021

Learning from Suboptimal Demonstration via Self-Supervised Reward Regression
Graduate Research Assistant, Advisor: Matthew Gombolay, Georgia Tech

  • Characterized policy performance degradation under noise injection with a sigmoid function (sketched below).
  • Proposed a novel IRL framework, SSRR, to learn policies that are better than suboptimal demonstrations by inferring the idealized reward function (i.e., the latent intent of the demonstrator).
  • Proposed Noisy-AIRL to enhance the robustness of SSRR by providing a more reliable initial reward estimate via training with noise.
  • Applied the algorithm to three virtual robotic tasks and a real-robot table-tennis task; achieved accurate recovery of the demonstrator's intent and a better-than-best-demonstration policy.
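
A minimal sketch of the sigmoid characterization in the first bullet, assuming noise_levels and avg_returns come from rolling out noise-injected policies; the parameter names and synthetic data are illustrative, and the subsequent reward-regression step of SSRR is not shown:

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(noise, k, x0, scale, offset):
        # Average return decreases smoothly from near-demonstration level
        # to random-policy level as more action noise is injected.
        return scale / (1.0 + np.exp(k * (noise - x0))) + offset

    def fit_noise_performance(noise_levels, avg_returns):
        # Fit the noise-vs-return curve; the fitted curve can then serve as a
        # self-supervised target for reward regression.
        p0 = [10.0, 0.5, avg_returns.max() - avg_returns.min(), avg_returns.min()]
        params, _ = curve_fit(sigmoid, noise_levels, avg_returns, p0=p0, maxfev=10000)
        return params

    # Illustrative usage on synthetic data:
    noise = np.linspace(0.0, 1.0, 11)
    returns = sigmoid(noise, 8.0, 0.4, 100.0, 10.0) + np.random.default_rng(0).normal(0.0, 2.0, 11)
    k, x0, scale, offset = fit_noise_performance(noise, returns)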

2020

Paper

Robot Learning from Heterogeneous Demonstration
Master Thesis, Advisor: Matthew Gombolay, Georgia Tech

  • Dissected and categorized the heterogeneous demonstration problems.
  • Designed multiple algorithms for learning from heterogeneous strategy and heterogeneous performance demonstrations.
  • Showed that the algorithms learn better from heterogeneous-strategy demonstrations than approaches that assume homogeneity or split the data to achieve homogeneity.
  • Illustrated that the algorithms achieve better performance than the suboptimal demonstrations and previous learning-from-suboptimal-demonstration techniques.

Joint Inference of Task Reward and Strategy Reward
Graduate Research Assistant, Advisor: Matthew Gombolay, Georgia Tech

  • Modeled humans' latent objectives via a shared task reward and individual strategy rewards (sketched below).
  • Proposed a novel IRL framework, MSRD, to jointly infer task reward and strategy reward to gain a better estimation of both.
  • Applied the algorithm to two virtual robot control tasks and one real-robot table-tennis task; achieved better task-reward learning than state-of-the-art AIRL, extracted precise strategy rewards, and optimized versatile policies that resemble the heterogeneous demonstrations.
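
A minimal sketch of the reward decomposition in the first bullet; the names are illustrative, and MSRD itself infers both terms jointly within an IRL loop, which is not shown:

    def demonstrator_reward(state, action, demonstrator_id, task_reward, strategy_rewards):
        # Each demonstrator is modeled as optimizing a shared task reward
        # plus a demonstrator-specific strategy reward.
        return task_reward(state, action) + strategy_rewards[demonstrator_id](state, action)

    # Illustrative usage with toy reward functions:
    task_reward = lambda s, a: -abs(s)                                       # shared objective
    strategy_rewards = {0: lambda s, a: 0.1 * a, 1: lambda s, a: -0.1 * a}   # individual styles
    r = demonstrator_reward(0.5, 1.0, 0, task_reward, strategy_rewards)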

2019

Paper

Model-Free and Model-Based Algorithms in Human Sequential Decision Making
Undergraduate Thesis, Advisor: Hang Zhang, Peking University

  • Designed an experiment to investigate humans' learning strategies (model-free vs. model-based) in a multi-task setting.
  • Showed, via computational model comparison, that a hybrid model with a forgetting mechanism best explains subject data; confirmed the conclusion by simulation, in which the fitted hybrid model recovered subject behavior (sketched below).
  • Continuing to explore a meta-learning computational model as an explanation.
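
A minimal sketch of a hybrid value model with forgetting, in the standard form used in the model-based vs. model-free literature; the exact model compared in the thesis may differ:

    import numpy as np

    def hybrid_q(q_model_free, q_model_based, w):
        # Action values as a weighted mixture of model-based and model-free estimates.
        return w * q_model_based + (1.0 - w) * q_model_free

    def forget(q_values, q_init, decay):
        # Forgetting: value estimates decay back toward their initial values
        # (applied, e.g., to options that were not chosen on a trial).
        return q_values + decay * (q_init - q_values)

    # Illustrative usage:
    q = hybrid_q(np.array([0.2, 0.8]), np.array([0.6, 0.4]), w=0.7)
    q = forget(q, q_init=np.full(2, 0.5), decay=0.1)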

Better Exploration using Good and Bad Demos
Directed Research, Advisor: Yizhou Wang, Peking University

  • Introduced a new exploration algorithm built on Bayesian neural networks and Thompson sampling.
  • Proposed a sample-efficiency proof for the new method based on Gaussian processes and Thompson sampling.
  • Developed a tool to record human demonstrations on the OpenAI Universe platform.

2017



AWARDS

Best paper finalist at the Conference on Robot Learning (CoRL 2020)

First place in Brainhack ATL 2019 Track 2

Graduate of merit in Beijing

Excellent Graduate of Peking University

Zhang Wenjin Scholarship

Scholarship for Undergraduate Research

First Prize in the National Olympiad in Informatics in Provinces (Advanced Group)



WORK

Research Assistant
Georgia Institute of Technology

  • Assisted Professor Matthew Gombolay

SEP 2021 - NOW

Reinforcement Learning Intern
iRobot Corporation

  • Identified real-world challenges of Offline Policy Evaluation (OPE) methods.
  • Created an easy-to-use benchmark dataset in which these real-world challenges are present.
  • Proposed an ad-hoc OPE algorithm selection method via validation mechanisms.

MAY 2021 - AUG 2021

Teaching Assistant
Georgia Institute of Technology

  • Assisted CS 7648 Interactive Robot Learning

JAN 2021 - MAY 2021

Teaching Assistant
Georgia Institute of Technology

  • Assisted OMSCS 7641 Machine Learning

AUG 2020 - DEC 2020

Research Assistant
Georgia Institute of Technology

  • Assisted Professor Matthew Gombolay
  • Worked on multi-agent expected policy gradient and sampling-based policy gradient from May 2019 - Aug 2019
  • Worked on heterogeneous inverse reinforcement learning and robot table tennis from Sep 2019 - Dec 2019
  • Worked on learning from suboptimal demonstration from Dec 2019 - May 2020

MAY 2019 - MAY 2020

Teaching Assistant
Georgia Institute of Technology

  • Assisted OMSCS 7641 Machine Learning

JAN 2019 - MAY 2019

iOS Engineer
Peking University PKU Helper Team

  • Developed and maintained the iOS app “PKU Helper” for Peking University campus life (10k+ users)
  • Information and download link: PKU Helper

SEP 2015 - AUG 2018

Teaching Assistant
Peking University

  • Assisted Professor Jun Sun in Introduction to Computation
  • Designed practice sets, held office hours, and set exam papers

SEP 2016 - JAN 2017


SKILLS

Python

TensorFlow

MATLAB

C/C++

Linux

Java

iOS Development (Swift)

Data Analysis (SQL, R, SAS)

Web Front End (HTML, CSS, JavaScript)



CONTACT

Email
zac.letian.chen@gmail.com

Office
266 Ferst Dr NW, Room 1306
Atlanta, GA, 30332, United States


Created based on BLACKTIE.CO