[2023.09] Diff-PGD and 3D-IntPhys are accepted by NeurIPS'2023!
[2023.08] I was invited as a reviewer for ICLR'2024.
[2023.05] We propose Diff-PGD, a diffusion-based adversarial sample generation framework.
[2022.12] I am selected as the Top Reviewer of NeurIPS 2022.
[2022.10] Our Distance-Transformer is accepted to EMNLP 2022 Findings.
[2022.08] I will start as a PhD student at ML@GT in Fall 2022.
My research interests lie in broad aspects of Machine Learning, Computer Vision, and Natural Language Processing. Currently, I focus on the following directions:
- Generative Models + X: utilizing/learning strong prior knowledge with generative models (e.g., GANs, diffusion models) to empower AI problems, including robust AI, robot learning, and inverse problems
- Compositional and Explainable AI: learning compositional and explainable representations or network structures for deep learning models, e.g., in Computer Vision and Natural Language Processing
- [2023-current]: Collaborating with Prof. Animesh Garg (GaTech) on DM+RL
- [2023-current]: Collaborating with Prof. Bin Hu (UIUC) on DM+Robustness
- [2022-current]: Working as a Ph.D. student (GRA) at the FLAIR lab with Prof. Yongxin Chen
- [2022-2023]: Worked as a remote intern at MIT CSAIL, advised by Josh Tenenbaum, Yunzhu Li, and Fish Tung
- [2021]: Worked as a research intern at the NLC group, Microsoft Research
- [2021-2022]: Worked as a research intern at the John Hopcroft Center, advised by Prof. Zhouhan Lin, on NLP
- [2020-2021]: Worked as a research intern at the John Hopcroft Center, advised by Prof. Quanshi Zhang, on XAI
Reviewer: ICML'22 (2), NeurIPS'22 (4), ICML'23 (2), NeurIPS'23 (6), ICLR'24 (?)