[Photo: Yosemite National Park, CA, June 2, 2024. Credit: Rebecca Sun]

Haotian Xue 薛昊天 /haʊtiˈæn; ʃweɪ/

Email: htxue.ai [at] gatech [dot] edu

Office: 401, MK Building; 270 Ferst Dr, Atlanta, GA 30332

I am currently a 2nd-year ML Ph.D. student at ML@GaTech, advised by Dr. Yongxin Chen. Previously, I obtained my B.E. in Computer Science from Shanghai Jiao Tong University with honors in 2022. I was a visiting student at MIT CSAIL, supervised by Prof. Josh Tenenbaum and working closely with Yunzhu Li and Fish Tung. Feel free to contact me if you want to discuss or collaborate!

News 📢
  • [2024.05] I started as a research intern at the Nvidia DIR Group!
  • [2024.05] We propose DP-Attacker, a framework for crafting adversarial attacks against diffusion-based policies.
  • [2024.04] We release PDM-Pure, a universal purifier for protection against diffusion models.
  • [2024.03] I received the ICLR'2024 travel award, thanks!
  • [2024.01] Our Diff-Protect is accepted by ICLR'2024!
  • [2023.10] I was awarded the NeurIPS'2023 scholar award, thanks!
  • [2023.10] I was invited as a reviewer for TPAMI.
  • [2023.10] We propose Diff-Protect, a more effective protection framework against AI mimicry.
  • [2023.09] Diff-PGD and 3D-IntPhys are accepted by NeurIPS'2023!
  • [2023.08] I was invited as a reviewer for ICLR'2024.
  • [2023.05] We propose Diff-PGD, a diffusion-based adversarial sample generation framework.
  • [2022.12] I was selected as a Top Reviewer of NeurIPS 2022.
  • [2022.10] Our Distance-Transformer is accepted to EMNLP 2022 Findings.
  • [2022.08] I will start as a Ph.D. student at ML@GT in Fall 2022.
Research Interests

My research interests span broad areas of Machine Learning, including Computer Vision and Natural Language Processing. Since 2022, I have focused on the following topics:

  • Generative AI & Robust AI: I am interested in building more robust AI systems, especially for GenAI models such as Diffusion Models and LLMs. I have investigated adversarial attacks on diffusion models [NeurIPS'23, Arxiv'24] and privacy protection against diffusion-based mimicry [ICLR'24, Arxiv'24].
  • Compositional and Explainable AI: learning compositional and explainable representations, structures, or algorithms for deep learning problems in, e.g., Computer Vision and Natural Language Processing. I have done research on explainable AI [Arxiv'19, Arxiv'20] and on learning compositional structure for vision tasks [NeurIPS'23, Arxiv'24] and NLP tasks [EMNLP'22].
Research Experience
  • [2023-2024]: Collaborated with Dr. Animesh Garg (GaTech) on DM+RL
  • [2023-2024]: Collaborated with Dr. Bin Hu (UIUC) and Dr. Alexandre Araujo (NYU) on DM+Robustness
  • [2022-Current]: Working as a GRA Ph.D. student in the FLAIR lab with Dr. Yongxin Chen
  • [2022-2023]: Worked as a visiting student at MIT, advised by Prof. Josh Tenenbaum, Yunzhu Li, and Fish Tung
  • [2021-2021]: Worked as a research intern in the NLC group, Microsoft Research, advised by Dr. Lei Cui
  • [2021-2022]: Worked as a research intern at the John Hopcroft Center, advised by Dr. Zhouhan Lin, on NLP
  • [2020-2021]: Worked as a research intern at the John Hopcroft Center, advised by Dr. Quanshi Zhang, on XAI
Reviewer Experience

I have reviewed 20+ papers for ML conferences, including NeurIPS (2022/23/24), ICLR (2024), and ICML (2022/23/24).

Publications

Topics: Vision / NLP / Robot Learning / GenAI / Explainable AI (*/†: indicates equal contribution.)

RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance
Jiaojiao Fan, Haotian Xue, Qinsheng Zhang, Yongxin Chen

[Arxiv] [Project Website]

Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies
Yipu Chen*, Haotian Xue*, Yongxin Chen

[Arxiv] [Project Website]

Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think
Haotian Xue, Yongxin Chen

[In Submission] [GitHub]

Towards More Effective Protection Against Diffusion-Based Mimicry with Score Distillation
Haotian Xue, Chumeng Liang*, Xiaoyu Wu*, Yongxin Chen

[ICLR 2024] [GitHub] [Poster]

Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability
Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen

[NeurIPS 2023] [GitHub] [Poster]

3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes
Haotian Xue, Antonio Torralba, Joshua B. Tenenbaum, Daniel LK Yamins, Yunzhu Li, Hsiao-Yu Tung

[NeurIPS 2023] [AIhub] [Poster]

[Abridged versions presented at the CVPR 2023 3DVR & Precognition workshops]

Syntax-guided Localized Self-attention by Constituency Syntactic Distance
Shengyuan Hou*, Jushi Kai*, Haotian Xue*, Bingyu Zhu, Bo Yuan, Longtao Huang, Xinbing Wang, Zhouhan Lin

[EMNLP 2022, Findings] [GitHub]

Learning to Adaptively Incorporate External Syntax through Gated Self-Attention
[TBA]

[In Submission]

A Hypothesis For The Cognitive Difficulty of Images
Xu Chen, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang

[Arxiv]

Evaluation of Attribution Explanations without Ground Truth
Hao Zhang, Haotian Xue, Jiayi Chen, Yiting Chen, Wen Shen, Quanshi Zhang

[OpenReview]

Active Adversarial Learning
Haotian Xue

Advisor: Nanyang Ye

[Bachelor Thesis]