Yisen Wang

yisen.wang AT pku.edu.cn

Assistant Professor

Peking University

Personal Site


I am a Tenure-track Assistant Professor (Ph.D. Advisor) in the Department of Machine Intelligence, School of Electronics Engineering and Computer Science (EECS), Peking University. I am also a faculty member of the ZERO Lab at Peking University, led by Prof. Zhouchen Lin.

I received my Ph.D. from the Department of Computer Science and Technology, Tsinghua University. I have been a visiting researcher at Georgia Tech, USA, hosted by Prof. Le Song and Prof. Hongyuan Zha, and at The University of Melbourne, Australia, hosted by Prof. James Bailey.

My research interests broadly include the theory and applications of machine learning and deep learning. Please see my recent publications to learn more.

We are always actively recruiting postdocs, interns, and prospective graduate students! Please feel free to send me your detailed CV and research statement!

Selected Publications

Improving Adversarial Robustness via Channel-wise Activation Suppressing. ICLR, 2021.

The study of adversarial examples and their activations have attracted significant attention for secure and robust learning with deep …

Towards A Unified Understanding and Improving of Adversarial Transferability. ICLR, 2021.

In this paper, we use the interaction inside adversarial perturbations to explain and boost the adversarial transferability. We …

Unlearnable Examples: Making Personal Data Unexploitable. ICLR, 2021.

The volume of “free” data on the internet has been key to the current success of deep learning. However, it also raises …

Adversarial Weight Perturbation Helps Robust Generalization. NeurIPS, 2020.

The study on improving the robustness of deep neural networks against adversarial examples grows rapidly in recent years. Among them, …

Normalized Loss Functions for Deep Learning with Noisy Labels. ICML, 2020.

Robust loss functions are essential for training accurate deep neural networks (DNNs) in the presence of noisy (incorrect) labels. It …

Improving Adversarial Robustness Requires Revisiting Misclassified Examples. ICLR, 2020.

Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by imperceptible perturbations. A range of defense …

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. ICLR, 2020.

Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, …

Symmetric Cross Entropy for Robust Learning with Noisy Labels. ICCV, 2019.

Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of …

On the Convergence and Robustness of Adversarial Training. ICML, 2019.

Improving the robustness of deep neural networks (DNNs) to adversarial examples is an important yet challenging problem for secure deep …

Academic Activities

Reviewer for Journals

Reviewer for Conferences


Awards

Tsinghua University Outstanding Doctoral Dissertation Award

Best Paper Award