Yisen Wang

yisen.wang AT pku.edu.cn

Assistant Professor

Peking University

Biography

I am a Tenure-track Assistant Professor (Ph.D. Advisor) in the Department of Machine Intelligence, School of Electronics Engineering and Computer Science (EECS), Peking University. I am also a faculty member of the ZERO Lab at Peking University, led by Prof. Zhouchen Lin.

I received my Ph.D. from the Department of Computer Science and Technology, Tsinghua University. I have been a visiting scholar at Georgia Tech, USA, hosted by Prof. Le Song and Prof. Hongyuan Zha, and at The University of Melbourne, Australia, hosted by Prof. James Bailey.

My research interests broadly include the theory and applications of machine learning and deep learning. Please see my recent publications to learn more.

We are always actively recruiting postdocs, interns, and prospective graduate students! Please feel free to send me your detailed CV and research statement!

Selected Publications

Optimization-induced Implicit Graph Diffusion. ICML, 2022.

Due to the over-smoothing issue, most existing graph neural networks can only capture limited dependencies with their inherently …

Training Much Deeper Spiking Neural Networks with a Small Number of Time-Steps. Neural Networks, 2022.

Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The …

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation. CVPR, 2022.

Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware. However, it is a …

Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State. NeurIPS, 2021.

Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, …

Reparameterized Sampling for Generative Adversarial Networks. ECML-PKDD, 2021.

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs). …

Demystifying Adversarial Training via A Unified Probabilistic Framework. ICML Workshop, 2021.

Adversarial Training (AT) is known as an effective approach to enhance the robustness of deep neural networks. Recently, researchers …

Dissecting the Diffusion Process in Linear Graph Convolutional Networks. NeurIPS, 2021.

Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years. A typical GCN layer consists of a linear …

Efficient Equivariant Network. NeurIPS, 2021.

Convolutional neural networks (CNNs) have dominated the field of Computer Vision and achieved great success due to their built-in …

Gauge Equivariant Transformer. NeurIPS, 2021.

The attention mechanism has shown strong performance and efficiency in many deep learning models, in which relative position encoding …

Residual Relaxation for Multi-view Representation Learning. NeurIPS, 2021.

Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the …

Improving Adversarial Robustness via Channel-wise Activation Suppressing. ICLR, 2021.

The study of adversarial examples and their activations has attracted significant attention for secure and robust learning with deep …

Towards A Unified Understanding and Improving of Adversarial Transferability. ICLR, 2021.

In this paper, we use the interaction inside adversarial perturbations to explain and boost the adversarial transferability. We …

Unlearnable Examples: Making Personal Data Unexploitable. ICLR, 2021.

The volume of “free” data on the internet has been key to the current success of deep learning. However, it also raises …

Adversarial Weight Perturbation Helps Robust Generalization. NeurIPS, 2020.

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years. Among them, …

Normalized Loss Functions for Deep Learning with Noisy Labels. ICML, 2020.

Robust loss functions are essential for training accurate deep neural networks (DNNs) in the presence of noisy (incorrect) labels. It …

Improving Adversarial Robustness Requires Revisiting Misclassified Examples. ICLR, 2020.

Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by imperceptible perturbations. A range of defense …

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. ICLR, 2020.

Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, …

Symmetric Cross Entropy for Robust Learning with Noisy Labels. ICCV, 2019.

Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of …

On the Convergence and Robustness of Adversarial Training. ICML, 2019.

Improving the robustness of deep neural networks (DNNs) to adversarial examples is an important yet challenging problem for secure deep …

Awards

Tsinghua University Outstanding Doctoral Dissertation Award

Best Paper Award