Zhisheng Zhong

zszhong AT pku.edu.cn

Ph.D. candidate

The University of Tokyo

Personal Site

Biography

I graduated from Peking University (PKU) in 2019. From 2016, I was fortunate to be advised by Prof. Zhouchen Lin and Prof. Chao Zhang. Before studying at PKU, I obtained my bachelor’s degree from Beijing University of Posts and Telecommunications (BUPT) in 2016.

Interests

  • Computer Vision/Deep Learning
  • Low-level Vision
  • Deep Network Architecture

Education

  • PhD in Information and Communication Engineering, 2019-Now

    The University of Tokyo

  • MSc in Computer Science, 2016-2019

    Peking University

  • BSc in Telecommunication Engineering, 2012-2016

Beijing University of Posts and Telecommunications

Publications @ZERO Lab

Expectation Maximization Attention Networks for Semantic Segmentation. ICCV, 2019.

We formulate the attention mechanism into an expectation-maximization manner and iteratively estimate a much more compact set of bases …

ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition. NN, 2019.

We propose a novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker), with learnable …

R^2 Net: Recurrent and Recursive Network for Sparse-View CT Artifacts Removal. MICCAI, 2019.

We propose a novel neural network architecture to reduce streak artifacts generated in sparse-view 2D Cone Beam Computed Tomography …

Differentiable Linearized ADMM. ICML, 2019.

We propose D-LADMM, a K-layer LADMM-inspired deep neural network, and rigorously prove that there exists a set of learnable …

Convolutional Neural Networks with Alternately Updated Clique. CVPR, 2018.

We propose a new convolutional neural network architecture with alternately updated clique (CliqueNet).

Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution. NIPS, 2018.

We propose the Super-Resolution CliqueNet (SRCliqueNet) to reconstruct the high resolution (HR) image with better textural details in …