Deep Learning Theory

Maximum-and-Concatenation Networks

We propose a novel multi-layer DNN structure termed MCN, which can approximate a certain class of continuous functions arbitrarily well even with highly sparse connections. We prove that the global minima of an $l$-layer MCN can be outperformed, or at least attained, simply by increasing the network depth. More importantly, MCN can easily be appended to many existing DNNs, and the augmented DNN shares the same properties as MCN.
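To make the architectural idea concrete, here is a minimal sketch of what a maximum-and-concatenation layer and the "append MCN to an existing DNN" construction might look like. It assumes each layer takes an elementwise maximum of two affine maps of its input and concatenates the result with that input; the class name MCNLayer, the specific affine form, and the dimensions are illustrative assumptions, not the paper's exact layer definition.

```python
import torch
import torch.nn as nn

class MCNLayer(nn.Module):
    """Illustrative maximum-and-concatenation layer (assumed form).

    Takes the elementwise maximum of two affine maps of the input,
    then concatenates that maximum with the input itself, so the
    layer adds depth while preserving the features it was given.
    """
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.linear_a = nn.Linear(in_dim, hidden_dim)
        self.linear_b = nn.Linear(in_dim, hidden_dim)
        # Output width grows by in_dim because of the concatenation.
        self.out_dim = hidden_dim + in_dim

    def forward(self, x):
        m = torch.maximum(self.linear_a(x), self.linear_b(x))  # max of two affine maps
        return torch.cat([m, x], dim=-1)                       # concatenate with the input

# Appending an MCN layer to an existing backbone (the "augmented DNN" idea):
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # any pre-existing DNN
mcn_head = MCNLayer(in_dim=64, hidden_dim=16)            # MCN appended on top
x = torch.randn(8, 32)
print(mcn_head(backbone(x)).shape)  # torch.Size([8, 80])
```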