LOCALLY CONNECTED NEURAL NETWORKS FOR IMAGE RECOGNITION

Wadekar, Shakti Nagnath (2019)

The weight-sharing property of convolutional neural networks (CNNs) reduces the number of parameters in the network and introduces a regularization effect that helps achieve high performance. Non-weight-shared convolutional networks, also known as locally connected networks (LCNs), have the potential to learn more in each layer because of their larger number of parameters, without increasing the number of inference computations relative to CNNs. This work explores where locally connected layers can be used to gain performance benefits in terms of accuracy and computation, what the challenges are in training locally connected networks, and what techniques are needed to train these networks to high performance. A partially locally connected network (P-LCN) VGG-16, a hybrid of convolutional and locally connected layers, achieves an average accuracy gain of 2.0% over the fully convolutional VGG-16 on CIFAR-100 and of 0.32% on CIFAR-10. A modified implementation of batch normalization for full LCNs (networks in which every layer is locally connected) improves training accuracy by 50% compared to using the standard CNN batch-normalization layer in a full LCN. Since L1, L2, and dropout regularization do not improve LCN accuracy, regularization methods that act on whole kernels rather than on individual weights were explored. Ladder networks with semi-supervised learning achieve this goal: modifying the ladder-network training methodology yields a ∼2% accuracy improvement on the Pavia University hyperspectral image dataset with 5 labels per class.
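To make the parameter/computation trade-off concrete, here is a minimal PyTorch sketch of a 2-D locally connected layer. The thesis does not prescribe this implementation; the class name LocallyConnected2d and all hyperparameters below are illustrative assumptions. Unlike nn.Conv2d, every output position gets its own kernel, so the parameter count grows with spatial size while the multiply-add count per forward pass matches an equivalent convolution.

```python
import torch
import torch.nn as nn


class LocallyConnected2d(nn.Module):
    """Convolution-like layer whose kernels are NOT shared across positions."""

    def __init__(self, in_channels, out_channels, in_size, kernel_size, stride=1):
        super().__init__()
        h, w = in_size
        k = kernel_size
        out_h = (h - k) // stride + 1   # no padding, for simplicity
        out_w = (w - k) // stride + 1
        self.out_size = (out_h, out_w)
        self.kernel_size, self.stride = k, stride
        # One kernel per output location: (L, out_channels, in_channels*k*k)
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_h * out_w, out_channels, in_channels * k * k)
        )
        self.bias = nn.Parameter(torch.zeros(out_h * out_w, out_channels))

    def forward(self, x):
        n = x.shape[0]
        # Sliding patches: (N, C*k*k, L) with L = out_h * out_w
        patches = nn.functional.unfold(x, self.kernel_size, stride=self.stride)
        patches = patches.transpose(1, 2)                  # (N, L, C*k*k)
        # Per-location matrix multiply; a convolution would reuse one kernel
        # for every l, which is exactly the weight sharing that LCNs drop.
        out = torch.einsum('lov,nlv->nlo', self.weight, patches) + self.bias
        return out.transpose(1, 2).reshape(n, -1, *self.out_size)


# Example on a CIFAR-sized input:
layer = LocallyConnected2d(3, 16, in_size=(32, 32), kernel_size=3)
y = layer(torch.randn(8, 3, 32, 32))
print(y.shape)  # torch.Size([8, 16, 30, 30])
```

In this configuration, nn.Conv2d(3, 16, 3) holds 16·3·3·3 + 16 = 448 parameters in total, whereas the locally connected version holds 448 per output location (30·30·448 ≈ 403k), yet both execute the same 30·30·16·27 multiply-adds per image. That is the trade-off the abstract describes: more capacity per layer at unchanged inference cost.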
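The abstract does not detail how batch normalization was modified for full LCNs. Purely as an assumption about the kind of change that respects LCN structure, the sketch below normalizes per spatial location (batch statistics for each (channel, h, w) triple) instead of per channel, since unshared kernels make activation statistics position-dependent. This is an illustrative guess, not the thesis's actual method, and PerLocationBatchNorm2d is a hypothetical name.

```python
import torch
import torch.nn as nn


class PerLocationBatchNorm2d(nn.Module):
    """Hypothetical BN variant: separate statistics and affine parameters
    for every (channel, h, w) location, matching LCNs' unshared kernels."""

    def __init__(self, num_features, height, width, eps=1e-5, momentum=0.1):
        super().__init__()
        shape = (1, num_features, height, width)
        self.eps, self.momentum = eps, momentum
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        self.register_buffer('running_mean', torch.zeros(shape))
        self.register_buffer('running_var', torch.ones(shape))

    def forward(self, x):
        if self.training:
            # Reduce over the batch dimension only (standard BN also reduces
            # over H and W, which mixes statistics across locations).
            mean = x.mean(dim=0, keepdim=True)
            var = x.var(dim=0, unbiased=False, keepdim=True)
            with torch.no_grad():
                self.running_mean.lerp_(mean, self.momentum)
                self.running_var.lerp_(var, self.momentum)
        else:
            mean, var = self.running_mean, self.running_var
        return self.gamma * (x - mean) / torch.sqrt(var + self.eps) + self.beta
```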