AYI, MANEESH
RMNv2: Reduced Mobilenet V2 An Efficient Lightweight Model for Hardware Deployment

Humans can see and differentiate objects easily, but for computers this is not a trivial task. Computer vision is an interdisciplinary field that enables computers to interpret digital images and videos and to differentiate objects. With the introduction of CNNs/DNNs, computer vision is widely used in applications such as ADAS, robotics, and autonomous systems. This thesis proposes an architecture, RMNv2, that is well suited for such computer vision applications.

RMNv2 is inspired by, and is a modified version of, MobileNetV2. The modifications include disabling downsampling layers, heterogeneous kernel-based convolutions, the Mish activation function, and auto augmentation. The proposed model is trained from scratch on the CIFAR-10 dataset and achieves an accuracy of 92.4% with a total of 1.06M parameters. The resulting model size is 4.3 MB, a 52.2% reduction compared to the original MobileNetV2 implementation. Owing to its small size and competitive accuracy, the proposed model can be readily deployed on resource-constrained platforms such as mobile and embedded devices for applications like ADAS. Furthermore, the proposed model is implemented on real-time embedded devices, the NXP Bluebox 2.0 and NXP i.MX RT1060, for image classification tasks.

Keywords: convolution neural network; Deep Neural Network (DNN); embedded systems; Bluebox 2.0; Computer Engineering; Electrical and Electronic Engineering not elsewhere classified
2020-04-22
    https://hammer.purdue.edu/articles/thesis/RMNv2_Reduced_Mobilenet_V2_An_Efficient_Lightweight_Model_for_Hardware_Deployment/12156771
10.25394/PGS.12156771.v1
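
To illustrate two of the components named in the abstract, the PyTorch sketch below shows the Mish activation (x * tanh(softplus(x))) and one common way to approximate a heterogeneous kernel-based (HetConv-style) convolution by summing a grouped 3x3 convolution with a pointwise 1x1 convolution. The class names, the fraction parameter p, and the grouped-plus-pointwise approximation are illustrative assumptions, not the thesis's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Mish(nn.Module):
    """Mish activation: f(x) = x * tanh(softplus(x))."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))


class HetConv2d(nn.Module):
    """Illustrative heterogeneous-kernel convolution (assumed form):
    a grouped 3x3 convolution covering 1/p of the input channels per filter,
    summed with a 1x1 pointwise convolution over all channels."""
    def __init__(self, in_channels, out_channels, p=4, stride=1):
        super().__init__()
        assert in_channels % p == 0, "in_channels must be divisible by p"
        self.conv3x3 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                 stride=stride, padding=1, groups=p, bias=False)
        self.conv1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                 stride=stride, bias=False)

    def forward(self, x):
        return self.conv3x3(x) + self.conv1x1(x)


if __name__ == "__main__":
    # Smoke test on a 32x32 feature map (CIFAR-10 spatial resolution).
    block = nn.Sequential(HetConv2d(16, 32, p=4), nn.BatchNorm2d(32), Mish())
    x = torch.randn(1, 16, 32, 32)
    print(block(x).shape)  # torch.Size([1, 32, 32, 32])

Such a block keeps the spatial resolution while mixing 3x3 and 1x1 kernels, which is how heterogeneous kernels reduce parameters and FLOPs relative to a standard 3x3 convolution.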