Author: Singleton, William Steven
Title: Increasing CNN Representational Power Using Absolute Cosine Value Regularization

Abstract: The Convolutional Neural Network (CNN) is a mathematical model designed to distill input information into a more useful representation. This distillation process progressively removes information through a series of dimensionality reductions, which ultimately grant the model the ability to resist noise and generalize effectively. However, CNNs often contain elements that contribute little toward useful representations. This Thesis aims to remedy this problem by introducing Absolute Cosine Value Regularization (ACVR), a regularization technique hypothesized to increase the representational power of CNNs by using a Gradient Descent Orthogonalization algorithm to force the vectors that constitute the filters at any given convolutional layer to occupy unique positions in R^n. In theory, this method should lead to a more effective balance between information loss and representational power, ultimately increasing network performance. This Thesis proposes and examines the mathematics and intuition behind ACVR, and goes on to propose Dynamic-ACVR (D-ACVR). It also examines the effects of ACVR on the filters of a low-dimensional CNN, as well as the effects of ACVR and D-ACVR on traditional convolutional filters in VGG-19. Finally, it proposes and examines regularization of the pointwise filters in MobileNetV1.

Keywords: Absolute Cosine Value Regularization; CIFAR-10; Convolutional Neural Networks Imaging; Gradient Descent Orthogonalization; MobileNetV1; VGG-19; D-ACVR; Artificial Intelligence and Image Processing
Date: 2020-04-21
URL: https://hammer.purdue.edu/articles/thesis/Increasing_CNN_Representational_Power_Using_Absolute_Cosine_Value_Regularization/12164253
DOI: 10.25394/PGS.12164253.v1
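
The abstract describes ACVR as a penalty on the absolute cosine value between the vectors formed by a convolutional layer's filters, minimized jointly with the task loss so that gradient descent pushes the filters toward mutually orthogonal positions in R^n. The snippet below is a minimal sketch of such a penalty in PyTorch under that reading; the function name acvr_penalty, the strength coefficient, and the choice to flatten and L2-normalize conv.weight are illustrative assumptions, not the Thesis's reference implementation.

```python
# Minimal sketch of an Absolute Cosine Value Regularization (ACVR) penalty,
# assuming the filters of a Conv2d layer are treated as row vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F


def acvr_penalty(conv: nn.Conv2d, strength: float = 1e-3) -> torch.Tensor:
    """Sum of absolute pairwise cosine similarities between a layer's filters."""
    # Flatten each filter: (out_channels, in_channels, kH, kW) -> (out_channels, n)
    w = conv.weight.flatten(start_dim=1)
    # Normalize rows so their dot products are cosine values.
    w = F.normalize(w, dim=1)
    cos = w @ w.t()  # pairwise cosine similarities, shape (out_channels, out_channels)
    # Zero out the diagonal so a filter's similarity with itself is not penalized.
    off_diag = cos - torch.eye(cos.size(0), device=cos.device)
    return strength * off_diag.abs().sum()


# Usage: add the penalty to the task loss so gradient descent drives the
# filters of the regularized layer toward unique (near-orthogonal) directions.
conv = nn.Conv2d(3, 64, kernel_size=3)
loss = acvr_penalty(conv)  # in training this would be added to, e.g., cross-entropy
loss.backward()
```

In this sketch the penalty is a differentiable term added to the loss, so orthogonalization happens through ordinary gradient descent rather than an explicit projection step; how the Thesis weights or schedules the penalty (e.g., in D-ACVR) is not reproduced here.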