Matrix-Based Representations and Gradient-Free Algorithms for Neural Network Training
Rights
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Abstract
This paper presents a compact, matrix-based representation of neural networks. Although neural networks are often understood pictorially as interconnected neurons, they are fundamentally nonlinear mathematical functions constructed by composing several vector-valued functions. Using basic results from linear algebra, we represent neural networks as an alternating sequence of linear maps and scalar nonlinear functions, known as activation functions. Training a neural network involves minimizing a cost function, which typically requires computing its gradient. By applying basic multivariable calculus, we show that the cost gradient is likewise a composition of linear maps and nonlinear functions. In addition to the analytical gradient computation, we explore two gradient-free training methods. We compare these three training methods in terms of convergence rate and prediction accuracy, demonstrating the potential advantages of gradient-free approaches.
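As a minimal sketch of the matrix-based view described above (the layer sizes, tanh activation, mean-squared-error cost, and the simple accept-if-better random-search update below are illustrative assumptions, not details taken from the paper), a forward pass can be written as an alternating sequence of matrix-vector products and elementwise nonlinearities, and a generic gradient-free step can perturb the weight matrices directly and keep the perturbation only if the cost decreases.

```python
import numpy as np

def forward(x, weights, biases, act=np.tanh):
    """Forward pass: alternate linear maps (W @ a + b) with a scalar activation."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = act(W @ a + b)                    # hidden layer: linear map then nonlinearity
    return weights[-1] @ a + biases[-1]       # final linear map, no activation

def cost(weights, biases, X, Y):
    """Mean squared error over a batch of inputs X and targets Y (assumed cost)."""
    preds = np.stack([forward(x, weights, biases) for x in X])
    return np.mean((preds - Y) ** 2)

def random_search_step(weights, biases, X, Y, scale=0.01, rng=None):
    """One gradient-free step: accept a random perturbation only if it lowers the cost."""
    rng = np.random.default_rng() if rng is None else rng
    cand_w = [W + scale * rng.standard_normal(W.shape) for W in weights]
    cand_b = [b + scale * rng.standard_normal(b.shape) for b in biases]
    if cost(cand_w, cand_b, X, Y) < cost(weights, biases, X, Y):
        return cand_w, cand_b
    return weights, biases

# Illustrative 2-4-1 network on a toy regression problem (sizes are hypothetical).
rng = np.random.default_rng(0)
sizes = [2, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
X = rng.standard_normal((32, 2))
Y = np.sin(X.sum(axis=1, keepdims=True))
for _ in range(200):
    weights, biases = random_search_step(weights, biases, X, Y, rng=rng)
```

This sketch is only meant to make the composition-of-maps structure concrete; the paper's specific gradient-free algorithms and the analytical gradient derivation are developed in the full text.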