Authors: Rozario, Turibius; Oveissi, Parham; Goel, Ankit
Date: 2024-09-24
URI: http://hdl.handle.net/11603/36341
Conference: 23rd Annual International Conference on Association for Machine Learning and Applications (AMLA), Miami, Florida, Dec. 18-20, 2024
Abstract: This paper presents a compact, matrix-based representation of neural networks. Although neural networks are often understood pictorially as interconnected neurons, they are fundamentally mathematical nonlinear functions constructed by composing several vector-valued functions. Using basic results from linear algebra, we represent neural networks as an alternating sequence of linear maps and scalar nonlinear functions, known as activation functions. The training of neural networks involves minimizing a cost function, which typically requires the computation of a gradient. By applying basic multivariable calculus, we show that the cost gradient is also a function composed of a sequence of linear maps and nonlinear functions. In addition to the analytical gradient computation, we explore two gradient-free training methods. We compare these three training methods in terms of convergence rate and prediction accuracy, demonstrating the potential advantages of gradient-free approaches.
Extent: 8 pages
Language: en-US
Rights: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Title: Matrix-Based Representations and Gradient-Free Algorithms for Neural Network Training
Type: Text
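Note: The sketch below is not taken from the paper; it is a minimal illustration of the two ideas the abstract describes, namely a network written as an alternating sequence of affine maps and scalar activation functions, and a simple gradient-free update rule (here, accept-if-better random search, which is only one of many possible gradient-free schemes). All function names, sizes, and the toy data are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code): a feedforward network as a
# composition of affine maps and elementwise activations, trained with a
# gradient-free accept-if-better random perturbation step.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases, act=np.tanh):
    """Compose affine maps W_k a + b_k with an elementwise activation."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = act(W @ a + b)          # hidden layer: linear map then nonlinearity
    W, b = weights[-1], biases[-1]
    return W @ a + b                # linear output layer

def cost(weights, biases, X, Y):
    """Mean squared error over a batch of input vectors."""
    preds = np.stack([forward(x, weights, biases) for x in X])
    return float(np.mean((preds - Y) ** 2))

def random_search_step(weights, biases, X, Y, step=1e-2):
    """One gradient-free update: keep a random perturbation if it lowers the cost."""
    trial_w = [W + step * rng.standard_normal(W.shape) for W in weights]
    trial_b = [b + step * rng.standard_normal(b.shape) for b in biases]
    if cost(trial_w, trial_b, X, Y) < cost(weights, biases, X, Y):
        return trial_w, trial_b
    return weights, biases

# Tiny example: a 2-4-1 network fit to a toy target.
sizes = [2, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
X = [rng.standard_normal(2) for _ in range(32)]
Y = np.array([[x[0] * x[1]] for x in X])
for _ in range(200):
    weights, biases = random_search_step(weights, biases, X, Y)
```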