Authors: Hurwitz, John; Nicholas, Charles; Raff, Edward
Dates: 2024-12-11; 2024-12-11; 2024-10-20
DOI: https://doi.org/10.48550/arXiv.2410.15280
Handle: http://hdl.handle.net/11603/37077
Conference: 38th Conference on Neural Information Processing Systems (NeurIPS 2024), Machine Learning and Compression Workshop, Dec 10-15, 2024
Abstract: It is generally well understood that predictive classification and compression are intrinsically related concepts in information theory. Indeed, many deep learning methods are explained as learning a kind of compression, and that better compression leads to better performance. We interrogate this hypothesis via the Normalized Compression Distance (NCD), which explicitly relies on compression as the means of measuring similarity between sequences and thus enables nearest-neighbor classification. By turning popular large language models (LLMs) into lossless compressors, we develop a Neural NCD and compare LLMs to classic general-purpose algorithms like gzip. In doing so, we find that classification accuracy is not predictable by compression rate alone, among other empirical aberrations not predicted by current understanding. Our results imply that our intuition on what it means for a neural network to "compress" and what is needed for effective classification are not yet well understood.
Extent: 10 pages
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Subjects: Computer Science - Machine Learning; UMBC Discovery, Research, and Experimental Analysis of Malware Lab (DREAM Lab); UMBC Interactive Robotics and Language Lab (IRAL Lab); Statistics - Machine Learning
Title: Neural Normalized Compression Distance and the Disconnect Between Compression and Classification
Type: Text
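To make the abstract's setup concrete, below is a minimal sketch of compression-based nearest-neighbor classification using the standard NCD definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with gzip as the compressor C. This is not the paper's Neural NCD implementation; the paper instead obtains C from LLMs used as lossless compressors. The function names and the toy labeled corpus here are illustrative assumptions.

```python
import gzip

def c(data: bytes) -> int:
    """Compressed length in bytes under gzip (a stand-in for any compressor C)."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify_1nn(query: str, labeled: list[tuple[str, str]]) -> str:
    """1-nearest-neighbor classification: return the label of the training
    example whose NCD to the query is smallest."""
    q = query.encode("utf-8")
    nearest_text, nearest_label = min(
        labeled, key=lambda pair: ncd(q, pair[0].encode("utf-8"))
    )
    return nearest_label

if __name__ == "__main__":
    # Toy labeled corpus (illustrative only, not from the paper).
    train = [
        ("the cat sat on the mat and purred", "animals"),
        ("dogs bark loudly at the postman", "animals"),
        ("the stock market rallied after the earnings report", "finance"),
        ("bond yields fell as investors sought safety", "finance"),
    ]
    print(classify_1nn("the kitten purred on the warm mat", train))
```

Swapping gzip's compressed length for the code length assigned by a neural model (e.g., an LLM driving a lossless entropy coder) gives a "Neural NCD" in the spirit of the abstract; the paper's finding is that better compression rates under such compressors do not by themselves predict better classification accuracy.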