Why do autoencoders work?

dc.contributor.author: Kvalheim, Matthew D.
dc.contributor.author: Sontag, Eduardo D.
dc.date.accessioned: 2023-10-25T13:53:20Z
dc.date.available: 2023-10-25T13:53:20Z
dc.date.issued: 2024-02-17
dc.description.abstract: Deep neural network autoencoders are routinely used computationally for model reduction. They allow recognizing the intrinsic dimension of data that lie in a k-dimensional subset 𝑲 of an input Euclidean space ℝⁿ. The underlying idea is to obtain both an encoding layer that maps ℝⁿ into ℝᵏ (called the bottleneck layer or the space of latent variables) and a decoding layer that maps ℝᵏ back into ℝⁿ, in such a way that the input data from the set 𝑲 is recovered when composing the two maps. This is achieved by adjusting parameters (weights) in the network to minimize the discrepancy between the input and the reconstructed output. Since neural networks (with continuous activation functions) compute continuous maps, the existence of a network that achieves perfect reconstruction would imply that 𝑲 is homeomorphic to a k-dimensional subset of ℝᵏ, so clearly there are topological obstructions to finding such a network. On the other hand, in practice the technique is found to "work" well, which leads one to ask if there is a way to explain this effectiveness. We show that, up to small errors, indeed the method is guaranteed to work. This is done by appealing to certain facts from differential geometry. A computational example is also included to illustrate the ideas.
dc.description.uri: https://arxiv.org/abs/2310.02250
dc.format.extent: 24 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2cfyi-vlck
dc.identifier.uri: https://doi.org/10.48550/arXiv.2310.02250
dc.identifier.uri: http://hdl.handle.net/11603/30367
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Mathematics Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: Why do autoencoders work?
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-2662-6760
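
The abstract above describes the basic autoencoder setup: an encoder ℝⁿ → ℝᵏ composed with a decoder ℝᵏ → ℝⁿ, with weights adjusted to minimize the reconstruction discrepancy on data from 𝑲. The sketch below is a minimal illustration of that setup, assuming a PyTorch environment; the dimensions, network widths, and the circle data set are hypothetical choices for illustration and are not taken from the paper's computational example.

```python
import math
import torch
import torch.nn as nn

# Hypothetical dimensions (not from the paper): data lie in R^n, intrinsic dimension k.
n, k = 50, 1

# Encoder R^n -> R^k (bottleneck / latent variables) and decoder R^k -> R^n.
encoder = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, k))
decoder = nn.Sequential(nn.Linear(k, 32), nn.Tanh(), nn.Linear(32, n))

# Toy data set K: a circle embedded linearly in R^n. A circle is 1-dimensional but not
# homeomorphic to any subset of R^1, so perfect reconstruction through a k=1 bottleneck
# is topologically impossible; training can still drive the error small on most of K,
# which is the phenomenon the paper explains.
theta = torch.rand(512, 1) * 2 * math.pi
circle = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)  # points in R^2
embed = torch.randn(2, n)
data = circle @ embed  # points of K inside R^n

# Adjust the weights to minimize ||x - decoder(encoder(x))||^2 over the data from K.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(2000):
    opt.zero_grad()
    recon = decoder(encoder(data))
    loss = loss_fn(recon, data)
    loss.backward()
    opt.step()

print(f"final mean reconstruction error: {loss.item():.4f}")
```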

Files

Original bundle
Name: 2310.02250v3.pdf
Size: 1.09 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed to upon submission