HiDeNN-TD: Reduced-order hierarchical deep learning neural networks

dc.contributor.author: Zhang, Lei
dc.contributor.author: Lu, Ye
dc.contributor.author: Tang, Shaoqiang
dc.contributor.author: Liu, Wing Kam
dc.date.accessioned: 2023-10-11T14:27:52Z
dc.date.available: 2023-10-11T14:27:52Z
dc.date.issued: 2022-02-01
dc.description.abstract: This paper presents a tensor decomposition (TD) based reduced-order model of the hierarchical deep-learning neural networks (HiDeNN). The proposed HiDeNN-TD method retains the advantages of both the HiDeNN and TD methods. Its automatic mesh adaptivity makes HiDeNN-TD more accurate than the finite element method (FEM) and than conventional proper generalized decomposition (PGD) and TD, while using only a fraction of the FEM degrees of freedom. This work focuses on the theoretical foundation of the method. Hence, the accuracy and convergence of the method are studied theoretically and numerically, with comparisons to FEM, PGD, TD, HiDeNN, and deep neural networks. In addition, we show theoretically that PGD/TD converges to FEM as the number of modes increases, and that the PGD/TD solution error is the sum of the mesh discretization error and the mode reduction error. The proposed HiDeNN-TD achieves high accuracy with orders of magnitude fewer degrees of freedom than FEM, and hence has a high potential for fast, accurate computations on large-scale engineering and scientific problems. As a trade-off between accuracy and efficiency, we propose a highly efficient solution strategy called HiDeNN-PGD. Although less accurate than HiDeNN-TD, HiDeNN-PGD still provides higher accuracy than PGD/TD and FEM, with only a small additional cost over PGD.
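To make the error-splitting claim in the abstract concrete, the following is a minimal sketch in standard PGD/TD separated-variable notation; the symbols u, u^h, u^{h,M}, X_m, and Y_m are illustrative assumptions, not necessarily the paper's own notation. A two-dimensional field is approximated by a sum of M separated one-dimensional modes,

  u^{h,M}(x, y) \;=\; \sum_{m=1}^{M} X_m(x)\, Y_m(y),

and, with u the exact solution and u^h the full FEM solution on the same mesh, the triangle inequality splits the total error into the two contributions named in the abstract,

  \| u - u^{h,M} \| \;\le\; \underbrace{\| u - u^{h} \|}_{\text{mesh discretization error}} \;+\; \underbrace{\| u^{h} - u^{h,M} \|}_{\text{mode reduction error}},

so that as M increases, u^{h,M} converges to u^h and only the mesh discretization error remains.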
dc.description.sponsorship: L. Zhang and S. Tang are supported by National Natural Science Foundation of China Grant Numbers 11890681, 11832001, 11521202, and 11988102. W.K. Liu and Y. Lu are supported by National Science Foundation, USA Grant Numbers CMMI-1934367 and CMMI-1762035.
dc.description.uri: https://www.sciencedirect.com/science/article/pii/S0045782521006629
dc.format.extent: 43 pages
dc.genre: journal articles
dc.genre: postprints
dc.identifier: doi:10.13016/m2hdug-kpd1
dc.identifier.citation: Zhang, Lei, Ye Lu, Shaoqiang Tang, and Wing Kam Liu. “HiDeNN-TD: Reduced-Order Hierarchical Deep Learning Neural Networks.” Computer Methods in Applied Mechanics and Engineering 389 (February 1, 2022): 114414. https://doi.org/10.1016/j.cma.2021.114414.
dc.identifier.uri: https://doi.org/10.1016/j.cma.2021.114414
dc.identifier.uri: http://hdl.handle.net/11603/30070
dc.language.iso: en_US
dc.publisher: Elsevier
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Mechanical Engineering Department Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.title: HiDeNN-TD: Reduced-order hierarchical deep learning neural networks
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-3698-5596

Files

Original bundle

Name: S0045782521006629.pdf
Size: 4.91 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission