Learning Networks from Wide-Sense Stationary Stochastic Processes
Date
2024-12-04
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
Complex networked systems driven by latent inputs are common in fields like neuroscience, finance, and engineering. A key inference problem here is to learn edge connectivity from node outputs (potentials). We focus on systems governed by steady-state linear conservation laws: Xₜ = L*Yₜ, where Xₜ, Yₜ ∈ Rᵖ denote inputs and potentials, respectively, and the sparsity pattern of the p × p Laplacian L* encodes the edge structure. Assuming Xₜ to be a wide-sense stationary stochastic process with a known spectral density matrix, we learn the support of L* from temporally correlated samples of Yₜ via an ℓ₁-regularized Whittle’s maximum likelihood estimator (MLE). The regularization is particularly useful for learning large-scale networks in the high-dimensional setting where the network size p significantly exceeds the number of samples n. We show that the MLE problem is strictly convex, admitting a unique solution. Under a novel mutual incoherence condition and certain sufficient conditions on (n, p, d), we show that the ML estimate recovers the sparsity pattern of L* with high probability, where d is the maximum degree of the graph underlying L*. We provide recovery guarantees for L* in element-wise maximum, Frobenius, and operator norms. Finally, we complement our theoretical results with several simulation studies on synthetic and benchmark datasets, including engineered systems (power and water networks), and real-world datasets from neural systems (such as the human brain).
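The setup in the abstract can be illustrated with a minimal sketch (not the authors' code): a small graph Laplacian plays the role of L*, the latent input Xₜ is a simple AR(1) process standing in for a generic wide-sense stationary process, potentials follow the conservation law Xₜ = L*Yₜ (via a pseudoinverse, since a Laplacian is singular), and the per-frequency periodogram of Yₜ is the quantity that would feed Whittle's likelihood. The graph, process parameters, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: 5-node cycle graph; L_star is its (singular) graph Laplacian.
p, n = 5, 200
A = np.zeros((p, p))
for i in range(p):
    A[i, (i + 1) % p] = A[(i + 1) % p, i] = 1.0
L_star = np.diag(A.sum(axis=1)) - A  # rows sum to zero

# WSS latent input: AR(1) process X_t = 0.5 X_{t-1} + W_t, a stand-in
# for any stationary process with a known spectral density.
X = np.zeros((n, p))
for t in range(1, n):
    X[t] = 0.5 * X[t - 1] + rng.standard_normal(p)

# Node potentials from the conservation law X_t = L_star Y_t; the
# Laplacian is singular, so we invert on its range via the pseudoinverse.
Y = X @ np.linalg.pinv(L_star).T

# Whittle's likelihood is a frequency-domain object: the p x p
# periodogram matrix of Y at each Fourier frequency is its input.
F = np.fft.rfft(Y, axis=0)                              # (n//2+1, p) DFT
periodogram = np.einsum('fi,fj->fij', F, F.conj()) / n  # (n//2+1, p, p)
```

The ℓ₁-regularized Whittle MLE studied in the paper would then estimate L* by maximizing the (penalized) likelihood of these periodogram matrices; the sketch stops at constructing the data the estimator consumes.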