Multi-modal data fusion using source separation: Application to medical imaging
Author/Creator
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
Author/Creator ORCID
Date
2015-08-17
Type of Work
Department
Program
Citation of Original Publication
Tülay Adali, Yuri Levin-Schwartz, Vince D. Calhoun, Multimodal Data Fusion Using Source Separation: Two Effective Models Based on ICA and IVA and Their Properties, Proceedings of the IEEE, Volume 103, Issue 9, Sept. 2015, DOI: 10.1109/JPROC.2015.2461601
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
© 2015 IEEE
Abstract
The joint ICA (jICA) and the transposed IVA (tIVA) models are two effective solutions based on blind source separation that enable fusion of data from multiple modalities in a symmetric and fully multivariate manner. In [1], their properties and the major issues in their implementation are discussed in detail. In this accompanying paper, we consider the application of these two models to the fusion of multimodal medical imaging data: functional magnetic resonance imaging (fMRI), structural MRI (sMRI), and electroencephalography (EEG) data collected from a group of healthy controls and patients with schizophrenia performing an auditory oddball task. We show how both models can be used to identify a set of components that report on differences between the two groups, jointly, across all the modalities used in the study. We discuss the importance of algorithm and order selection, as well as the trade-offs involved in choosing one model over the other. We note that for the selected dataset, especially given the limited number of subjects available for the study, jICA provides the more desirable solution; however, an ICA algorithm that uses flexible density matching offers advantages over the most widely used algorithm, Infomax, for this problem.
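
For readers unfamiliar with the jICA model summarized above, the sketch below illustrates the core idea on synthetic placeholder data: features from each modality are normalized, concatenated along the feature dimension, and decomposed with a single ICA so that all modalities share one subject-wise mixing matrix, whose columns can then be tested for group differences. This is only a rough illustration under assumed dimensions and random data; scikit-learn's FastICA stands in for the Infomax and flexible-density-matching ICA algorithms discussed in the paper, and none of the variable names or preprocessing choices come from the authors' pipeline.

    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.decomposition import FastICA

    # Hypothetical dimensions: N subjects, an fMRI feature map with V1 voxels
    # and an EEG feature (e.g., an ERP time course) with V2 samples per subject.
    rng = np.random.default_rng(0)
    N, V1, V2, n_components = 40, 500, 300, 8
    X_fmri = rng.standard_normal((N, V1))     # placeholder fMRI features (subjects x voxels)
    X_eeg = rng.standard_normal((N, V2))      # placeholder EEG features (subjects x time points)
    labels = np.array([0] * 20 + [1] * 20)    # 0 = healthy control, 1 = patient (hypothetical)

    # jICA idea: scale each modality comparably, concatenate along the feature
    # axis, and fit a single ICA so both modalities share one mixing matrix.
    X_joint = np.hstack([X_fmri / np.linalg.norm(X_fmri),
                         X_eeg / np.linalg.norm(X_eeg)])

    # Independence is imposed across the concatenated feature dimension, so the
    # transposed matrix (features x subjects) is passed to FastICA.
    ica = FastICA(n_components=n_components, random_state=0)
    maps = ica.fit_transform(X_joint.T)       # (V1 + V2, n_components): joint component maps
    loadings = ica.mixing_                    # (N, n_components): shared subject loadings
    fmri_maps, eeg_maps = maps[:V1, :], maps[V1:, :]

    # Test each component's subject loadings for a control-vs-patient difference.
    for k in range(n_components):
        t, p = ttest_ind(loadings[labels == 0, k], loadings[labels == 1, k])
        print(f"component {k}: t = {t:+.2f}, p = {p:.3f}")

In the actual study, the fMRI and sMRI parts of each joint component would be spatial maps and the EEG part a time course, and the number of components would be chosen through order selection rather than fixed by hand, as the abstract notes.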