Throughput studies on an InfiniBand interconnect via all-to-all communications

dc.contributor.authorMistry, Nil
dc.contributor.authorRamsey, Jordan
dc.contributor.authorWiley, Benjamin
dc.contributor.authorYanchuck, Jackie
dc.contributor.authorHuang, Xuan
dc.contributor.authorGobbert, Matthias K.
dc.date.accessioned2018-09-20T19:41:07Z
dc.date.available2018-09-20T19:41:07Z
dc.date.issued2015-04-12
dc.description.abstractDistributed-memory clusters are the most important type of parallel computer today, and they dominate the TOP500 list. The InfiniBand interconnect is the most popular network for distributed-memory compute clusters. Contention of communications across a switched network that connects multiple compute nodes in a distributed-memory cluster may seriously degrade performance of parallel code. This contention is maximized when communicating large blocks of data among all parallel processes simultaneously. This communication pattern arises in many important algorithms such as parallel sorting. The cluster tara in the UMBC High Performance Computing Facility (HPCF) with a quad-data rate InfiniBand interconnect provides an opportunity to test if the capacity of a switched network can become a limiting factor in algorithmic performance. We find that we can design a test case of a problem involving increasing usage of memory that does not scale any more on the InfiniBand interconnect, thus becoming a limiting factor for parallel scalability. However, for the case of stable memory usage of the problem, the InfiniBand communications get faster and will not inhibit parallel scalability. The tests in this paper are designed to involve only basic MPI commands for wide reproducibility, and the paper provides the detailed motivation of the design of the memory usage needed for the tests.en_US
dc.description.sponsorshipThese results were obtained as part of the REU Site: Interdisciplinary Program in High Performance Computing (www.umbc.edu/hpcreu) in the Department of Mathematics and Statistics at the University of Maryland, Baltimore County (UMBC) in Summer 2013, where they were originally reported in the tech. rep. [7]. This program is funded jointly by the National Science Foundation and the National Security Agency (NSF grant no. DMS-1156976), with additional support from UMBC, the Department of Mathematics and Statistics, the Center for Interdisciplinary Research and Consulting (CIRC), and the UMBC High Performance Computing Facility (HPCF). HPCF (www.umbc.edu/hpcf) is supported by the National Science Foundation through the MRI program (grant nos. CNS-0821258 and CNS-1228778) and the SCREMS program (grant no. DMS-0821311), with additional substantial support from UMBC. Coauthor Jordan Ramsey was supported, in part, by the UMBC National Security Agency (NSA) Scholars Program through a contract with the NSA. Graduate RA Xuan Huang was supported by UMBC as an HPCF RA.en_US
dc.description.urihttps://dl.acm.org/citation.cfm?id=2872611en_US
dc.format.extent7 pagesen_US
dc.genreconference paper pre-printen_US
dc.identifierdoi:10.13016/M2V698G4D
dc.identifier.citationNil Mistry, Jordan Ramsey, Benjamin Wiley, Jackie Yanchuck, Xuan Huang, Matthias K. Gobbert, Throughput studies on an InfiniBand interconnect via all-to-all communications, HPC '15 Proceedings of the Symposium on High Performance Computing, pp. 93-99.en_US
dc.identifier.isbn978-1-5108-0101-1
dc.identifier.urihttp://hdl.handle.net/11603/11339
dc.language.isoen_USen_US
dc.publisherAssociation for Computing Machinery (ACM)en_US
dc.relation.isAvailableAtThe University of Maryland, Baltimore County (UMBC)
dc.relation.ispartofUMBC Mathematics Department Collection
dc.relation.ispartofUMBC Faculty Collection
dc.relation.ispartofUMBC Student Collection
dc.rightsThis item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author.
dc.subjectInfiniBand interconnecten_US
dc.subjectAll-to-All communicationsen_US
dc.subjectnetwork contentionen_US
dc.subjectscalability studiesen_US
dc.subjectMPIen_US
dc.subjectUMBC High Performance Computing Facility (HPCF)en_US
dc.titleThroughput studies on an InfiniBand interconnect via all-to-all communicationsen_US
dc.typeTexten_US

Files

Original bundle
Name: 0d91510844a16ad4605f2775dc04499b881f.pdf
Size: 390.12 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.68 KB
Description: Item-specific license agreed upon submission