Throughput studies on an InfiniBand interconnect via all-to-all communications
dc.contributor.author | Mistry, Nil | |
dc.contributor.author | Ramsey, Jordan | |
dc.contributor.author | Wiley, Benjamin | |
dc.contributor.author | Yanchuck, Jackie | |
dc.contributor.author | Huang, Xuan | |
dc.contributor.author | Gobbert, Matthias K. | |
dc.date.accessioned | 2018-09-20T19:41:07Z | |
dc.date.available | 2018-09-20T19:41:07Z | |
dc.date.issued | 2015-04-12 | |
dc.description.abstract | Distributed-memory clusters are the most important type of parallel computer today, and they dominate the TOP500 list. The InfiniBand interconnect is the most popular network for distributed-memory compute clusters. Contention of communications across a switched network that connects multiple compute nodes in a distributed-memory cluster may seriously degrade the performance of parallel code. This contention is maximized when large blocks of data are communicated among all parallel processes simultaneously, a communication pattern that arises in many important algorithms such as parallel sorting. The cluster tara in the UMBC High Performance Computing Facility (HPCF), with a quad-data rate InfiniBand interconnect, provides an opportunity to test whether the capacity of a switched network can become a limiting factor in algorithmic performance. We find that we can design a test case with increasing memory usage that no longer scales on the InfiniBand interconnect, so that the network becomes a limiting factor for parallel scalability. However, for the case of stable memory usage of the problem, the InfiniBand communications become faster and do not inhibit parallel scalability. The tests in this paper are designed to involve only basic MPI commands for wide reproducibility, and the paper motivates in detail the design of the memory usage needed for the tests. | en_US |
dc.description.sponsorship | These results were obtained as part of the REU Site: Interdisciplinary Program in High Performance Computing (www.umbc.edu/hpcreu) in the Department of Mathematics and Statistics at the University of Maryland, Baltimore County (UMBC) in Summer 2013, where they were originally reported in the tech. rep. [7]. This program is funded jointly by the National Science Foundation and the National Security Agency (NSF grant no. DMS-1156976), with additional support from UMBC, the Department of Mathematics and Statistics, the Center for Interdisciplinary Research and Consulting (CIRC), and the UMBC High Performance Computing Facility (HPCF). HPCF (www.umbc.edu/hpcf) is supported by the National Science Foundation through the MRI program (grant nos. CNS-0821258 and CNS-1228778) and the SCREMS program (grant no. DMS-0821311), with additional substantial support from UMBC. Coauthor Jordan Ramsey was supported, in part, by the UMBC National Security Agency (NSA) Scholars Program through a contract with the NSA. Graduate RA Xuan Huang was supported by UMBC as an HPCF RA. | en_US |
dc.description.uri | https://dl.acm.org/citation.cfm?id=2872611 | en_US |
dc.format.extent | 7 pages | en_US |
dc.genre | conference paper pre-print | en_US |
dc.identifier | doi:10.13016/M2V698G4D | |
dc.identifier.citation | Nil Mistry, Jordan Ramsey, Benjamin Wiley, Jackie Yanchuck, Xuan Huang, Matthias K. Gobbert, Throughput studies on an InfiniBand interconnect via all-to-all communications, HPC '15: Proceedings of the Symposium on High Performance Computing, pages 93-99, 2015. | en_US |
dc.identifier.isbn | 978-1-5108-0101-1 | |
dc.identifier.uri | http://hdl.handle.net/11603/11339 | |
dc.language.iso | en_US | en_US |
dc.publisher | Association for Computing Machinery (ACM) | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Mathematics Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.relation.ispartof | UMBC Student Collection | |
dc.rights | This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact the author. | |
dc.subject | InfiniBand interconnect | en_US |
dc.subject | All-to-All communications | en_US |
dc.subject | network contention | en_US |
dc.subject | scalability studies | en_US |
dc.subject | MPI | en_US |
dc.subject | UMBC High Performance Computing Facility (HPCF) | en_US |
dc.title | Throughput studies on an InfiniBand interconnect via all-to-all communications | en_US |
dc.type | Text | en_US |
Files
Original bundle
- Name: 0d91510844a16ad4605f2775dc04499b881f.pdf
- Size: 390.12 KB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.68 KB
- Format: Item-specific license agreed upon to submission
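
Example (not part of the archived record): a minimal sketch, in C with MPI, of the kind of all-to-all throughput test the abstract describes, where every process exchanges a fixed-size block with every other process via MPI_Alltoall and the collective is timed. The block size BLOCK_DOUBLES and the output format are illustrative assumptions, not values taken from the paper.

/*
 * Minimal sketch of an all-to-all throughput measurement.
 * Each process sends BLOCK_DOUBLES doubles to every other process
 * and rank 0 reports the wall-clock time of the collective.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_DOUBLES 1048576   /* doubles per destination process (assumed value) */

int main(int argc, char *argv[]) {
    int rank, np;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* one block per destination / source process */
    double *sendbuf = malloc((size_t)np * BLOCK_DOUBLES * sizeof(double));
    double *recvbuf = malloc((size_t)np * BLOCK_DOUBLES * sizeof(double));
    for (size_t i = 0; i < (size_t)np * BLOCK_DOUBLES; i++)
        sendbuf[i] = (double)rank;

    MPI_Barrier(MPI_COMM_WORLD);          /* synchronize before timing */
    double t0 = MPI_Wtime();
    MPI_Alltoall(sendbuf, BLOCK_DOUBLES, MPI_DOUBLE,
                 recvbuf, BLOCK_DOUBLES, MPI_DOUBLE, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("np = %d  all-to-all time = %f s\n", np, t1 - t0);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

The sketch uses only basic MPI commands (MPI_Init, MPI_Barrier, MPI_Alltoall, MPI_Wtime, MPI_Finalize), consistent with the abstract's statement that the tests are designed for wide reproducibility.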