Intel Concurrent Collections as a Method for Parallel Programming

dc.contributor.author: Adjogah, Richard
dc.contributor.author: Mckissack, Randal
dc.contributor.author: Sibeudu, Ekene
dc.contributor.author: Raim, Andrew M.
dc.contributor.author: Gobbert, Matthias K.
dc.contributor.author: Craymer, Loring
dc.date.accessioned: 2018-10-18T13:35:02Z
dc.date.available: 2018-10-18T13:35:02Z
dc.date.issued: 2011
dc.description.abstract: Computer hardware has become parallel in order to run faster and more efficiently. One of the current standard parallel coding libraries is MPI (Message Passing Interface). The Intel Corporation is developing a new parallel software and translator called CnC (Concurrent Collections) to make programming in parallel easier. When using MPI, the user has to explicitly send and receive messages to and from different processes with multiple function calls. These functions take numerous arguments, which can be error-prone and cumbersome. CnC uses a system of collections comprising steps, items, and tags to create a graph representation of the algorithm that defines the parallelizable code segments and their dependencies. Instead of manually assigning work to processes as in MPI, the user specifies the work to be done and CnC handles the parallelization automatically. This, in theory, reduces the amount of work the programmer has to do. Our research evaluates whether this new software is efficient and usable when creating parallel code and converting serial code to parallel. To test the difference between the two methods, we implemented benchmark codes in both MPI and CnC and compared the results. We started with a prime number generator provided by Intel as sample code that familiarizes programmers with CnC. Then we moved on to a π approximation, for which we used an MPI sample code that approximates π by numerical integration. We ran it in MPI first, then stripped it of all MPI, ported it to C++, and added our own CnC code. We then ran performance studies to compare the two. Our last two tests involved parameter studies on a variation of the Poisson equation solved by the finite difference method and on a DNA entropy calculation project. We used existing serial code for the two problems and were easily able to create a few new files to run the studies using CnC. The studies ran multiple calls to the problem functions in parallel with varying parameters. These last two tests showcase a clear advantage of CnC over MPI for parallelizing these types of problems. Both the Poisson and the DNA problems showed how useful techniques from parallel computing, together with an intuitive tool such as CnC, can be for application researchers.
dc.description.sponsorship: This research was conducted during Summer 2011 in the REU Site: Interdisciplinary Program in High Performance Computing (www.umbc.edu/hpcreu) in the UMBC Department of Mathematics and Statistics. This program is also supported by UMBC, the Department of Mathematics and Statistics, the Center for Interdisciplinary Research and Consulting (CIRC), and the UMBC High Performance Computing Facility (HPCF). The co-authors Adjogah, Mckissack, and Sibeudu were supported, in part, by a grant to UMBC from the National Security Agency (NSA). The computational hardware in HPCF (www.umbc.edu/hpcf) is partially funded by the National Science Foundation through the MRI program (grant no. CNS-0821258) and the SCREMS program (grant no. DMS-0821311), with additional substantial support from UMBC.
dc.description.uri: https://userpages.umbc.edu/~gobbert/papers/REU2011Team4.pdf
dc.format.extent: 13 pages
dc.genre: Technical Report
dc.identifier: doi:10.13016/M2GX44Z4Z
dc.identifier.uri: http://hdl.handle.net/11603/11594
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Mathematics and Statistics Department
dc.relation.ispartofseries: HPCF Technical Report; HPCF-2011-14
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Message Passing Interface
dc.subject: Concurrent Collections
dc.subject: Parallel Programming
dc.subject: UMBC High Performance Computing Facility (HPCF)
dc.title: Intel Concurrent Collections as a Method for Parallel Programming
dc.type: Text

Files

Original bundle:
- REU2011Team4.pdf (144.04 KB, Adobe Portable Document Format)
- REU2011Team4_code.gz (8.77 KB, unknown data format)

License bundle:
- license.txt (1.68 KB, item-specific license agreed upon at submission)