Intel Concurrent Collections as a Method for Parallel Programming

Date

2011

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.

Abstract

Computer hardware has become parallel in order to run faster and more efficiently. One of the current standard parallel programming libraries is MPI (Message Passing Interface). Intel is developing a new parallel programming model and translator called CnC (Concurrent Collections) to make parallel programming easier. With MPI, the user must explicitly send and receive messages to and from different processes through multiple function calls, each taking numerous arguments; this is tedious and error-prone. CnC instead uses a system of step, item, and tag collections to create a graph representation of the algorithm that defines the parallelizable code segments and their dependencies. Rather than manually assigning work to processes as in MPI, the user specifies the work to be done and CnC handles the parallelization automatically, which in theory reduces the programmer's workload. Our research evaluates whether this new software is efficient and usable for creating parallel code and for converting serial code to parallel. To compare the two methods, we implemented benchmark codes with both MPI and CnC and compared the results. We started with a prime number generator that Intel provides as sample code to familiarize programmers with CnC. We then moved on to a π approximation, starting from an MPI sample code that approximates π by numerical integration: we ran it in MPI first, then stripped it of all MPI calls, ported it to C++, added our own CnC code, and ran performance studies comparing the two versions. Our last two tests were parameter studies, one on a variation of the Poisson equation solved by the finite difference method and one on a DNA entropy calculation. Starting from existing serial code for both problems, we were easily able to create a few new files that let CnC run the studies, executing many calls to the problem functions in parallel with varying parameters. These last two tests showcase a clear advantage of CnC over MPI for parallelizing problems of this type. Both the Poisson and the DNA studies show how useful techniques from parallel computing, delivered through an intuitive tool such as CnC, can be in helping application researchers.
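
As a rough illustration of the MPI verbosity the abstract describes (not code from the study itself), even a single point-to-point exchange requires several calls, each with a long argument list:

```cpp
// Illustrative sketch only: a minimal MPI point-to-point exchange.
// Each call takes a buffer, count, datatype, rank, message tag, and
// communicator -- the argument bookkeeping the abstract refers to.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double value = 3.14;
    if (rank == 0) {
        // buffer, count, datatype, destination, tag, communicator
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        // buffer, count, datatype, source, tag, communicator, status
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        std::printf("rank 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}
```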
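The abstract does not specify which MPI sample was used for the π test; a common one integrates 4/(1+x²) over [0,1] with the midpoint rule, since that integral equals π. A minimal serial sketch of that numerical core, under the assumption that this is the kernel meant:

```cpp
// Hedged sketch: midpoint-rule approximation of pi via the identity
// pi = integral over [0,1] of 4/(1+x^2) dx, as in common MPI pi samples.
#include <cstdio>

int main() {
    const int n = 1000000;    // number of subintervals (assumed value)
    const double h = 1.0 / n; // subinterval width
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = h * (i + 0.5);   // midpoint of subinterval i
        sum += 4.0 / (1.0 + x * x); // integrand at the midpoint
    }
    std::printf("pi ~= %.12f\n", h * sum);
    return 0;
}
```

In the parallel version, the loop over subintervals is split across processes (or CnC steps) and the partial sums are combined at the end.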
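A hedged sketch of how the step, item, and tag collections might be wired up in Intel's CnC C++ API for a parameter study like the Poisson and DNA tests; the collection names and the problem function run_study are hypothetical stand-ins, not taken from the project's code:

```cpp
// Hedged sketch of the Intel CnC C++ API (cnc/cnc.h); names here,
// including run_study(), are hypothetical illustrations.
#include <cnc/cnc.h>

struct study_context;

// A step: one run of the problem function for one parameter value.
struct study_step {
    int execute(const int& tag, study_context& ctx) const;
};

struct study_context : public CnC::context<study_context> {
    CnC::step_collection<study_step>  steps;   // the parallelizable work
    CnC::tag_collection<int>          params;  // tags: parameter indices
    CnC::item_collection<int, double> results; // items: one result per tag

    study_context() : steps(*this), params(*this), results(*this) {
        params.prescribes(steps, *this); // each tag put spawns one step
    }
};

double run_study(int param); // hypothetical serial problem function

int study_step::execute(const int& tag, study_context& ctx) const {
    ctx.results.put(tag, run_study(tag));
    return CnC::CNC_Success;
}

int main() {
    study_context ctx;
    for (int p = 0; p < 100; ++p) ctx.params.put(p); // declare the work
    ctx.wait(); // CnC schedules and executes the steps in parallel
    return 0;
}
```

Note how the pattern matches the abstract's description: the user only declares the work (one tag per parameter), and the runtime decides how to distribute the step executions across cores.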