Provenance for MapReduce-based data-intensive workflows
Links to Files
Author/Creator
Author/Creator ORCID
Date
Department
Program
Citation of Original Publication
Crawl, Daniel, Jianwu Wang, and Ilkay Altintas. “Provenance for MapReduce-Based Data-Intensive Workflows.” In Proceedings of the 6th Workshop on Workflows in Support of Large-Scale Science, 21–30. WORKS ’11. New York, NY, USA: Association for Computing Machinery, 2011. https://doi.org/10.1145/2110497.2110501.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
MapReduce has been widely adopted by many business and scientific applications for data-intensive processing of large datasets. There are increasing efforts for workflows and systems to work with the MapReduce programming model and the Hadoop environment, including our work on a higher-level programming model for MapReduce within the Kepler Scientific Workflow System. However, to date, the provenance of MapReduce-based workflows, and its effect on workflow execution performance, has not been studied in depth. In this paper, we present an extension to our earlier work on MapReduce in Kepler to record the provenance of MapReduce workflows created using the Kepler+Hadoop framework. In particular, we present: (i) a data model that can capture provenance inside a MapReduce job as well as the provenance of the workflow that submitted it; (ii) an extension to the Kepler+Hadoop architecture to record provenance using this data model on MySQL Cluster; (iii) a programming interface to query the collected information; and (iv) an evaluation of the scalability of collecting and querying this provenance information using two scenarios with different characteristics.