PARALLEL FEATURE SELECTION OF MULTIPLE CLASS DATASETS USING APACHE SPARK

Author/Creator

Author/Creator ORCID

Date

2017-01-01

Type of Work

Department

Information Systems

Program

Information Systems

Citation of Original Publication

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.

Abstract

Feature selection is the task of selecting a small subset of the original features that achieves maximum classification accuracy. Such a subset has several important benefits: it reduces the computational complexity of learning algorithms, saves time, improves accuracy, and the selected features can be insightful for people working in the problem domain. This makes feature selection an indispensable step in classification. In this thesis, we present a two-phase approach to feature selection. In the first phase, a batch-based Minimum Redundancy and Maximum Relevance (mRMR) algorithm is used with "correlation coefficient" and "mutual information" as statistical measures of similarity. This phase improves classification performance by removing redundant and unimportant features. In the second phase, we present a streaming, tree-based feature selection method that allows dynamic generation and selection of features, while taking advantage of the different feature classes and the fact that they are of different sizes and contain different fractions of good features. Experimental results show that this phase is computationally less expensive than comparable "batch" methods that do not take advantage of the feature classes and expect all features to be known in advance.
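To make the first phase concrete, below is a minimal sketch of greedy mRMR selection using the absolute Pearson correlation coefficient as the similarity measure, one of the two measures named in the abstract. The function name `mrmr_corr` and the use of the difference (relevance minus redundancy) form of the criterion are illustrative assumptions, not the author's exact implementation, and this single-machine NumPy version omits the Spark parallelization described in the thesis.

```python
import numpy as np

def mrmr_corr(X, y, k):
    """Greedy mRMR selection with |Pearson correlation| as the similarity measure.

    Relevance  of feature j = |corr(X[:, j], y)|
    Redundancy of feature j = mean |corr(X[:, j], X[:, s])| over selected s
    Score      of feature j = relevance - redundancy (difference criterion;
    an illustrative choice, not necessarily the thesis's exact variant)
    """
    n_features = X.shape[1]
    # Relevance of each feature to the class labels.
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    # Start with the single most relevant feature.
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Average similarity to the already-selected features.
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

On a small example where one feature is an exact rescaling of another, the redundancy term steers the second pick away from the duplicate toward a complementary feature, which is the behavior mRMR is designed to produce.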