Authors: Neyra, Jorge; Siramshetty, Vishal B.; Ashqar, Huthaifa
Date accessioned: 2024-12-11
Date available: 2024-12-11
Date issued: 2024-11-08
DOI: https://doi.org/10.48550/arXiv.2411.05937
Handle: http://hdl.handle.net/11603/37105
Abstract: This study examines the effect that different feature selection methods have on models created with XGBoost, a popular machine learning algorithm with strong regularization. It shows that three different methods for reducing the dimensionality of the feature set produce no statistically significant change in the model's prediction accuracy. This suggests that the traditional practice of removing noisy training features to keep models from overfitting may not apply to XGBoost, although it may still be worthwhile as a way to reduce computational complexity.
Extent: 11 pages
Language: en-US
License: Attribution 4.0 International (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Subjects: Computer Science - Information Retrieval; Computer Science - Machine Learning
Title: The effect of different feature selection methods on models created with XGBoost
Type: Text