Feature Engineering & Feature Selection
A comprehensive guide [pdf] [markdown] for Feature Engineering and Feature Selection, with implementations and examples in Python.
Feature Engineering & Selection is the most essential part of building a usable machine learning project, even with hundreds of cutting-edge machine learning algorithms appearing these days, such as deep learning and transfer learning. Indeed, as Prof. Domingos, the author of 'The Master Algorithm', puts it:
“At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used.”
— Prof. Pedro Domingos
Data and features have the greatest impact on an ML project and set the limit of how well we can do, while models and algorithms merely approach that limit. However, few materials systematically introduce the art of feature engineering, and even fewer explain the rationale behind it. This repo contains my personal notes from learning ML and serves as a reference for Feature Engineering & Selection.
Download the PDF here:
Same, but in markdown:
The PDF has a much more readable format, while the Markdown version has auto-generated anchor links for navigating from outside sources. GitHub does a poor job of displaying markdown with complex syntax, so I would suggest reading the PDF, or downloading the repo and reading the markdown with Typora.
What You'll Learn
Not only a collection of hands-on functions, but also an explanation of why, how, and when to adopt which feature engineering techniques in data mining.
- the nature and risks of the data problems we often encounter
- explanations of the various feature engineering & selection techniques
- the rationale for using each of them
- the pros & cons of each method
- code & examples
This repo is mainly intended as a reference for anyone doing feature engineering, and most of the modules are implemented with scikit-learn or packages from its community.
To run the demos or use the customized functions, download the ZIP file from the repo, or just copy-paste any part of the code you find helpful. It should all be very easy to understand.
- Python 3.5, 3.6 or 3.7
Table of Contents and Code Examples
Below is a list of methods currently implemented in the repo.
1. Data Exploration
- 1.1 Variables
- 1.2 Variable Identification
- 1.3 Univariate Analysis
- 1.4 Bi-variate Analysis
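As a taste of what the Data Exploration chapter covers, here is a minimal sketch using pandas on a hypothetical toy dataset (the column names and values are made up for illustration, not taken from the repo's demos):

```python
import pandas as pd

# Hypothetical toy dataset, purely for illustration
df = pd.DataFrame({
    "age": [22, 35, 58, 41, 35],
    "city": ["NY", "SF", "NY", "LA", "SF"],
    "income": [30_000, 85_000, 120_000, 72_000, 90_000],
})

# 1.2 Variable identification: separate numerical from categorical columns
num_cols = df.select_dtypes(include="number").columns.tolist()
cat_cols = df.select_dtypes(exclude="number").columns.tolist()

# 1.3 Univariate analysis: summary statistics for each numerical variable
summary = df[num_cols].describe()

# 1.4 Bi-variate analysis: pairwise correlation between numerical variables
corr = df[num_cols].corr()
```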
2. Feature Cleaning
- 2.1 Missing Values
- 2.2 Outliers
- 2.3 Rare Values
- 2.4 High Cardinality
- Grouping Labels with Business Understanding [guide]
- Grouping Labels with Rare Occurrence into One Category [guide] [demo]
- Grouping Labels with Decision Tree [guide] [demo]
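To illustrate one of the high-cardinality strategies above (grouping labels with rare occurrence into one category), here is a minimal sketch on a hypothetical categorical column; the 20% frequency threshold and the `"rare"` label are arbitrary choices for the example:

```python
import pandas as pd

# Hypothetical high-cardinality categorical column
s = pd.Series(["NY", "NY", "NY", "SF", "SF", "LA", "Boise", "Reno"])

# Keep labels covering at least 20% of rows; group the rest as 'rare'
freq = s.value_counts(normalize=True)
frequent = freq[freq >= 0.20].index
grouped = s.where(s.isin(frequent), other="rare")
```

After grouping, the column has only three distinct labels ("NY", "SF", "rare") instead of five, which keeps downstream encodings compact.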
3. Feature Engineering
- 3.1 Feature Scaling
- 3.2 Discretize
- 3.3 Feature Encoding
- 3.4 Feature Transformation
- 3.5 Feature Generation
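A minimal sketch of two of the techniques listed above, feature scaling (3.1) and feature encoding (3.3), using scikit-learn transformers; the input arrays are made up for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Hypothetical numerical and categorical inputs
X_num = np.array([[1.0], [2.0], [3.0]])
X_cat = np.array([["red"], ["blue"], ["red"]])

# 3.1 Feature scaling: standardize to zero mean, unit variance
X_scaled = StandardScaler().fit_transform(X_num)

# 3.3 Feature encoding: one-hot encode a categorical column
# (.toarray() converts the default sparse output to a dense matrix)
X_encoded = OneHotEncoder().fit_transform(X_cat).toarray()
```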
4. Feature Selection
- 4.1 Filter Method
- 4.2 Wrapper Method
- 4.3 Embedded Method
- 4.4 Feature Shuffling
- 4.5 Hybrid Method
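As a flavor of the filter method (4.1), here is a minimal sketch with scikit-learn's `SelectKBest` on synthetic data; the sample size, feature count, and ANOVA F-score criterion are example choices, not the repo's exact setup:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data: 10 features, only 3 of them informative
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# 4.1 Filter method: keep the 3 features with the highest ANOVA F-score,
# ranked independently of any downstream model
selector = SelectKBest(score_func=f_classif, k=3)
X_selected = selector.fit_transform(X, y)
mask = selector.get_support()  # boolean mask of the kept features
```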
Key Links and Resources
- Udemy's Feature Engineering online course
- Udemy's Feature Selection online course
- JMLR Special Issue on Variable and Feature Selection
- Data Analysis Using Regression and Multilevel/Hierarchical Models, Chapter 25: Missing data
- Data mining and the impact of missing data
- PyOD: A Python Toolkit for Scalable Outlier Detection
- Weight of Evidence (WoE) Introductory Overview
- About Feature Scaling and Normalization
- Feature Generation with RF, GBDT and Xgboost
- A review of feature selection methods with applications