How to Choose a Feature Selection Method for Machine Learning
Updated on Nov 24, 2022 | 11 min read | 6.7k views
A machine learning model typically uses many features, only a few of which are actually important. Training a model on unnecessary features reduces its accuracy, increases its complexity, and weakens its generalization capability, resulting in a biased model. The saying "sometimes less is better" applies well to machine learning. Many practitioners struggle to identify the relevant features in their data and to discard the irrelevant ones. Features are considered less important when they do not contribute to predicting the target variable.
Therefore, feature selection is one of the important processes in machine learning. The goal is to select the best possible set of features for developing a machine learning model. Feature selection has a huge impact on model performance and, along with data cleaning, should be one of the first steps in model design.
Feature selection in machine learning may be summarized as the process of choosing the subset of features that is most relevant to the problem at hand.
Benefits of Feature Selection
Reducing the feature set brings several benefits:
- Less overfitting and bias, since redundant and irrelevant features are removed
- Better accuracy and generalization of the model
- Shorter training time and lower model complexity
The main objective of feature selection algorithms is to select the best set of features for developing the model. Feature selection methods in machine learning can be classified into supervised and unsupervised methods.
Supervised methods of feature selection in machine learning can be classified into wrapper, embedded, and filter methods.
Wrapper methods evaluate features based on the performance of a learning algorithm trained on them. Often described as greedy algorithms, they train the model iteratively on different subsets of features, with stopping criteria usually defined by the person training the model. Features are added or removed based on the results of the previous training runs. Any type of learning algorithm can be used in this search strategy, and the resulting models are generally more accurate than those obtained with filter methods.
Techniques used in wrapper methods include forward feature selection, backward feature elimination, exhaustive feature selection, and recursive feature elimination (illustrated below).
Figure 4: An example of code showing the recursive feature elimination technique
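The original code figure is not reproduced here. Below is a minimal sketch of recursive feature elimination using scikit-learn's RFE class; the synthetic dataset and the logistic-regression estimator are illustrative assumptions, not part of the original example.

```python
# Minimal sketch: recursive feature elimination (RFE) with scikit-learn
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic dataset with 10 features, 4 of them informative
X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)

# Recursively remove the weakest features until 4 remain
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=4)
rfe.fit(X, y)

print("Selected feature mask:", rfe.support_)
print("Feature ranking (1 = selected):", rfe.ranking_)
```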
The embedded feature selection methods in machine learning have a certain advantage over the filter and wrapper methods: they account for feature interactions while maintaining a reasonable computational cost. Techniques used in embedded methods include LASSO (L1) regularization, ridge (L2) regularization, and tree-based feature importances; a short sketch is given below.
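As a brief illustration of an embedded method, the following sketch uses LASSO (L1) regularization together with scikit-learn's SelectFromModel; the synthetic regression data and the alpha value are illustrative assumptions.

```python
# Minimal sketch: embedded feature selection via L1 (LASSO) regularization
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# Illustrative synthetic data with 10 features, 3 of them informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Features whose LASSO coefficients shrink to (near) zero are discarded
selector = SelectFromModel(Lasso(alpha=0.1))
X_selected = selector.fit_transform(X, y)

print("Kept features:", selector.get_support())
print("Reduced shape:", X_selected.shape)
```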
Filter methods are applied during the pre-processing step. They are fast and inexpensive, and work best for removing duplicated, correlated, and redundant features. Instead of applying a supervised learning algorithm, they evaluate the importance of features based on their inherent characteristics. Their computational cost is lower than that of wrapper methods; however, if there is not enough data to derive reliable statistical correlations between the features, the results may be worse than with wrapper methods. Filter algorithms are therefore preferred for high-dimensional data, where wrapper methods would incur a prohibitive computational cost.
Chi-square test: The chi-square statistic measures the dependence between a categorical feature and the target. The formula for the Chi-square test is
χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ
where Oᵢ is the observed frequency and Eᵢ is the expected frequency under independence; features with higher scores are more strongly related to the target.
Implementation of the Chi-squared algorithm: sklearn, scipy
An example of code for the Chi-square test is sketched below.
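A minimal sketch of chi-squared feature scoring with scikit-learn's SelectKBest is shown below; the Iris dataset and k=2 are illustrative choices (chi2 requires non-negative feature values).

```python
# Minimal sketch: chi-squared feature scoring with scikit-learn
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)

# Keep the two features with the highest chi-squared statistics
selector = SelectKBest(score_func=chi2, k=2)
X_new = selector.fit_transform(X, y)

print("Chi-squared scores:", selector.scores_)
print("Reduced shape:", X_new.shape)
```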
4. CFS (Correlation-based feature selection): The method follows the hypothesis that "features are relevant if their values vary systematically with category membership." Implementation of CFS (Correlation-based feature selection): scikit-feature
Join the AI & ML Courses online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.
5. FCBF (Fast correlation-based filter): Compared to Relief and the CFS method described above, FCBF is faster and more efficient. First, Symmetrical Uncertainty is computed for all features; the features are then ranked by this criterion and redundant features are removed.
Symmetrical Uncertainty is defined as SU(X, Y) = 2 · IG(X | Y) / (H(X) + H(Y)), i.e. twice the information gain of X given Y divided by the sum of their entropies. Implementation of FCBF: skfeature. A sketch of the SU criterion is given below.
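A minimal sketch of the symmetrical uncertainty criterion used by FCBF, computed from scratch for discrete variables with SciPy and scikit-learn; the toy arrays are illustrative.

```python
# Minimal sketch: symmetrical uncertainty SU(X, Y) for discrete variables
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X | Y) / (H(X) + H(Y)), all quantities in nats."""
    ig = mutual_info_score(x, y)           # information gain I(X; Y)
    h_x = entropy(np.bincount(x))          # entropy H(X) estimated from counts
    h_y = entropy(np.bincount(y))          # entropy H(Y) estimated from counts
    return 2.0 * ig / (h_x + h_y)

# Illustrative discrete feature and target
x = np.array([0, 0, 1, 1, 2, 2, 0, 1])
y = np.array([0, 0, 1, 1, 1, 1, 0, 1])
print("SU(x, y) =", symmetrical_uncertainty(x, y))
```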
6. Fisher score: The Fisher ratio (FIR) is defined as the distance between the sample means of each class of a feature, divided by their variances. Each feature is selected independently according to its score under the Fisher criterion, which leads to a suboptimal set of features. A larger Fisher score denotes a better (more discriminative) feature.
The formula for the Fisher score of feature j is
F(j) = Σₖ nₖ (μₖⱼ − μⱼ)² / Σₖ nₖ σₖⱼ²
where nₖ is the number of samples in class k, μₖⱼ and σₖⱼ² are the mean and variance of feature j within class k, and μⱼ is the overall mean of feature j.
Implementation of Fisher score: scikit-feature
A code sketch of the Fisher score technique is shown below.
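A minimal sketch of the Fisher score computed directly with NumPy (the scikit-feature package provides an equivalent implementation); the Iris dataset is used for illustration.

```python
# Minimal sketch: Fisher score computed with NumPy
import numpy as np
from sklearn.datasets import load_iris

def fisher_score(X, y):
    """Per-feature ratio of between-class scatter to within-class scatter."""
    overall_mean = X.mean(axis=0)
    numerator = np.zeros(X.shape[1])
    denominator = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        n_c = Xc.shape[0]
        numerator += n_c * (Xc.mean(axis=0) - overall_mean) ** 2
        denominator += n_c * Xc.var(axis=0)
    return numerator / denominator

X, y = load_iris(return_X_y=True)
print("Fisher scores per feature:", fisher_score(X, y))
```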
Pearson’s Correlation Coefficient: A measure that quantifies the linear association between two continuous variables. The correlation coefficient ranges from -1 to 1; its magnitude indicates the strength and its sign the direction of the relationship between the variables. A short sketch is shown below.
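A minimal sketch of scoring features by Pearson's correlation with the target using scipy.stats.pearsonr; the synthetic data is an illustrative assumption.

```python
# Minimal sketch: Pearson correlation of each feature with the target
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
# Illustrative target that depends mostly on the first two features
y = 2 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Correlation of each feature column with the continuous target
for j in range(X.shape[1]):
    r, p_value = pearsonr(X[:, j], y)
    print(f"feature {j}: r = {r:.3f}, p = {p_value:.3g}")
```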
7. Variance Threshold: Features whose variance does not meet a specified threshold are removed; by default, features with zero variance are dropped. The underlying assumption is that features with higher variance are likely to contain more information.
Figure 15: An example of code showing the implementation of Variance threshold
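The original code figure is not reproduced here. Below is a minimal sketch of the variance-threshold filter with scikit-learn; the toy matrix and the threshold value are illustrative.

```python
# Minimal sketch: variance threshold filter with scikit-learn
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0, 2.0, 0.1],
              [0, 1.0, 0.2],
              [0, 3.0, 0.1],
              [0, 2.0, 0.2]])

# Drop features whose variance does not exceed the threshold
# (the constant first column and the low-variance third column are removed)
selector = VarianceThreshold(threshold=0.05)
X_reduced = selector.fit_transform(X)

print("Variances:", selector.variances_)
print("Reduced shape:", X_reduced.shape)
```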
8. Mean Absolute Difference (MAD): For each feature, the method computes the mean absolute difference of its values from the feature's mean; a higher MAD indicates greater spread and therefore a potentially more informative feature.
An example of code and its output showing the implementation of Mean Absolute Difference (MAD)
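The original code figure is not reproduced here. A minimal sketch of the MAD score per feature, computed with NumPy on illustrative random data, is shown below.

```python
# Minimal sketch: mean absolute difference (MAD) score per feature
import numpy as np

rng = np.random.default_rng(0)
# Three illustrative features with increasing spread
X = rng.normal(loc=0.0, scale=[0.1, 1.0, 5.0], size=(100, 3))

# MAD_j = mean over samples of |x_ij - mean_j|; larger values suggest
# greater spread and therefore potentially more informative features
mad = np.mean(np.abs(X - X.mean(axis=0)), axis=0)
print("MAD per feature:", mad)
```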
9. Dispersion Ratio: The dispersion ratio is defined as the ratio of the arithmetic mean (AM) to the geometric mean (GM) of a given feature. Its value ranges from 1 to +∞, since AM ≥ GM for any feature.
A higher dispersion ratio Rᵢ (the AM-to-GM ratio of feature i) implies a more relevant feature; conversely, an Rᵢ close to 1 indicates a feature of low relevance. A short sketch is shown below.
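A minimal sketch of computing the dispersion ratio per feature with NumPy; the random data is illustrative and must be strictly positive for the geometric mean to be defined.

```python
# Minimal sketch: dispersion ratio R_i = AM_i / GM_i per feature
import numpy as np

rng = np.random.default_rng(0)
# Illustrative strictly positive feature matrix
X = rng.uniform(low=0.5, high=10.0, size=(100, 3))

arithmetic_mean = X.mean(axis=0)
geometric_mean = np.exp(np.log(X).mean(axis=0))

dispersion_ratio = arithmetic_mean / geometric_mean
print("Dispersion ratio per feature:", dispersion_ratio)
```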
Feature selection is one of the most important steps in the development of any machine learning model. Feature selection algorithms reduce the dimensionality of the data by removing features that are not relevant or important to the model under consideration. Keeping only relevant features can shorten training time and yield better-performing models.
If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s Executive PG Program in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.