
Linear Discriminant Analysis


Linear Discriminant Analysis, usually abbreviated LDA, is a dimensionality reduction technique generally used as a preprocessing step for supervised learning problems. In the two-class, two-dimensional case, it combines the original X and Y axes into a single new axis and projects the data onto that axis in such a way that the separation between the two categories is maximized, thereby reducing the 2D scatter plot to a 1D representation. More generally, the goal of Linear Discriminant Analysis is to project features from a higher-dimensional space onto a lower-dimensional space.
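
To make this concrete, here is a minimal sketch using scikit-learn's LinearDiscriminantAnalysis on synthetic two-class data (the dataset and variable names are illustrative assumptions, not part of the method itself):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Two synthetic 2-D classes with well-separated means
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),
                   rng.normal([3, 3], 1.0, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    # Project the 2-D data onto a single discriminant axis (2D -> 1D)
    lda = LinearDiscriminantAnalysis(n_components=1)
    X_1d = lda.fit_transform(X, y)   # shape: (100, 1)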

Conditions used by LDA to create a new axis

  1. Maximize the distance between means of the two classes.
  2. Minimize the variation within each class.


Put simply, the newly generated axis maximizes the separation between the data points of the two classes. Once this axis has been generated, all the data points of both classes are projected onto it.


Linear Discriminant Analysis can be carried out using the following steps.

Step 1

The first step is to calculate the separability between the different classes, i.e. the distance between the means of the different classes, also called the between-class variance. It is represented by Sb and can be computed as

Sb = Σᵢ Nᵢ (μᵢ − μ)(μᵢ − μ)ᵀ,

where μᵢ is the mean of class i, Nᵢ is the number of samples in class i, and μ is the overall mean.

Step 2

The second step is to calculate the distance between the mean of each class and the samples of that class, which is called the within-class variance. It is represented by Sw and can be computed with the formula below:

Sw = Σᵢ Σ_{x in class i} (x − μᵢ)(x − μᵢ)ᵀ

Step 3

The third step is to construct the lower-dimensional space that maximizes the between-class variance (Sb) and minimizes the within-class variance (Sw). The projection P onto this lower-dimensional space is found by maximizing Fisher's criterion, computed with the formula below:

P = arg max_P ( |Pᵀ Sb P| / |Pᵀ Sw P| )
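
The three steps above can be sketched directly in NumPy. This is a minimal illustration, assuming a data matrix X of shape (n_samples, d) and a label vector y; the helper name fisher_projection is our own:

    import numpy as np

    def fisher_projection(X, y):
        """Compute Sb, Sw, and the directions maximizing Fisher's criterion."""
        classes = np.unique(y)
        overall_mean = X.mean(axis=0)
        d = X.shape[1]
        Sb = np.zeros((d, d))   # between-class variance
        Sw = np.zeros((d, d))   # within-class variance
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            diff = (mc - overall_mean).reshape(-1, 1)
            Sb += len(Xc) * diff @ diff.T
            Sw += (Xc - mc).T @ (Xc - mc)
        # The directions maximizing |P^T Sb P| / |P^T Sw P| are the
        # leading eigenvectors of Sw^{-1} Sb
        eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
        order = np.argsort(eigvals.real)[::-1]
        return Sb, Sw, eigvecs.real[:, order]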

For performing Linear Discriminant Analysis, there are five general steps, which are explained below (a short code sketch follows the list).

  1. Calculate the d-dimensional mean vectors for the different classes from the dataset.
  2. Calculate the between-class and within-class scatter matrices.
  3. Calculate the eigenvectors (e1, e2, …, ed) and their corresponding eigenvalues (λ1, λ2, …, λd) for the scatter matrices computed above.
  4. Sort the eigenvectors in descending order of their eigenvalues and choose the k eigenvectors with the largest eigenvalues to form a d×k matrix W, in which every column represents an eigenvector.
  5. Use this d×k eigenvector matrix W to project the samples onto the new subspace.
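
Steps 4 and 5 then amount to stacking the top k eigenvectors into the matrix W and projecting the samples. Continuing the sketch above (this reuses X, y, and fisher_projection from the earlier snippets):

    # Keep the k leading eigenvectors as the projection matrix W
    Sb, Sw, eigvecs = fisher_projection(X, y)
    k = 1                      # for c classes, at most c - 1 useful directions
    W = eigvecs[:, :k]         # d x k matrix; each column is an eigenvector
    X_proj = X @ W             # samples projected onto the new subspace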


However, the Linear Discriminant Analysis technique fails when the class distributions share the same mean, since it then becomes impossible for LDA to find a new axis that makes the classes linearly separable. In that case, a non-linear discriminant analysis can be used instead.
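
As a sketch of this failure mode, consider two classes that share the mean (0, 0) but differ in covariance; scikit-learn's QuadraticDiscriminantAnalysis stands in here as one possible non-linear alternative (the data is synthetic and illustrative):

    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)

    rng = np.random.default_rng(1)
    # Both classes are centred at (0, 0); only their spreads differ
    X = np.vstack([rng.normal(0.0, 0.5, size=(200, 2)),
                   rng.normal(0.0, 3.0, size=(200, 2))])
    y = np.array([0] * 200 + [1] * 200)

    for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
        print(type(model).__name__, model.fit(X, y).score(X, y))
    # LDA scores near chance level here, while QDA separates the classes
    # by exploiting their different covariances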


Difference between Linear Discriminant Analysis and Principal Component Analysis

  • PCA does feature classification, whereas LDA does data classification.
  • In PCA, the shape and location of the original dataset change when it is transformed into the new dimensions, whereas in LDA the shape and location do not change after the transformation.
  • PCA is an unsupervised learning algorithm: it ignores the class labels and concentrates on finding the principal components that maximize the variance in the dataset. LDA is a supervised learning algorithm: it calculates the directions that maximize the separation between multiple classes.
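
The contrast can be seen in a small sketch (again on synthetic, illustrative data): PCA is fitted without the labels, while LDA requires them:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),
                   rng.normal([3, 1], 1.0, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    # PCA ignores y and keeps the direction of maximum overall variance
    X_pca = PCA(n_components=1).fit_transform(X)
    # LDA uses y and keeps the direction of maximum class separation
    X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)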


Different Applications of Linear Discriminant Analysis

  • Customer recognition
  • Face recognition 
  • Medical uses
  • Predictions in decision making