Elastic Net Regression

Elastic Net Regression is the third type of regularization technique. It came into existence due to a limitation of Lasso regression: when features are highly correlated, Lasso tends to arbitrarily select one of them and discard the rest, and it cannot select more features than there are observations. The solution to this problem is to combine the penalties of both Ridge regression and Lasso regression.

Hence, Elastic Net Regression overcomes the limitations of both Ridge Regression and Lasso Regression. The penalized loss minimized by Elastic Net Regression is:

Loss = Σᵢ (yᵢ − ŷᵢ)² + λ [ (1 − 𝞪) Σⱼ βⱼ² + 𝞪 Σⱼ |βⱼ| ]

By using Elastic Net regression, we can choose a value for lambda and, in addition, tune the mixing parameter alpha (𝞪). If alpha (𝞪) = 0, the penalty reduces to the L2 (ridge) term and the model corresponds to Ridge regression. If alpha (𝞪) = 1, the penalty reduces to the L1 (lasso) term and the model corresponds to Lasso regression.

Therefore, we can choose an alpha value between 0 and 1 to balance the two penalties. This shrinks some coefficients and sets others exactly to zero, giving sparse feature selection.
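As a minimal sketch of this behavior, the snippet below uses scikit-learn's ElasticNet (the text names no library, so this choice is an assumption). Note that scikit-learn's naming differs from the text: its `l1_ratio` parameter plays the role of the alpha (𝞪) above, while its `alpha` parameter is the overall penalty strength (the lambda above).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

# Synthetic data: 20 features, only 5 of which actually drive the target
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# l1_ratio is the mixing parameter (the text's alpha):
#   l1_ratio = 0 -> pure L2 (ridge) penalty
#   l1_ratio = 1 -> pure L1 (lasso) penalty
# alpha here is the overall penalty strength (the text's lambda).
model = ElasticNet(alpha=1.0, l1_ratio=0.5, random_state=0)
model.fit(X, y)

# With the mixed penalty, some coefficients are shrunk and some may be
# set exactly to zero (sparse selection)
n_zero = int(np.sum(model.coef_ == 0))
print(f"{n_zero} of {model.coef_.size} coefficients are exactly zero")
```

Setting `l1_ratio` strictly between 0 and 1 keeps the grouping effect of the ridge penalty while retaining the lasso penalty's ability to zero out coefficients.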

Finally, performing Elastic Net regression requires tuning both parameters to identify the best alpha and lambda. Tuning the model involves iterating over a number of (alpha, lambda) pairs and evaluating which pair has the lowest associated cross-validated error.
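The iteration over (alpha, lambda) pairs described above can be sketched with scikit-learn's ElasticNetCV, which cross-validates a grid of penalty strengths for each mixing value and keeps the pair with the lowest error (again, the library choice and the parameter grid are assumptions for illustration).

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# For each candidate l1_ratio (the text's alpha), ElasticNetCV fits a path
# of 50 penalty strengths (the text's lambda) and scores each pair by
# 5-fold cross-validated error, keeping the best-performing combination.
cv_model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.7, 0.9, 0.95, 1.0],
                        n_alphas=50, cv=5, random_state=0)
cv_model.fit(X, y)

print("best mixing value (text's alpha):", cv_model.l1_ratio_)
print("best penalty strength (text's lambda):", cv_model.alpha_)
```

The same search could be done manually with a double loop over alpha and lambda values, but ElasticNetCV reuses the regularization path for each mixing value, which is considerably faster.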