Explain the steps to solve a regression problem using Keras in a neural network.

Ans: To solve a regression problem, we build a neural network in Keras, i.e. one where our dependent variable (y) is continuous (interval format) and we are trying to predict the value of y as accurately as possible.

For this purpose, we need TensorFlow installed on the system.

Steps to solve a regression problem using Keras:

1) Import libraries

2) Set the working directory

3) Normalize the variables so that the neural network can interpret them properly.

4) Split the data into training and testing samples

5) Train the neural network

6) Use a linear activation function to produce the output.

7) Minimize the error to improve accuracy.

8) Fit the model.
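The import step might look like the following sketch. The original screenshot is not reproduced here, so this is an assumed set of imports: pandas for data handling and scikit-learn for preprocessing, alongside Keras itself.

```python
# Illustrative imports for a Keras regression workflow (not from the
# original screenshot; pandas/scikit-learn usage is an assumption)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras
from tensorflow.keras import layers
```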

 

For example, let us use a cars dataset with the following features:

Age

Gender

Average miles driven per day

Personal debt

Monthly income

Set the working directory

 


 
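A sketch of this step, covering setting the directory, loading the data, and normalizing the variables. The file path, file name, and column values below are hypothetical stand-ins, since the original screenshot is not available:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical directory and file name -- replace with your own:
# import os
# os.chdir("/path/to/data")
# df = pd.read_csv("cars.csv")

# For illustration, build a small frame with the features described above
df = pd.DataFrame({
    "age": [25, 40, 31, 52],
    "gender": [0, 1, 1, 0],
    "avg_miles_per_day": [30.0, 12.5, 45.2, 8.0],
    "personal_debt": [5000, 12000, 800, 300],
    "monthly_income": [3000, 5200, 4100, 6800],
})

# Normalize every variable to the [0, 1] range so the neural network
# can interpret the features on a common scale
scaler = MinMaxScaler()
scaled = scaler.fit_transform(df)
```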

The data is then split into training and test sets.


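A sketch of the split using scikit-learn's `train_test_split`; the synthetic arrays and the 70/30 split ratio are illustrative, not taken from the original screenshot:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the scaled features (X) and continuous target (y)
X = np.random.rand(100, 5)
y = np.random.rand(100)

# Hold out 30% of the rows as the test sample (ratio is illustrative)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
```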
The mean_squared_error (mse) and mean_absolute_error (mae) are our loss functions – i.e. estimates of how accurately the neural network predicts the target. With validation_split set to 0.2, 80% of the training data is used to train the model, while the remaining 20% is used for validation.


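A minimal sketch of such a network, compiled with mse as the loss and mae as an additional metric. The layer sizes here are assumptions for illustration; the output layer uses a linear activation, as step 6 requires:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Layer sizes are illustrative, not from the original screenshots
model = keras.Sequential([
    keras.Input(shape=(5,)),               # 5 input features
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="linear"),  # linear activation for regression
])

# mse is the loss being minimized; mae is tracked as an extra metric
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

preds = model.predict(np.random.rand(3, 5), verbose=0)
```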
From the output, we can see that the more epochs are run, the lower our MSE and MAE become, indicating that accuracy improves with each iteration of the model.

Neural Network Output

Let’s now fit our model.


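The fit call behind the missing screenshot might look like the following sketch, with 150 epochs and validation_split=0.2 as described in the text. The data and model definition here are synthetic stand-ins:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-ins for the scaled training data
X_train = np.random.rand(100, 5)
y_train = np.random.rand(100)

model = keras.Sequential([
    keras.Input(shape=(5,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# 150 epochs as in the text; validation_split=0.2 holds out 20% of the
# training rows, so Keras reports both loss and val_loss each epoch
history = model.fit(
    X_train, y_train,
    epochs=150,
    validation_split=0.2,
    verbose=0,
)
```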
Here, we can see that Keras calculates both the training loss and the validation loss, i.e. the deviation between the predicted y and the actual y as measured by the mean squared error.

As you can see, we have specified 150 epochs for our model. This means that we train the model over 150 forward and backward passes, with the expectation that the loss decreases with each epoch, i.e. that the model predicts the value of y more accurately as training continues.
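Once training finishes, the model can be checked against the held-out test sample. The original tutorial does not show this step, so the following is a hedged sketch with synthetic data and a small epoch count for brevity:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-ins for the scaled train/test data
X_train, y_train = np.random.rand(100, 5), np.random.rand(100)
X_test, y_test = np.random.rand(30, 5), np.random.rand(30)

model = keras.Sequential([
    keras.Input(shape=(5,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=10, verbose=0)

# evaluate() returns the test-set mse (loss) and mae (metric);
# predict() yields the model's estimate of y for each test row
test_mse, test_mae = model.evaluate(X_test, y_test, verbose=0)
predictions = model.predict(X_test, verbose=0)
```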