
DEEP LEARNING: RECURRENT NEURAL NETWORKS

 

Introduction :

A Recurrent Neural Network (RNN) is a kind of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of one another, but in some tasks, such as predicting the next word in a sentence, the previous words are required, and therefore the network needs a way to remember earlier steps.

Recurrent neural networks solve this problem with a hidden state. The most important feature of an RNN is this hidden state, which stores information about the earlier part of a sequence.

An RNN therefore has a memory that retains information about what has been computed so far. It uses the same parameters for every input, performing the same operation on the hidden state at each step to produce the output. This weight sharing greatly reduces the number of parameters compared with a conventional feedforward network.
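The recurrence described above can be sketched in a few lines of plain Python. This is a minimal, illustrative single step of an RNN cell (the function name and parameter names are my own, not from any library): the same weight matrices are reused at every time step, and the hidden state carries the sequence memory forward.

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One recurrent step, written out explicitly.

    The SAME weights (W_xh, W_hh, b) are applied at every
    time step; only x (current input) and h (hidden state
    carrying memory of earlier steps) change.
    """
    size = len(h)
    h_new = []
    for i in range(size):
        # h_new[i] = tanh( W_xh[i].x  +  W_hh[i].h  +  b[i] )
        s = b[i]
        s += sum(W_xh[i][j] * x[j] for j in range(len(x)))
        s += sum(W_hh[i][j] * h[j] for j in range(size))
        h_new.append(math.tanh(s))  # tanh keeps each unit in (-1, 1)
    return h_new
```

Processing a sequence just means calling this step repeatedly, feeding each returned hidden state into the next call, which is exactly the "output of the previous step becomes input to the current step" behaviour described above.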

 

Advantages of RNN :

  1. An RNN can remember information over time, which makes it useful for time-series prediction, because the model retains a trace of earlier inputs. This ability to hold on to past information is what gated variants such as Long Short-Term Memory (LSTM) networks strengthen.
  2. Recurrent layers can even be combined with convolutional layers to extend the effective pixel neighborhood of a model.

 

Disadvantages of RNN :

  1. RNNs suffer from the vanishing and exploding gradient problems.
  2. Training a recurrent neural network is a difficult procedure.
  3. With activation functions such as tanh or ReLU, an RNN cannot process very long sequences, because the gradient shrinks (or grows) as it is propagated back through many time steps.
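The vanishing/exploding gradient problem from points 1 and 3 can be made concrete with a small numeric sketch. Backpropagation through time multiplies one local gradient factor per time step; with a tanh activation (whose derivative is at most 1) and a recurrent weight, per-step factors below 1 shrink the gradient exponentially, while factors above 1 blow it up. The function below is a simplified scalar illustration, not a full backpropagation implementation.

```python
import math

def tanh_grad(x):
    # Derivative of tanh: 1 - tanh(x)^2, which is always <= 1.
    t = math.tanh(x)
    return 1.0 - t * t

def gradient_through_time(w, pre_acts):
    """Product of per-step gradient factors over a sequence.

    Scalar toy model: each backward step through time
    multiplies the gradient by w * tanh'(pre-activation).
    """
    g = 1.0
    for a in pre_acts:
        g *= w * tanh_grad(a)
    return g

# Factor < 1 at each step: gradient vanishes toward zero.
small = gradient_through_time(0.5, [0.3] * 50)
# Factor > 1 at each step: gradient explodes.
large = gradient_through_time(2.5, [0.0] * 50)
```

This is why plain RNNs struggle with long sequences, and why techniques such as gradient clipping and gated cells (LSTM, GRU) were introduced.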

Recurrent neural networks convert independent activations into dependent ones by applying the same weights at every step, which keeps the number of parameters from growing with the length of the sequence, and they memorize previous outputs by feeding each hidden state back into the network.

Therefore these per-step copies can be merged together, so that all the hidden layers collapse into a single recurrent layer with shared weights.
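Unrolling that single shared recurrent layer over a sequence looks like the loop below. This is a deliberately tiny scalar example (one recurrent unit, invented parameter names) just to show that the same three parameters are applied at every time step:

```python
import math

def run_rnn(sequence, w_x, w_h, b):
    """Unroll a one-unit recurrent layer over a sequence.

    The SAME parameters (w_x, w_h, b) are applied at every
    time step -- the weight sharing described above.
    """
    h = 0.0            # initial hidden state
    states = []
    for x in sequence:
        # Same weights on every iteration; only x and h change.
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = run_rnn([1.0, 0.5, -1.0], w_x=0.8, w_h=0.5, b=0.0)
```

However long the input sequence, the layer still holds only three parameters; only the hidden state grows a history.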

 

CONCLUSION :

Thus, we can see that a recurrent neural network is a class of artificial neural networks in which the connections between nodes form a directed graph along a temporal sequence. It can therefore use its internal state (in other words, its "memory") to process sequences of variable length. RNNs come in both finite impulse and infinite impulse forms, and in both cases they exhibit what is called temporal dynamic behavior. Either form can be extended with additional controlled storage, placing the storage directly under the control of the network itself; gated architectures of this kind include Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs). Because their outputs feed back into the network, RNNs are also known as feedback neural networks.
