Road map for Deep Learning

Deep learning is a branch of Artificial Intelligence that tries to mimic the working of the human brain. A deep learning model makes predictions from the data it is fed, decodes complex patterns in that data, and learns to make decisions on its own. After enough training, such a system can keep learning without human supervision.

This article lays out a complete, zero-to-hero path through the field of Deep Learning. AI is growing vaster by the day: demand keeps rising, and there is still so much left to invent in Deep Learning. Because the field is so vast, we need a proper roadmap before we start learning it.

Fully connected Deep Neural Network In Deep Learning:

Fully connected neural networks (FCNNs) are built by repeating neurons and creating connections between them. They are a type of artificial neural network in which every node of one layer is connected to every node of the next, so the input layer is fully connected to the hidden layers. Each neuron may compute something different, via its own weights. The nodes that connect to each other and form the network are commonly known as “neurons”, and the outputs of neurons in one layer act as inputs to the next. The design is loosely inspired by biological neural networks.

FCNNs are commonly used models, but in practice they have some issues with tasks such as image recognition and classification. They are computationally heavy and sometimes prone to overfitting because of noise present in the data. Increasing the number of neurons in a given layer makes the network wide; increasing the number of hidden layers makes it deep.
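To make this concrete, here is a minimal sketch of a fully connected network in PyTorch. The layer sizes are illustrative assumptions, not anything prescribed by the architecture itself:

    import torch
    import torch.nn as nn

    # A minimal fully connected network: every node in one layer
    # connects to every node in the next.
    model = nn.Sequential(
        nn.Linear(784, 128),  # input layer -> first hidden layer
        nn.ReLU(),
        nn.Linear(128, 64),   # adding more layers makes the network "deep"
        nn.ReLU(),            # widening a layer makes it "wide"
        nn.Linear(64, 10),    # output layer, e.g. 10 classes
    )

    x = torch.randn(32, 784)  # a batch of 32 flattened inputs
    logits = model(x)         # forward pass
    print(logits.shape)       # torch.Size([32, 10])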

Common Applications of the FCNNs are:

  • Embedding
  • Classification
  • Regression

Convolutional Neural Networks in Deep Learning:

Convolutional neural networks (CNNs) are, in essence, “neural networks with convolution”. CNNs are widely used in computer vision. A convolution is basically a pattern finder: it transforms the features of an image by sliding filters over it. A convolution layer also needs far fewer parameters than a full matrix multiplication, which makes it smaller and faster.

Modern CNN architectures essentially all originate from the same model, LeNet, in which pooling (or strided convolution) layers follow the convolution layers to downsample the feature maps.
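A minimal LeNet-style sketch in PyTorch may help. The channel counts, kernel sizes, and input shape below are illustrative assumptions:

    import torch
    import torch.nn as nn

    # A LeNet-style sketch: convolution layers act as pattern finders
    # (filters), followed by pooling and fully connected layers.
    model = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5, padding=2),  # filters slide over the image
        nn.ReLU(),
        nn.AvgPool2d(kernel_size=2, stride=2),      # pooling downsamples
        nn.Conv2d(6, 16, kernel_size=5),
        nn.ReLU(),
        nn.AvgPool2d(kernel_size=2, stride=2),
        nn.Flatten(),
        nn.Linear(16 * 5 * 5, 120),
        nn.ReLU(),
        nn.Linear(120, 10),                         # e.g. 10 classes
    )

    x = torch.randn(8, 1, 28, 28)  # batch of 8 grayscale 28x28 images
    print(model(x).shape)          # torch.Size([8, 10])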

Common Applications of the CNNs are:

  • Image classification
  • Object detection
  • Image Segmentation

Recurrent Neural Network:

Recurrent neural networks (RNNs) are, as the name says, repeating neural networks: “recurrent” means repeating. An RNN shines on sequential data, i.e., a stream in which each element depends on the ones before it, because an RNN can remember what happened previously. A CNN, by contrast, does not perform well when the input data is interdependent in a sequential pattern.

In sentence construction, for example, each word is related to and depends on the previous words. An RNN has a loop inside it: at each step it feeds its own output back in as an input to the neuron.

RNNs also suffer from the vanishing gradient problem, and sometimes the exploding gradient problem. Because of these problems, backpropagation cannot reach very far back through time, so the network cannot handle longer sentences or store older states. In short, plain RNNs cannot process long sequences.

Therefore, we need a better, modified RNN, i.e., the LSTM.
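For reference, here is a minimal sketch of a plain RNN in PyTorch. The input, hidden, and sequence sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    # A minimal RNN sketch: the hidden state is fed back in at each
    # step, so the network "remembers" what happened previously.
    rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)

    x = torch.randn(4, 15, 10)  # batch of 4 sequences, 15 steps each
    output, h_n = rnn(x)        # output: hidden state at every step
    print(output.shape)         # torch.Size([4, 15, 32])
    print(h_n.shape)            # torch.Size([1, 4, 32]), final state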

Common Applications of the RNNs are:

  • Time series prediction
  • Pattern Recognition
  • Text Classification

Gated Recurrent Unit In Deep Learning:

The gated recurrent unit (GRU) is a modified version of the recurrent neural network that works on a gating mechanism. A GRU is similar to a long short-term memory (LSTM) cell with a forget gate, but it has fewer parameters than an LSTM because the output gate is missing. GRUs perform similarly to LSTMs on certain tasks, and tend to do better on smaller, less frequent datasets.
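As a sketch, a GRU is used exactly like the plain RNN above; the gating happens inside each cell. The sizes are again illustrative assumptions:

    import torch
    import torch.nn as nn

    # A GRU sketch: same interface as a plain RNN, but each cell
    # contains update and reset gates that control what the hidden
    # state keeps or discards.
    gru = nn.GRU(input_size=10, hidden_size=32, batch_first=True)

    x = torch.randn(4, 15, 10)  # batch of 4 sequences, 15 steps each
    output, h_n = gru(x)
    print(output.shape)         # torch.Size([4, 15, 32])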

Common Applications of the GRUs are:

  • Time series prediction
  • Deep sentiment Analysis
  • Text Classification

Temporal ConvNets:

Temporal convolutional networks (TCNs) are a variation of the CNN for sequence modelling tasks; the name is quite a descriptive label for this family of architectures. TCNs exhibit longer memory than RNNs of the same capacity.

The convolutions in the architecture are causal: there is no information leakage from the future to the past. The model can take a sequence of any length and map it to an output sequence of the same length, just as an RNN can.
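Here is a rough sketch of the causal convolution idea in PyTorch. The CausalConv1d class and all sizes are illustrative assumptions, not a reference TCN implementation:

    import torch
    import torch.nn as nn

    # A causal 1-D convolution: padding only on the left, so the output
    # at time t never sees inputs after t (no leakage from future to
    # past). Dilation widens the receptive field, which is how TCNs
    # get their long memory.
    class CausalConv1d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left padding only
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                                  dilation=dilation)

        def forward(self, x):
            x = nn.functional.pad(x, (self.pad, 0))  # pad the past side
            return self.conv(x)

    layer = CausalConv1d(8, 16, kernel_size=3, dilation=2)
    x = torch.randn(2, 8, 50)  # batch of 2, 8 channels, 50 time steps
    print(layer(x).shape)      # torch.Size([2, 16, 50]), same length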

Common Applications of the Temporal ConvNets are:

  • Time Series Prediction
  • Deep Sentiment Analysis
  • Probabilistic Forecasting

AutoEncoders:

An autoencoder is another type of ANN, used to learn efficient data codings in an unsupervised way. Its task is to learn a compressed representation for a set of data, performing dimensionality reduction by training the network to ignore signal “noise”.
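A minimal autoencoder sketch in PyTorch follows; the 784-to-16 compression and layer sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    # The encoder compresses the input to a small code (dimensionality
    # reduction); the decoder tries to reconstruct the original.
    encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                            nn.Linear(64, 16))   # 784 -> 16 code
    decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                            nn.Linear(64, 784))  # 16 -> 784 back

    x = torch.randn(32, 784)
    code = encoder(x)                        # compressed representation
    recon = decoder(code)                    # reconstruction
    loss = nn.functional.mse_loss(recon, x)  # trained to minimize this
    print(code.shape, loss.item())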

Common Applications of the AutoEncoders are:

  • Noise Reduction 
  • Compression
  • Image Reconstruction

Transformers In Deep Learning:

The Transformer is a deep learning model introduced in 2017, used primarily in the field of NLP. Like RNNs, Transformers are designed to handle sequential data and retain past information over long spans; unlike RNNs, they process the whole sequence at once through attention. They shine on natural language tasks such as translation and text summarization.
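As a sketch, PyTorch's built-in Transformer encoder layers can be stacked like this; the model dimension, head count, and sequence length are illustrative assumptions:

    import torch
    import torch.nn as nn

    # A Transformer encoder sketch: self-attention lets every token
    # attend to every other token in the sequence at once.
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4,
                                       batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)

    x = torch.randn(4, 20, 64)  # batch of 4 sequences, 20 tokens each
    out = encoder(x)            # contextualized representations
    print(out.shape)            # torch.Size([4, 20, 64])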

Common Applications of the Transformers are:

  • Neural-Machine Translation
  • Deep Sentiment Analysis
  • Image Generation

Generative Adversarial Nets:

Generative adversarial networks (GANs) are deep learning models that use real data to generate convincing fake data. Two neural networks are paired off against one another: the first, the generator, produces fake data that tries to reproduce the real data.

The second, the discriminator, tries to differentiate between real and fake data. This process is repeated over and over, so both the generation and the detection of fake data keep improving: the generator learns to create data that looks real, and the discriminator learns to tell real data apart from fake, until the generator produces realistic fake data.
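Here is a bare-bones sketch of one GAN training step in PyTorch. The network sizes, learning rates, and the random stand-in for “real” data are all illustrative assumptions:

    import torch
    import torch.nn as nn

    # Generator G maps noise to fake data; discriminator D scores
    # real vs. fake. The two are trained against each other.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))
    D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Sigmoid())
    bce = nn.BCELoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    real = torch.randn(32, 784)  # stand-in for a batch of real data
    noise = torch.randn(32, 16)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    fake = G(noise).detach()     # detach: don't update G on this step
    loss_d = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make D label fresh fakes as real.
    loss_g = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()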

Common Applications of the GANs are:

  • Image Synthesis
  • DeepFakes 
  • Sound Generation

Graph Neural Networks:

Graph neural networks (GNNs) are connectionist models that capture the dependencies in a graph via message passing between its nodes. Unlike standard ANNs, a GNN retains a state at each node that can represent information from the node's neighbourhood to an arbitrary depth.
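A single message-passing step can be sketched in PyTorch with a dense adjacency matrix. The graph size, the mean-neighbour normalization, and the linear update rule are illustrative assumptions, not a specific GNN variant:

    import torch
    import torch.nn as nn

    # One message-passing step: each node averages its neighbours'
    # features, then a shared linear layer updates the node state.
    num_nodes, feat_dim = 5, 8
    x = torch.randn(num_nodes, feat_dim)  # node features
    adj = torch.randint(0, 2, (num_nodes, num_nodes)).float()
    adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize

    update = nn.Linear(feat_dim, feat_dim)

    messages = adj @ x                    # gather neighbour information
    x_new = torch.relu(update(messages))  # update each node's state
    print(x_new.shape)                    # torch.Size([5, 8])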

Common Applications of the GNNs are:

  • Recommender Systems
  • Social Networks
  • Relationship Modelling

Written By: Sachin Yadav

Reviewed By: Vikas Bhardwaj

