3 Important Neural Network Architectures Explained

1. Perceptron

The perceptron is the most basic of all neural networks and a fundamental building block of more complex architectures. It simply connects an input cell to an output cell.
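As a minimal sketch, a perceptron computes a weighted sum of its inputs plus a bias and passes the result through a step function. The weights and bias below are hand-picked for illustration (here, to behave like an AND gate); they are not from any particular trained model.

```python
import numpy as np

def perceptron(x, w, b):
    """Classic perceptron: weighted sum of inputs plus a bias,
    passed through a step activation function."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights/bias chosen so the perceptron acts as an AND gate.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))
```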

2. Feed-Forward Network

The feed-forward network is a collection of perceptrons arranged in three fundamental types of layers: input layers, hidden layers, and output layers. At each connection, the signal from the previous layer is multiplied by a weight, added to a bias, and passed through an activation function. Feed-forward networks use backpropagation to iteratively update the parameters until the network achieves desirable performance.
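The forward pass described above can be sketched in a few lines of NumPy. The layer sizes and random initialization here are purely illustrative, and ReLU is used throughout for simplicity (in practice the output layer often uses a different activation):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, layers):
    """One forward pass: at each layer, multiply by the weights,
    add the bias, and apply the activation function."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Example: 3 inputs -> hidden layer of 4 units -> 2 outputs,
# with randomly initialized parameters (illustrative only).
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # input -> hidden
    (rng.standard_normal((2, 4)), np.zeros(2)),  # hidden -> output
]
print(forward(rng.standard_normal(3), layers))
```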

3. Residual Networks (ResNet)

One issue with deep feed-forward neural networks is the vanishing gradient problem: as the error signal that updates the parameters propagates back through a long network, it gradually diminishes, until the weights in the earliest layers are barely updated at all.

A ResNet employs skip connections, which propagate the signal across a ‘jumped’ layer. This mitigates the vanishing gradient problem by providing shorter paths that are less vulnerable to it. Over time, the network learns to make use of the skipped layers again as it learns the feature space.
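The core idea, output = F(x) + x, can be shown in a small NumPy sketch. This is a simplified residual block under illustrative assumptions (random weights, input and output of equal size so the addition is valid), not a full ResNet layer with convolutions and batch normalization:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def residual_block(x, W1, W2):
    """Residual block: the input x skips over two weighted layers
    and is added back to their output, so the signal can flow
    through the identity path even when the layers' gradients shrink."""
    out = relu(W1 @ x)
    out = W2 @ out
    return relu(out + x)  # skip connection: F(x) + x

# Example: a block that preserves dimensionality, which is required
# so that x and F(x) can be added element-wise.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
print(residual_block(x, W1, W2))
```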
