
Optimizers
Optimizers define how a neural network learns: they adjust the values of the parameters during training so that the loss function reaches its lowest value.
Gradient descent is an optimization algorithm for finding the minimum of a function, in our case the cost function, which is exactly what we want to minimize. To find a local minimum, we repeatedly take steps proportional to the negative of the gradient of the cost function at the current point.
Let's go through a very simple example in one dimension, shown in the following plot:
On the y axis, we have the cost (the output of the cost function), and on the x axis, we have the particular weight we are trying to choose (initialized to a random value). The weight that minimizes the cost function corresponds to the bottom of the parabola, so our goal is to move the weight toward that point. Finding the minimum is really simple in one dimension, but in our case we have many more parameters and we can't do this visually. Instead, we use linear algebra and a deep learning library to find the parameters that minimize the cost function.
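To make this concrete, here is a minimal sketch of gradient descent in one dimension, similar in spirit to the parabola described above. The cost function, learning rate, and starting weight are illustrative assumptions, not values from the text:

```python
# Illustrative 1D gradient descent: the cost is a parabola with its
# minimum at w = 3, and we step in the direction of the negative gradient.

def cost(w):
    return (w - 3) ** 2          # parabola whose lowest point is at w = 3

def gradient(w):
    return 2 * (w - 3)           # derivative of the cost with respect to w

w = 10.0                         # an arbitrary starting weight
learning_rate = 0.1              # assumed step size

for step in range(50):
    w -= learning_rate * gradient(w)   # step proportional to the negative gradient

print(f"weight after training: {w:.4f}, cost: {cost(w):.6f}")
# The weight converges toward 3, the bottom of the parabola.
```

Each iteration moves the weight a little further downhill, and the size of the step shrinks as the slope flattens near the minimum.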
Now, let's see how we can efficiently adjust the parameters, or weights, across our entire network toward their optimal values. This is where we need backpropagation.
Backpropagation is used to calculate the error contribution of each neuron after a batch of data has been processed. It relies heavily on the chain rule to work backward through the network and compute these errors: it first calculates the error at the output and then propagates it back through the network layers, updating the weights along the way. It requires a known desired output for each input value.
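As a concrete illustration, here is a minimal NumPy sketch of backpropagation for a tiny network with one hidden layer. The XOR toy data, network size, learning rate, and number of epochs are illustrative assumptions rather than the book's own example; the point is how the chain rule carries the output error back through the layers to update each weight:

```python
import numpy as np

rng = np.random.default_rng(42)

# Known inputs and their desired outputs (the XOR problem as toy data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for the hidden and output layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0              # assumed step size
for epoch in range(10000):
    # Forward pass: compute activations layer by layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Error at the output: difference between prediction and desired output.
    output_error = output - y

    # Backward pass: the chain rule carries the error back through each layer.
    output_delta = output_error * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient descent step on every weight and bias.
    W2 -= learning_rate * hidden.T @ output_delta
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ hidden_delta
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # predictions should move toward [0, 1, 1, 0]
```

Notice that the backward pass reuses the activations computed in the forward pass, which is what makes backpropagation efficient compared with recomputing each gradient from scratch.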
In the next section, we will learn about hyperparameters, which help tweak neural networks so that they can learn features more effectively.