
Artificial neural networks

Very vaguely inspired by the biological network of neurons residing in our brain, artificial neural networks (ANNs) are made up of a collection of units named artificial neurons that are organized into the following three types of layers:

  • Input layer
  • Hidden layer
  • Output layer

The basic artificial neuron works (see the following image) by calculating a dot product between an input and its internal weights; the result is then passed to a nonlinear activation function f (sigmoid, in this example). These artificial neurons are then connected together to form a network. During the training of this network, the aim is to find the proper set of weights that allows the network to perform whatever task we want it to do:
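
As a rough sketch of this computation (not code from the book), a single artificial neuron can be written in a few lines of plain NumPy. The input x, weights w, and bias b below are purely illustrative values, and the bias term is an assumed addition that many formulations include alongside the weights:

import numpy as np

def sigmoid(z):
    # Nonlinear activation f: squashes the weighted sum into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(x, w, b):
    # Dot product between the input and the neuron's internal weights,
    # plus a bias term, passed through the activation function.
    return sigmoid(np.dot(w, x) + b)

# Illustrative input, weights, and bias (hypothetical values).
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.2
print(artificial_neuron(x, w, b))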

Next, we have an example of a 2-layer feedforward artificial neural network. Imagine that the connections between neurons are the weights that will be learned during training. In this example, Layer L1 is the input layer, L2 the hidden layer, and L3 the output layer. By convention, when counting the number of layers, we only include layers that have learnable weights; therefore, we do not include the input layer. This is why it is only a 2-layer network:
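
To make the layer counting concrete, the following is a minimal NumPy sketch of such a 2-layer network (again, not the book's code). The layer sizes and random weights are illustrative assumptions; the two weight matrices W1 and W2 are what training would learn:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_layer_forward(x, W1, b1, W2, b2):
    # Hidden layer L2: W1 holds the weights connecting the input layer L1 to L2.
    h = sigmoid(np.dot(W1, x) + b1)
    # Output layer L3: W2 holds the weights connecting L2 to L3.
    return sigmoid(np.dot(W2, h) + b2)

# Illustrative sizes (assumed): 3 inputs, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
x = np.array([0.5, -1.0, 2.0])
print(two_layer_forward(x, W1, b1, W2, b2))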

Neural networks with more than one layer are examples of a nonlinear hypothesis, where the model can learn to classify much more complex relations than linear classifiers can. In fact, given enough hidden units, they are universal approximators, capable of approximating any continuous function.