Hands-On Python Deep Learning for the Web

Demystifying neural networks

Let's start this section by answering the question, “Why are neural networks called 'neural'?” What is the significance behind this term?

Our intuition says that it has something to do with our brains, which is correct, but only partially so. Before we get to why it is only partially correct, we need some familiarity with the structure of the brain. For this purpose, let's look at the anatomy of our own brains.

A human brain is composed of approximately 86 billion neurons, each connected to about 10,000 other neurons, which gives it a network-like structure. A neuron's inputs arrive through branches called dendrites, its output travels along a fiber called the axon, and its cell body is called the soma. So, at a high level, one neuron's axon connects to the dendrites of other neurons. The word "neural" is simply the adjective form of the word "neuron," and in our brains, neurons are the most granular units that form the dense network we just discussed. The resemblance of an artificial neural network to the brain is beginning to take shape; to continue building this intuition, we will briefly learn about the functionality of a neuron.

A network is nothing but a graph-like structure: a set of nodes connected to each other by edges. In the case of our brains, or any brain in general, the neurons are the nodes, and the connections between them (an axon terminating on another neuron's dendrites) are the edges.
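To make this graph analogy concrete, here is a purely illustrative Python sketch (the neuron names are hypothetical, not from the book) that stores a tiny network as an adjacency list, mapping each node to the nodes it connects to:

```python
# A tiny "brain" as a graph: keys are nodes (neurons) and the listed
# entries are edges (connections to downstream neurons).
connections = {
    "neuron_a": ["neuron_b", "neuron_c"],  # neuron_a feeds b and c
    "neuron_b": ["neuron_c"],              # neuron_b feeds c
    "neuron_c": [],                        # neuron_c is a terminal node here
}

# Each key-value pair is a node together with its outgoing edges.
for node, targets in connections.items():
    print(node, "->", targets)
```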

A neuron receives inputs from other neurons via its dendrites. These inputs are electrochemical in nature, and not all of them are equally powerful. If the combined input is powerful enough, the neuron is activated and passes a signal on to the neurons it is connected to. Whether the input is powerful enough is determined by a threshold, which makes the activation process selective: it prevents all the neurons in the network from being activated at the same time.

To summarize, a neuron receives a collective sum of inputs from other neurons, this sum is compared against a threshold, and the neuron is activated accordingly. An artificial neural network (ANN), or simply a neural network (NN), is based on this mechanism, hence the resemblance.
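The following minimal Python sketch captures this summary; the function name, weights, and threshold value are illustrative assumptions, not code from the book. The neuron computes a weighted sum of its inputs and activates only if that sum reaches the threshold:

```python
def artificial_neuron(inputs, weights, threshold):
    """Return 1 (activated) if the weighted sum of the inputs
    reaches the threshold, otherwise 0 (not activated)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# A strong combined input crosses the threshold, so the neuron fires.
print(artificial_neuron([1.0, 0.5], [0.8, 0.9], threshold=1.0))  # prints 1
# A weak combined input falls short, so the neuron stays silent.
print(artificial_neuron([0.1, 0.2], [0.8, 0.9], threshold=1.0))  # prints 0
```

This all-or-nothing behavior is exactly the selectivity described above: only sufficiently strong combined inputs propagate further through the network.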

So, what makes a network a neural one? What does it take to form an NN?

The following quote from the book Deep Learning for Computer Vision with Python by Adrian Rosebrock answers this question very well:

Each node performs a simple computation. Each connection then carries a signal (i.e., the output of the computation) from one node to another, labeled by a weight indicating the extent to which the signal is amplified or diminished. Some connections have large, positive weights that amplify the signal, indicating that the signal is very important when making a classification. Others have negative weights, diminishing the strength of the signal, thus specifying that the output of the node is less important in the final classification. We call such a system an Artificial Neural Network if it consists of a graph structure with connection weights that are modifiable using a learning algorithm.
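The last sentence of this quote is worth illustrating: the weights are what a learning algorithm modifies. The following toy sketch (not Rosebrock's code; the learning rate, epoch count, and variable names are illustrative assumptions) applies the classic perceptron update rule so that the weights of a single artificial neuron adapt to reproduce the logical OR function:

```python
# Training data: the truth table of logical OR.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [0.0, 0.0]  # connection weights, initially carrying no signal
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    for inputs, target in examples:
        # Forward pass: weighted sum plus bias, thresholded at zero.
        output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
        # Perceptron rule: nudge each weight in proportion to the error.
        error = target - output
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)  # both weights end up positive
```

After training, both weights are positive, mirroring the quote's point that large positive weights mark signals that matter for the final decision.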

We have learned about the resemblance of neural networks to brains. We will now build on this information and examine the granular units of ANNs more closely. Let's start with what a simple neuron has to do in an ANN.