Hands-On Meta Learning with Python

Algorithm

The algorithm of prototypical networks is as follows:

  1. Let's say we have the dataset, D, comprising {(x1, y1), (x2, y2), ... (xn, yn)} where x is the feature and y is the class label.
  2. Since we perform episodic training, we randomly sample n data points from each class in our dataset, D, and prepare our support set, S.
  3. Similarly, we sample n data points per class and prepare our query set, Q.
  4. We learn the embeddings of the data points in our support set using our embedding function, f(). The embedding function can be any feature extractor—say, a convolutional network for images and an LSTM network for text.
  5. Once we have the embeddings for each data point, we compute the prototype of each class, c_k, by taking the mean of the embeddings of the data points belonging to that class:

     c_k = (1/|S_k|) Σ_{(x_i, y_i) ∈ S_k} f(x_i)
  6. Similarly, we learn the query set embeddings.
  7. We calculate the Euclidean distance, d, between the query set embeddings and the class prototypes.
  8. We predict the probability, p(y = k|x), of the class of a query point by applying softmax over the negated distance d:

     p(y = k|x) = exp(-d(f(x), c_k)) / Σ_k' exp(-d(f(x), c_k'))
  9. We compute the loss function, J(φ), as the negative log probability, J(φ) = -log p(y = k|x), and we try to minimize the loss using stochastic gradient descent.
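The steps above can be sketched in a few lines of NumPy. This is a minimal illustration of a single episode, not the book's implementation: the embedding function f() is assumed here to be a random linear projection, and the class counts, sample counts, and dimensions are arbitrary placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical episode sizes: 3 classes, 5 support and 5 query points
# per class, 10 input features, 4 embedding dimensions.
n_classes, n_support, n_query, feat_dim, embed_dim = 3, 5, 5, 10, 4

# Stand-in embedding function f(): a fixed random linear map. In practice
# this would be a convolutional or LSTM network, as noted above.
W = rng.normal(size=(feat_dim, embed_dim))
def f(x):
    return x @ W

# Support set S and query set Q, one array per class (steps 2-3).
support = rng.normal(size=(n_classes, n_support, feat_dim))
query = rng.normal(size=(n_classes, n_query, feat_dim))

# Step 5: class prototypes = mean of the support embeddings per class.
prototypes = f(support).mean(axis=1)              # (n_classes, embed_dim)

# Steps 6-7: embed the query points and compute the Euclidean distance
# from every query embedding to every class prototype.
q = f(query).reshape(-1, embed_dim)               # (n_classes * n_query, embed_dim)
d = np.linalg.norm(q[:, None, :] - prototypes[None, :, :], axis=-1)

# Step 8: softmax over the negated distances gives p(y = k|x).
logits = -d
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)

# Step 9: loss = mean negative log probability of the true class.
labels = np.repeat(np.arange(n_classes), n_query)
loss = -np.log(p[np.arange(len(labels)), labels]).mean()
```

In a real training loop, f() would have learnable parameters φ, and the gradient of this loss with respect to φ would drive the stochastic gradient descent update.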