One-hot vectors
After shuffling, we do some preprocessing on the data labels. The labels loaded with the dataset are just a 150-length vector of integers indicating which target class each datapoint belongs to, in this case 1, 2, or 3. When creating machine learning models, we usually transform the labels into a form that is easier to work with, using a technique called one-hot encoding.
Rather than a single number being the label for each datapoint, we use a vector instead. Each vector is as long as the number of target classes: if you have 5 target classes, each vector has 5 elements; if you have 1,000 target classes, each vector has 1,000 elements. Each position in the vector corresponds to one of the target classes, and we use binary values to mark which class the label belongs to: set every value to 0 and put a 1 in the position of the class the vector should represent.
This is easiest to understand with an example. For the labels in this particular problem, the transformed vectors look like this:
1 = [1, 0, 0]
2 = [0, 1, 0]
3 = [0, 0, 1]
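To make this concrete, here is a minimal sketch of one way to do the encoding with NumPy. The `labels` array below is a small made-up sample standing in for the 150-length label vector, and the identity-matrix trick is just one of several ways to build the one-hot vectors:

```python
import numpy as np

# Assume `labels` is the vector of integer class labels (1, 2, or 3)
# described above; here we use a short illustrative sample.
labels = np.array([1, 2, 3, 1, 2])

num_classes = 3

# Subtract 1 so that class 1 maps to index 0, class 2 to index 1, etc.,
# then select the matching row of a 3x3 identity matrix for each label.
one_hot_labels = np.eye(num_classes)[labels - 1]

print(one_hot_labels)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```

Each row of `one_hot_labels` is now one of the three vectors shown above, which is exactly the format most classification losses and output layers expect.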