TensorFlow 1.x Deep Learning Cookbook

How to do it...

We proceed with the recipe as follows:

  1. The first step is to import all the packages that we will need:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
  2. In a neural network, the inputs are first combined linearly; training is more effective when those inputs are normalized, so we define a function to standardize the input data to zero mean and unit standard deviation (a quick check on a toy array follows the function):
def normalize(X):
    """Normalizes the array X."""
    mean = np.mean(X)
    std = np.std(X)
    X = (X - mean) / std
    return X
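As a quick, illustrative check (toy values only, relying on the numpy import from step 1), the function can be exercised directly:

sample = np.array([1.0, 2.0, 3.0, 4.0])
print(normalize(sample))  # prints values with zero mean and unit standard deviation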
  3. Now we load the Boston house price dataset using the TensorFlow contrib datasets module and separate it into X_train and Y_train; column 5 of the data (the average number of rooms) serves as the single input feature. We can choose to normalize the data here (an alternative loader is sketched after the snippet):
# Data
boston = tf.contrib.learn.datasets.load_dataset('boston')
X_train, Y_train = boston.data[:,5], boston.target
#X_train = normalize(X_train) # This step is optional here
n_samples = len(X_train)
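If tf.contrib.learn is unavailable in your TensorFlow build, a minimal alternative is to load the same dataset through scikit-learn (this assumes scikit-learn older than 1.2 is installed, since load_boston was removed in version 1.2):

from sklearn.datasets import load_boston  # removed in scikit-learn 1.2

boston = load_boston()
X_train, Y_train = boston.data[:, 5], boston.target
n_samples = len(X_train)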
  4. We declare the TensorFlow placeholders for the training data:
# Placeholder for the Training Data
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
  5. We create TensorFlow variables for the weight and bias, both initialized to zero:
# Variables for coefficients initialized to 0
b = tf.Variable(0.0)
w = tf.Variable(0.0)
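As a stylistic variant (not required by the recipe), the variables can be given explicit names, which makes them easier to identify in the TensorBoard graph written later:

b = tf.Variable(0.0, name='bias')
w = tf.Variable(0.0, name='weight')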
  6. We define the linear regression model to be used for prediction:
# The Linear Regression Model
Y_hat = X * w + b
  7. We define the loss function as the squared error between the target and the prediction (a batched variant is sketched after the snippet):
# Loss function
loss = tf.square(Y - Y_hat, name='loss')
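The training loop below feeds one sample at a time, so this squared error is a scalar. If you feed the whole dataset at once instead, a mean-squared-error sketch would be:

# Mean squared error over a batch of samples
loss = tf.reduce_mean(tf.square(Y - Y_hat), name='loss')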
  8. We choose the gradient descent optimizer with a learning rate of 0.01 to minimize the loss:
# Gradient Descent with learning rate of 0.01 to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)
  9. We declare the variable-initializing operation, along with a list that will record the average loss of each epoch:
# Initializing Variables
init_op = tf.global_variables_initializer()
total = []
  10. Now we start the computation graph in a session and run the training for 100 epochs, feeding the samples one at a time (a note on viewing the graph in TensorBoard follows the snippet):
# Computation Graph
with tf.Session() as sess:
    # Initialize variables
    sess.run(init_op)
    writer = tf.summary.FileWriter('graphs', sess.graph)
    # Train the model for 100 epochs
    for i in range(100):
        total_loss = 0
        for x, y in zip(X_train, Y_train):
            _, l = sess.run([optimizer, loss], feed_dict={X: x, Y: y})
            total_loss += l
        total.append(total_loss / n_samples)
        print('Epoch {0}: Loss {1}'.format(i, total_loss / n_samples))
    writer.close()
    b_value, w_value = sess.run([b, w])
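The FileWriter above writes the graph to the graphs directory; once the script has run, it can be inspected by starting TensorBoard from the same working directory and opening the reported URL in a browser:

tensorboard --logdir=graphs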
  11. Finally, we view the result by plotting the fitted line against the training data, followed by the average loss per epoch (a closed-form sanity check is sketched after the snippet):
Y_pred = X_train * w_value + b_value
print('Done')
# Plot the fitted line against the data
plt.plot(X_train, Y_train, 'bo', label='Real Data')
plt.plot(X_train, Y_pred, 'r', label='Predicted Data')
plt.legend()
plt.show()
# Plot the average loss per epoch
plt.plot(total)
plt.show()
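As a sanity check (a NumPy sketch, not part of the original recipe), the learned coefficients can be compared against the closed-form least-squares fit; with per-sample gradient descent and only 100 epochs, the two will be close but not identical:

# Closed-form least-squares fit of a degree-1 polynomial
w_ls, b_ls = np.polyfit(X_train, Y_train, 1)
print('Closed form: w = {0}, b = {1}'.format(w_ls, b_ls))
print('Learned:     w = {0}, b = {1}'.format(w_value, b_value))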