Google Cloud Platform for Architects

Load balancing

Load balancing is yet another area where working with Kubernetes Engine instances is rather more complicated than working with Compute Engine VMs. With Kubernetes Engine, network-level (layer 4) load balancing works out of the box. However, remember that the higher up the OSI stack you go, the more sophisticated your load balancing becomes; by that logic, the most sophisticated form is HTTP load balancing. This does not work quite so simply with Kubernetes Engine. If you want to use HTTP load balancing with container instances, you will have to do some interfacing of your own with the Compute Engine load-balancing infrastructure:

  1. First of all, deploy a single-replica nginx server by running its Docker image on port 80:
kubectl run nginx --image=nginx --port=80  
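Before exposing the deployment, it can be worth confirming that it actually came up. A quick check might look like the following; note that the `run=nginx` label selector is an assumption based on the label that older versions of `kubectl run` apply automatically:

```shell
# Check that the deployment and its pod are up; the pod should
# eventually report STATUS "Running".
kubectl get deployment nginx
kubectl get pods -l run=nginx
```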
  2. Create a service resource to access nginx from your cluster. The NodePort type allows Kubernetes Engine to make your service available on a random high port number:
kubectl expose deployment nginx --target-port=80 --type=NodePort
  3. You can also verify that the service was created. The following command should show you the name of the service and the port number it has been assigned:
kubectl get service nginx  
  4. Now you need to create and save an ingress resource, which contains the rules and configuration for routing HTTP traffic:
nano basic-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
  5. Apply the file using the following command; this will create an HTTP load balancer:
kubectl apply -f basic-ingress.yaml  
  6. Now you can find out the external IP address of your load balancer by querying the ingress resource (it may take a few minutes for the address to be provisioned):
kubectl get ingress basic-ingress  
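Once an address appears in the output, you can verify the load balancer end to end with a plain HTTP request. The IP below is a placeholder for illustration; substitute the address your ingress actually reports:

```shell
# Placeholder address; use the value printed by
# `kubectl get ingress basic-ingress`.
EXTERNAL_IP=203.0.113.10
# Fetch the nginx welcome page through the HTTP load balancer.
curl -s "http://${EXTERNAL_IP}/"
```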
  7. To remove this load balancer, use the following command:
kubectl delete ingress basic-ingress
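Note that deleting the ingress removes only the HTTP load balancer; the service and deployment created in the earlier steps keep running. To clean up completely (resource names taken from the steps above):

```shell
# Remove the NodePort service and the nginx deployment as well,
# so that no stray resources are left behind.
kubectl delete service nginx
kubectl delete deployment nginx
```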