GKE
Note that Kubernetes is not, by itself, a part of GCP. You can run Kubernetes clusters on GCP, AWS, Azure, or even on-premises; that is the big appeal of Kubernetes. GKE is merely a managed service for running Kubernetes clusters on the Google Cloud.
So to clarify, GKE is a management and orchestration system for Docker containers and container clusters that run within Google's public cloud services, and GKE runs Kubernetes.
As we have already discussed, these GKE clusters have a master node running Kubernetes that controls a set of node instances; each node instance runs a Kubelet, and the Kubelets manage the individual containers. GKE can be considered an umbrella for all of the container functionality available on GCP. Kubernetes is the orchestrator that runs on the master node, but it really depends on the Kubelets that run on the node instances. In effect, a pod is a collection of closely coupled containers, all of which share the same underlying resources. For instance, they all have the same IP address and they can share disk volumes. Consider a web server pod: it could have one container for the server itself and further containers for the logging and the metrics infrastructure. Pods are defined using configuration files specified in either JSON or YAML. Each pod is, in effect, managed by a Kubelet, which is, in turn, controlled by the master node.
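A pod such as the web server example above can be sketched as a minimal YAML manifest. The names, images, and paths here are hypothetical, chosen only to illustrate the shape of a pod definition:

```yaml
# Hypothetical pod: a web server container plus a logging sidecar.
# Both containers share the pod's IP address and the "logs" volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: server
    image: nginx:1.25               # assumed image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-agent
    image: busybox:1.36             # assumed image; tails the shared logs
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}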
The VM instances contained in a container cluster are of two types, as we have already seen. There is one special instance that is the master, the one that runs Kubernetes, and the others are node instances that run Kubelets.
As we have already seen, these node instances are pretty similar to each other. They are managed from the master to run the services necessary to support the Docker containers that contain the actual code being executed. Each node runs the Docker runtime and also hosts the Kubelet agent, which manages the Docker runtime and ensures that all of the Docker containers scheduled on the host are running successfully. Let us understand both of these types in a little more detail.
The master endpoint runs the Kubernetes API server, which is responsible for servicing REST requests wherever they come from, scheduling pod creation and deletion, and synchronizing information across the cluster.
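As an illustrative sketch of those REST requests, the API server exposes pod operations at predictable core-v1 paths. The helper function below is hypothetical (it is not part of any client library), but the paths it builds are the ones a client POSTs to when creating a pod or GETs when reading one:

```python
# Sketch: building the core-v1 REST paths for pod resources on the
# Kubernetes API server. The helper itself is hypothetical.

def pod_path(namespace, name=None):
    """Return the API server path for a pod collection or a single pod."""
    base = f"/api/v1/namespaces/{namespace}/pods"
    return f"{base}/{name}" if name else base

# Creating or listing pods uses the collection path.
print(pod_path("default"))           # /api/v1/namespaces/default/pods
# Reading or deleting one pod uses the named path.
print(pod_path("default", "web-0"))  # /api/v1/namespaces/default/pods/web-0
```

A real client (such as kubectl) sends authenticated HTTPS requests to these paths on the master endpoint.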
Are all instances in the cluster necessarily identical to each other? Actually, no. Within your container cluster, you might want to have groups of instances that are similar to each other, and each such group is known as a node pool. A node pool is a subset of machines within a cluster that share the same configuration. As you might imagine, the ability to have different node pools helps with customizing instance profiles in your cluster, which, in turn, comes in handy if you frequently make changes to your containers. You can also run multiple Kubernetes node versions across the node pools in your cluster, and have each of those node pools independently receive updates and different sets of deployments. Node pools are the most powerful way of customizing the individual instances within the clusters.
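As a sketch, a node pool with its own machine type can be added to an existing cluster from the command line. The cluster and pool names here are hypothetical, and only the most common flags are shown:

```shell
# Add a high-memory node pool to an existing cluster.
gcloud container node-pools create high-mem-pool \
    --cluster=my-cluster \
    --machine-type=e2-highmem-4 \
    --num-nodes=3

# List the node pools in that cluster.
gcloud container node-pools list --cluster=my-cluster
```

Each pool can then be upgraded or resized independently of the others.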
The GCP also has its own container builder. This is a piece of software that executes container image builds on the GCP's infrastructure. Dockerfiles (plain text files) are turned into Docker images, which can be stored in a container registry and then downloaded and run on any machine that has Docker installed.
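As a sketch of that workflow, a build can be submitted straight from a source directory containing a Dockerfile. The project and image names below are hypothetical:

```shell
# Minimal Dockerfile assumed to be in the current directory:
#   FROM nginx:1.25
#   COPY site/ /usr/share/nginx/html

# Submit the build to the container builder; the resulting image
# is pushed to the project's container registry.
gcloud builds submit --tag gcr.io/my-project/my-site .
```

Once pushed, the image can be pulled and run by any Docker host, including the nodes of a GKE cluster.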