Learn Kubernetes Security

Introducing the CNI and CNI plugins

In Kubernetes, CNI stands for the Container Network Interface. CNI is a Cloud Native Computing Foundation (CNCF) project—you can find further information on GitHub here: https://github.com/containernetworking/cni. The project comprises three things: a specification, libraries for writing plugins to configure network interfaces in Linux containers, and a number of supported plugins. When people talk about the CNI, they usually refer to either the specification or the CNI plugins. The relationship between the two is that CNI plugins are executable binaries that implement the CNI specification. Now, let's look at the CNI specification and plugins at a high level, and then we will give a brief introduction to one of the CNI plugins, Calico.

CNI specification and plugins

The CNI specification is only concerned with the network connectivity of containers and with removing allocated resources when a container is deleted. Let me elaborate on this. First, from the container runtime's perspective, the CNI spec defines an interface for the runtime (such as Docker, which Kubernetes drives through the Container Runtime Interface (CRI)) to invoke—for example, to add a container to a network when the container is created, or to delete the container's network interface when the container dies. Second, from the Kubernetes network model's perspective, since CNI plugins are just another flavor of Kubernetes network plugins, they have to comply with the Kubernetes network model requirements, detailed as follows:

  1. Pods on a node can communicate with all pods on all nodes without using NAT (a quick connectivity check follows this list).
  2. Agents such as kubelet can communicate with pods on the same node.
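
As a rough illustration of the first requirement, you could check cross-node pod connectivity with something like the following. The pod names pod-a and pod-b are hypothetical, and this assumes that pod-a's image ships a ping binary:

# pod-a and pod-b are assumed to be running pods scheduled on different nodes
POD_B_IP=$(kubectl get pod pod-b -o jsonpath='{.status.podIP}')
# From pod-a, reach pod-b's pod IP directly; no NAT should be applied on the path
kubectl exec pod-a -- ping -c 3 "$POD_B_IP"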

There are a handful of CNI plugins available to choose from—to name a few: Calico, Cilium, WeaveNet, Flannel, and so on. The implementations vary, but in general, what CNI plugins do is similar. They carry out the following tasks (a minimal invocation sketch follows the list):

  • Manage network interfaces for containers
  • Allocate IP addresses for pods, usually by calling another IP Address Management (IPAM) plugin such as host-local
  • Implement network policies (optional)
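
To make the specification concrete, here is a rough sketch of how a runtime invokes a CNI plugin: the command and container details are passed as environment variables, and the network configuration arrives on stdin. This assumes the reference bridge and host-local binaries from the containernetworking/plugins project are installed under /opt/cni/bin, a network namespace named testns already exists, and the commands run as root; the names and subnet are illustrative.

# Hypothetical manual CNI invocation; requires root and an existing namespace
# created with: ip netns add testns
export CNI_PATH=/opt/cni/bin
export CNI_COMMAND=ADD                    # ADD when a container starts, DEL when it is removed
export CNI_CONTAINERID=demo-container
export CNI_NETNS=/var/run/netns/testns
export CNI_IFNAME=eth0
# The network configuration is handed to the plugin on stdin; the plugin replies
# with the created interface and the IP address allocated by host-local
/opt/cni/bin/bridge <<'EOF'
{
  "cniVersion": "0.3.0",
  "name": "demo-network",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
}
EOF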

The network policy implementation is not required by the CNI specification, but when DevOps teams choose which CNI plugin to use, it is important to take security into consideration. Alexis Ducastel's article (https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-36475925a560) offers a good comparison of the mainstream CNI plugins, last updated in April 2019. The security comparison is notable, as can be seen in the following screenshot:

Figure 2.9 – CNI plugins comparison

You may notice that the majority of the CNI plugins on the list don't support encryption. Flannel does not support Kubernetes network policies, while kube-router supports ingress network policies only.
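
To make that difference concrete, the following is a minimal Kubernetes NetworkPolicy that a policy-capable plugin would enforce, and that a plugin such as Flannel would simply not enforce. The namespace and labels are illustrative:

# Only pods labeled role=frontend may reach the backend pods; names/labels are assumptions
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
EOF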

As Kubernetes comes with the default kubenet plugin, in order to use CNI plugins in a Kubernetes cluster, users must start kubelet with the --network-plugin=cni command-line option and place a configuration file in the directory specified by the --cni-conf-dir flag (/etc/cni/net.d by default). The following is a sample configuration defined within the Kubernetes cluster so that kubelet knows which CNI plugin to interact with:

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "127.0.0.1",
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}

The CNI configuration file tells kubelet to use Calico as the CNI plugin and to use host-local to allocate IP addresses to pods. In the plugins list, there is another CNI plugin called portmap, which is used to support hostPort and allows container ports to be exposed on the host IP.
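
For instance, a pod spec along the following lines relies on the portmap plugin to publish the container's port 80 on port 8080 of the node's IP; the pod name, image, and ports are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080    # portmap programs the node to forward 8080 to the pod's port 80
EOF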

When creating a cluster with Kubernetes Operations (kops), you can also specify the CNI plugin you would like to use, as illustrated in the following code block:

  export NODE_SIZE=${NODE_SIZE:-m4.large}
  export MASTER_SIZE=${MASTER_SIZE:-m4.large}
  export ZONES=${ZONES:-'us-east-1d,us-east-1b,us-east-1c'}
  export KOPS_STATE_STORE='s3://my-state-store'
  kops create cluster k8s-clusters.example.com \
    --node-count 3 \
    --zones $ZONES \
    --node-size $NODE_SIZE \
    --master-size $MASTER_SIZE \
    --master-zones $ZONES \
    --networking calico \
    --topology private \
    --bastion='true' \
    --yes

In this example, the cluster is created using the calico CNI plugin.
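
Once kops finishes, a quick sanity check that the cluster and its Calico networking came up might look like the following; the exact pod names and labels depend on the Calico version shipped with kops:

# Validate the cluster and confirm the Calico pods are running
kops validate cluster
kubectl -n kube-system get pods | grep calico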

Calico

Calico is an open source project that enables cloud-native application connectivity and policy. It integrates with major orchestration systems such as Kubernetes, Apache Mesos, Docker, and OpenStack. Compared to other CNI plugins, here are a few things about Calico worth highlighting:

  1. Calico provides a flat IP network, which means there is no IP encapsulation added to the IP packets (no overlay). It also means that each IP address assigned to a pod is fully routable. The ability to run without an overlay provides exceptional throughput characteristics.
  2. Calico has better performance and lower resource consumption, according to Alexis Ducastel's experiments.
  3. Calico offers a more comprehensive network policy than Kubernetes' built-in network policy. Kubernetes' network policy can only define whitelist rules, while Calico network policies can also define blacklist (deny) rules, as in the sketch after this list.
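
The following is a sketch of such a deny rule using the projectcalico.org/v3 NetworkPolicy API; the labels are illustrative, and applying it requires calicoctl (or the Calico API server) rather than plain kubectl:

# Deny ingress from pods labeled role=untrusted, allow everything else; labels are assumptions
calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: deny-untrusted
  namespace: default
spec:
  selector: app == 'backend'
  types:
    - Ingress
  ingress:
    - action: Deny
      source:
        selector: role == 'untrusted'
    - action: Allow
EOF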

When integrating Calico into Kubernetes, you will see three components running inside the Kubernetes cluster, as follows:

  • The calico/node agent is a DaemonSet, which means that it runs on every node in the cluster. It is responsible for programming kernel routes to local workloads and enforcing the local filtering rules required by the current network policies in the cluster. It is also responsible for broadcasting the routing tables to other nodes to keep the IP routes in sync across the cluster.
  • The CNI plugin binaries. These consist of two executables (calico and calico-ipam) and a configuration file that integrate directly with the kubelet process on each node; kubelet invokes them when pods are created so that the pods are added to Calico networking (see the quick inspection after this list).
  • The Calico Kubernetes controllers, running as a standalone pod, monitor the Kubernetes application programming interface (API) to keep Calico in sync.
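
On a cluster node, you can see these pieces in place; the paths below are the conventional defaults, and the exact configuration filename varies by Calico version:

# Run on a cluster node; paths and filename are the common defaults, not guaranteed
ls /opt/cni/bin | grep calico          # the calico and calico-ipam plugin binaries
cat /etc/cni/net.d/10-calico.conflist  # the CNI configuration that kubelet picks up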

Calico is a popular CNI plugin and also the default CNI plugin in Google Kubernetes Engine (GKE). Kubernetes administrators have full freedom to choose whichever CNI plugin fits their requirements. Just keep in mind that security is essential and is one of the decision factors. We've talked a lot about the Kubernetes network in the previous sections. Let's quickly review this before moving on.

Wrapping up

In a Kubernetes cluster, every pod gets an IP address assigned, but this is an internal IP address, not accessible externally. Containers inside the same pod can communicate with each other over the same network interfaces (for example, via localhost), as they share the same network namespace. Containers inside the same pod also need to avoid port conflicts; however, this is quite unlikely to happen, as the applications run in different containers that are grouped into the same pod for a specific purpose. It is also worth noting that containers inside the same pod can communicate beyond the network through shared volumes, IPC channels, and process signals.
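
A minimal illustration of this shared network namespace, using assumed image names, is a two-container pod where a sidecar reaches the web container over localhost:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: web
      image: nginx                  # listens on port 80
    - name: sidecar
      image: curlimages/curl
      command: ["sleep", "3600"]
EOF
kubectl wait --for=condition=Ready pod/shared-netns-demo
# The sidecar reaches the web container via localhost because both containers
# share the pod's network namespace
kubectl exec shared-netns-demo -c sidecar -- curl -s http://localhost:80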

The Kubernetes Service helps to stabilize pod-to-pod communication, as pods are usually ephemeral. The service also gets an IP address assigned, but this address is virtual, meaning no network interface is created for the service. It is kube-proxy that actually routes all traffic destined for the service to the backend pods. kube-proxy has three different modes: user space proxy, iptables proxy, and IPVS proxy. The Kubernetes service not only provides support for pod-to-pod communication but also enables communication from external sources.
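
If you want to see which mode kube-proxy is running in, one way is to look at its configuration or logs; the following assumes the standard kube-proxy ConfigMap and labels used by kubeadm-style deployments:

# The ConfigMap's config.conf contains a 'mode:' field (empty means the default, iptables)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
# Alternatively, the kube-proxy logs state which proxier was selected
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier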

There are a few ways to expose services so that they are accessible from external sources, such as NodePort, LoadBalancer, and ExternalName. You can also create an Ingress object to achieve the same goal. Finally, though it is hard to capture everything at once, we'll use the following single diagram to try to consolidate most of the knowledge we want to highlight in this chapter:

Figure 2.10 – Communications: inside pod, among pods, and from external sources

There is nearly always a load balancer sitting in front of a Kubernetes cluster. With the different service types we mentioned previously, a service could be exposed directly via the load balancer (service A in the diagram), or it could be exposed via a NodePort (service B, which uses node port 30000 on both nodes to accept external traffic). Though an Ingress is not a service type, it is powerful and cost-efficient compared to a LoadBalancer-type service; the routing for service C and service D is controlled by the same Ingress object. Finally, every pod in the cluster has the internal communication topology shown in the callout of the preceding diagram.