Kubernetes Networking and Services

Networking within Kubernetes isn't so different from networking in the physical world. If you remember networking basics, you will have no trouble enabling communication between containers, Pods, and Services.

Kubernetes networking addresses four concerns:

  1. Containers within a pod use networking to communicate via loopback.

  2. Cluster Networking provides communication between different pods.

  3. The Service resource lets you expose an application running in Pods so that it is reachable from outside your cluster.

  4. You can also use Services to publish applications for consumption only inside your cluster.

What Kubernetes networking solves

Kubernetes networking is designed to ensure that the different entity types within Kubernetes can communicate. The layout of a Kubernetes infrastructure has, by design, a lot of separation. Namespaces, containers, and Pods are meant to keep components distinct from one another, so a highly structured plan for communication is important.

Container-to-container networking

Container-to-container networking happens through the Pod network namespace. Network namespaces allow you to have separate network interfaces and routing tables that are isolated from the rest of the system and operate independently. Every Pod has its own network namespace, and containers inside that Pod share the same IP address and ports. All communication between these containers happens through localhost, as they are all part of the same namespace. (Represented by the green line in the diagram.)
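As a sketch, this loopback sharing can be seen in a two-container Pod manifest (the name, images, and command below are illustrative, not from the original text):

```yaml
# Hypothetical Pod: both containers share one network namespace,
# so the "sidecar" container reaches the "web" container on localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo   # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:8.5.0
      # polls the nginx container over loopback; no Service or Pod IP needed
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 5; done"]
```

Note that the two containers also share the port space: if both tried to bind port 80, the second would fail.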

Pod-to-Pod networking

In Kubernetes, every node is assigned a CIDR range of IP addresses for its Pods. This ensures that every Pod receives a unique IP address that other Pods in the cluster can reach, and that addresses never overlap when new Pods are created. Unlike container-to-container networking, Pod-to-Pod communication happens using real IPs, whether you deploy the Pod on the same node or a different node in the cluster.

The diagram shows that for Pods to communicate with each other, the traffic must flow between the Pod network namespace and the Root network namespace. This is achieved by connecting the Pod namespace and the Root namespace with a virtual Ethernet device, or veth pair (veth0 to Pod namespace 1 and veth1 to Pod namespace 2 in the diagram). A virtual network bridge connects these virtual interfaces, allowing traffic to flow between them using the Address Resolution Protocol (ARP).

When data is sent from Pod 1 to Pod 2, the flow of events is:

  1. Pod 1 traffic flows through eth0 to the Root network namespace's virtual interface veth0.

  2. Traffic then goes from veth0 to the virtual bridge.

  3. Traffic goes through the virtual bridge to veth1.

  4. Finally, traffic reaches the eth0 interface of Pod 2 through veth1.
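The topology above can be built by hand with iproute2, which is roughly what a CNI plugin automates for you. This is an illustrative sketch, not Kubernetes code: it requires root, and the namespace names, interface names, and 10.244.1.0/24 addresses are all assumptions.

```
# create two "Pod" network namespaces
ip netns add pod1
ip netns add pod2

# one veth pair per Pod; the eth0 end moves into the Pod namespace,
# the veth end stays in the root namespace
ip link add veth0 type veth peer name eth0 netns pod1
ip link add veth1 type veth peer name eth0 netns pod2

# bridge the root-side ends together and bring everything up
ip link add cbr0 type bridge
ip link set veth0 master cbr0
ip link set veth1 master cbr0
ip link set cbr0 up
ip link set veth0 up
ip link set veth1 up

# assign Pod IPs from the node's CIDR range
ip -n pod1 addr add 10.244.1.2/24 dev eth0
ip -n pod2 addr add 10.244.1.3/24 dev eth0
ip -n pod1 link set eth0 up
ip -n pod2 link set eth0 up

# pod1 can now reach pod2 through the bridge
ip netns exec pod1 ping -c 1 10.244.1.3
```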

Services:

In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.

A key aim of Services in Kubernetes is that you don't need to modify your existing application to use an unfamiliar service discovery mechanism. You can run code in Pods, whether this is a code designed for a cloud-native world, or an older app you've containerized. You use a Service to make that set of Pods available on the network so that clients can interact with it.

If you use a Deployment to run your app, that Deployment can create and destroy Pods dynamically. From one moment to the next, you don't know how many of those Pods are working and healthy; you might not even know what those healthy Pods are named. Kubernetes Pods are created and destroyed to match the desired state of your cluster. Pods are ephemeral resources (you should not expect an individual Pod to be reliable or durable).

Each Pod gets its own IP address (Kubernetes expects network plugins to ensure this). For a given Deployment in your cluster, the set of Pods running at one moment in time could be different from the set of Pods running that application a moment later.

For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.

The Service abstraction enables this decoupling.

The set of Pods targeted by a Service is usually determined by a selector that you define. To learn about other ways to define Service endpoints, see Services without selectors.
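A minimal Service with a selector might look like the sketch below, using the image-processing backend from the earlier example (the name, label, and ports are assumptions):

```yaml
# Hypothetical Service: routes traffic arriving on its stable port 80
# to any ready Pod labelled app=image-backend, on that Pod's port 8080.
apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  selector:
    app: image-backend
  ports:
    - port: 80          # the Service's own stable port
      targetPort: 8080  # the port the backend Pods listen on
```

Frontend clients then address the Service by name; as replicas come and go, Kubernetes keeps the set of matching endpoints up to date behind that name.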

If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic reaches that workload. Ingress is not a Service type, but it acts as the entry point for your cluster. An Ingress lets you consolidate your routing rules into a single resource, so that you can expose multiple components of your workload, running separately in your cluster, behind a single listener.
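An Ingress consolidating two hypothetical Services behind one host could be sketched as follows (it only takes effect if an ingress controller, such as ingress-nginx, is running in the cluster; all names are illustrative):

```yaml
# Hypothetical Ingress: one listener fans out to two Services by path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-routes
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc       # illustrative Service name
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc  # illustrative Service name
                port:
                  number: 80
```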

The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You can add Gateway to your cluster - it is a family of extension APIs, implemented using Custom Resource Definitions - and then use these to configure access to network services that are running in your cluster.
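As a sketch of the Gateway API style, an HTTPRoute attaches to a Gateway and forwards matching traffic to a Service. This assumes the Gateway API CRDs are installed and that a Gateway named "shared-gateway" has been provisioned by a controller; the route and Service names are illustrative:

```yaml
# Hypothetical HTTPRoute: binds to an existing Gateway and sends
# requests under /app to the app-svc Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gateway   # assumed pre-existing Gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: app-svc      # illustrative Service name
          port: 80
```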

ClusterIP

A ClusterIP is a virtual IP address assigned to a Kubernetes Service. A Service is an abstraction layer that provides a stable IP address and DNS name for accessing a set of Pods in a Kubernetes cluster.

When you create a Service, Kubernetes assigns a ClusterIP to it by default. This IP address is accessible only within the cluster, and it is used to route traffic to the Pods that are part of the Service.

The ClusterIP is stable, which means that it does not change unless the Service is deleted and recreated. This stability allows other Services and applications within the cluster to reliably communicate with the Service.

To access a Service from outside the cluster, you can use a NodePort or a LoadBalancer. NodePort exposes the Service on a static port on each Node in the cluster, and LoadBalancer provisions a load balancer to distribute traffic to the Service.

Overall, ClusterIP is a useful feature in Kubernetes that provides a stable and predictable way to access Services within the cluster.

NodePort

A NodePort is a way to expose a Service outside the cluster by opening a static port on each Node in the cluster. This allows external traffic to be directed to the Service through any Node.

When you create a Service with a NodePort type, Kubernetes allocates a port in the range of 30000-32767 by default. You can also specify a specific port number in this range when you create the Service.

The NodePort is a static port that remains the same as long as the Service exists, even if the Pods that are part of the Service are recreated. This allows external clients to access the Service through the same port, regardless of which Pod is currently serving the request.

To access a Service through a NodePort, you can use the Node's IP address and the assigned port number. For example, if the Node has an IP address of 10.0.0.1 and the Service is assigned NodePort 30000, you can access the Service externally by using the URL http://10.0.0.1:30000.
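A NodePort Service matching that example could be sketched like this (the name, label, and targetPort are assumptions; the nodePort echoes the 30000 above):

```yaml
# Hypothetical NodePort Service: reachable in-cluster on port 80,
# and from outside at http://<any-node-ip>:30000.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # illustrative Pod label
  ports:
    - port: 80          # in-cluster Service port
      targetPort: 8080  # port the Pods listen on
      nodePort: 30000   # must fall in the 30000-32767 range by default
```

If you omit nodePort, Kubernetes picks a free port from the range for you.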

Overall, NodePort is a simple and effective way to expose a Service outside the cluster, although it is generally not recommended for production environments, where more advanced load balancing techniques, such as a LoadBalancer or an Ingress, may be more appropriate.

LoadBalancer

A LoadBalancer is a way to expose a Service outside the cluster by provisioning a load balancer from a cloud provider. The load balancer distributes traffic to the Service's Pods based on a configurable algorithm, such as round-robin, least connections, or IP hash.

When you create a Service with a LoadBalancer type, Kubernetes requests a load balancer from the cloud provider's API, which allocates a static IP address and configures the load balancer to route traffic to the Service. The load balancer is typically configured to use health checks to ensure that it routes traffic only to healthy Pods.
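The manifest differs from a NodePort Service only in its type (name, label, and ports below are illustrative); the externally reachable address appears later in the Service's status, filled in by the cloud provider:

```yaml
# Hypothetical LoadBalancer Service: the cloud provider provisions an
# external load balancer and reports its address under
# status.loadBalancer.ingress once ready.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web            # illustrative Pod label
  ports:
    - port: 80          # port the load balancer exposes
      targetPort: 8080  # port the Pods listen on
```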

To access a Service through a LoadBalancer, you can use the load balancer's IP address or DNS name, which is assigned by the cloud provider. For example, if the load balancer has an IP address of 10.0.0.1, you can access the Service externally by using the URL http://10.0.0.1.

LoadBalancer is a powerful and flexible way to expose a Service outside the cluster, and it is often used in production environments to distribute traffic across multiple Pods for high availability and scalability. However, it requires a cloud provider that supports load balancers, and it can be more expensive than other types of Service.