Reference notes.

Container networking enables communication between containers, pods, services, and the outside world. The networking model differs significantly between standalone Docker and Kubernetes orchestration.

Docker Networking

Network Drivers

Driver  | Description                                                                                | Use Case
bridge  | Default. Containers on an isolated virtual network with NAT for external access.           | Single-host, development
host    | Container shares the host’s network namespace. No isolation, no NAT overhead.              | Performance-sensitive, single-host
overlay | Multi-host networking using VXLAN tunnels.                                                 | Docker Swarm, multi-host
macvlan | Container gets its own MAC address on the physical network. Appears as a physical device.  | Legacy integration, DHCP
none    | No networking.                                                                             | Security-sensitive workloads

Bridge Networking (Default)

Host
├── docker0 bridge (172.17.0.1)
│   ├── container1 (172.17.0.2) ──┐
│   └── container2 (172.17.0.3) ──┤ veth pairs
│                                  │
└── eth0 (host NIC) ← NAT/iptables rules

Each container gets a veth pair — one end in the container’s network namespace, the other attached to the docker0 bridge. Containers on the same bridge can communicate directly. External traffic goes through NAT (iptables masquerade).

Docker DNS

User-defined bridge networks provide built-in DNS — containers can reach each other by name. The default docker0 bridge does not provide DNS; use --link (deprecated) or create a custom network.
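A minimal Docker Compose sketch of this: both services attach to a user-defined bridge network, so each can resolve the other by service name via Docker’s built-in DNS. The names `web`, `db`, and `appnet` are illustrative.

```yaml
# docker-compose.yml — sketch only; service and network names are made up.
# Both containers join the user-defined bridge "appnet", so "web" can reach
# the database at the hostname "db" (no --link, no manual IPs).
services:
  web:
    image: nginx:alpine
    networks:
      - appnet
  db:
    image: postgres:16-alpine
    networks:
      - appnet

networks:
  appnet:
    driver: bridge
```

The CLI equivalent is `docker network create appnet` followed by `docker run --network appnet …` for each container.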

Kubernetes Networking Model

Kubernetes imposes three fundamental rules:

  1. Every pod gets its own IP — No NAT between pods, even across nodes
  2. All pods can reach all other pods — Flat network, no manual routing
  3. The IP a pod sees for itself is the same IP others see — No hidden NAT translation

This creates a clean, flat network model. The implementation is delegated to CNI plugins.

Pod Networking

Node 1                                Node 2
├── Pod A (10.244.1.2)                ├── Pod C (10.244.2.2)
│   ├── container1 ─┐ share           │   └── container1
│   └── container2 ─┘ localhost       │
│                                     │
├── Pod B (10.244.1.3)                ├── Pod D (10.244.2.3)
│                                     │
└── veth/bridge/eBPF ── overlay/BGP ──┘

Containers within a pod share a network namespace — they communicate via localhost. Each pod gets a unique cluster-wide IP.
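The shared-namespace behavior can be sketched with a two-container pod; the names and images here are illustrative. The sidecar reaches the app container at localhost — no Service, DNS, or pod IP required between them.

```yaml
# Sketch: two containers in one pod share a network namespace.
# "app" listens on :8080; "sidecar" curls it via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: app
    image: hashicorp/http-echo
    args: ["-listen=:8080", "-text=hello"]
  - name: sidecar
    image: curlimages/curl
    command: ["sh", "-c", "sleep 5 && curl -s http://localhost:8080 && sleep infinity"]
```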

CNI (Container Network Interface)

CNI plugins implement the Kubernetes networking model. They handle:

  • Assigning pod IPs (IPAM)
  • Configuring veth pairs or eBPF attachments
  • Setting up cross-node connectivity (overlay, BGP, or direct routing)
  • Enforcing NetworkPolicies

Plugin Comparison

Plugin  | Dataplane         | Routing                       | Performance | Best For
Cilium  | eBPF              | Direct routing, VXLAN, Geneve | Highest     | Production, security, observability
Calico  | iptables or eBPF  | BGP, VXLAN, direct            | High        | Large clusters, BGP environments
Flannel | VXLAN             | VXLAN overlay                 | Moderate    | Simple setups, learning
Weave   | Mesh (user-space) | Encrypted mesh                | Lower       | Small clusters needing encryption

Cilium

The dominant production CNI as of 2025. eBPF-native, adopted by default in GKE, AKS, and increasingly in EKS.

Key advantages over iptables-based CNIs:

  • O(1) lookup for service routing — iptables is O(n) with number of services, causing performance degradation at scale
  • Replaces kube-proxy — Implements Services, load balancing, and NetworkPolicy entirely in eBPF
  • Hubble — Built-in observability: real-time service maps, flow visibility with identity metadata, DNS-aware monitoring
  • Tetragon — Runtime security enforcement at the kernel level (process execution, file access, network)
  • Cluster Mesh — Connect multiple Kubernetes clusters with shared service discovery and pod-to-pod connectivity
  • Gateway API support — Native implementation of the Kubernetes Gateway API, replacing traditional ingress controllers

Calico

Multi-dataplane by design — supports iptables (legacy), eBPF, and Windows dataplanes. Uses BGP for direct routing in on-prem environments, avoiding overlay overhead. Strong NetworkPolicy support, though its eBPF dataplane is not as deeply integrated as Cilium’s.

Services

Kubernetes Services provide a stable IP and DNS name for a set of pods.

Type                       | Description                                              | Accessible From
ClusterIP                  | Internal-only virtual IP                                 | Within the cluster
NodePort                   | Exposes on each node’s IP at a static port (30000-32767) | External, via node IP
LoadBalancer               | Provisions a cloud load balancer                         | External, via LB IP
ExternalName               | CNAME alias to external service                          | Within the cluster
Headless (clusterIP: None) | Returns pod IPs directly, no proxy                       | Service discovery (StatefulSets)
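A minimal ClusterIP Service sketch, assuming pods labelled app: api exist. Setting clusterIP: None in the spec would instead make it headless, so DNS returns the pod IPs directly.

```yaml
# Sketch: stable virtual IP + DNS name (api-service.<namespace>.svc.cluster.local)
# in front of all pods matching the selector. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80          # port clients connect to on the Service IP
    targetPort: 8080  # container port on the selected pods
```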

kube-proxy

Traditionally implements Services by programming iptables or IPVS rules on each node. In iptables mode, performance degrades linearly with service count — problematic at scale. IPVS mode uses kernel hash tables for O(1) lookup. Being replaced by eBPF-based implementations (Cilium) in modern clusters.
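Switching kube-proxy to IPVS mode is done through its configuration file (passed via --config); a sketch, with the scheduler choice shown as an example:

```yaml
# Sketch of a kube-proxy config selecting IPVS mode, where service
# lookups use kernel hash tables rather than a linear iptables chain walk.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS offers other schedulers (lc, sh, ...)
```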

Ingress and Gateway API

Ingress

L7 routing rules for external HTTP(S) traffic. An Ingress controller (nginx, HAProxy, Traefik, Envoy) watches Ingress resources and configures the reverse proxy.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

Gateway API

The successor to Ingress. More expressive, role-oriented, and supports TCP/UDP/gRPC natively.

Key improvements:

  • Role separation — Infrastructure provider deploys GatewayClass, cluster operator creates Gateway, developers define HTTPRoute
  • Protocol support — HTTPRoute, GRPCRoute, TCPRoute, UDPRoute, TLSRoute
  • Traffic management — Request mirroring, header modification, weighted routing for canary deployments
  • Supported by Cilium, Envoy Gateway, Istio, nginx Gateway Fabric, Traefik
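The role separation above can be sketched as the Gateway API equivalent of the earlier Ingress example: a cluster-operator-owned Gateway plus a developer-owned HTTPRoute. The gatewayClassName "example-gc" is illustrative — it must match a GatewayClass installed by your controller.

```yaml
# Sketch: Gateway (operator-owned listener) + HTTPRoute (developer-owned rules).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-gc   # hypothetical; depends on installed controller
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 80
```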

Network Policies

Kubernetes-native L3/L4 firewall rules for pods. By default all traffic is allowed; once any policy selects a pod for a given direction (ingress or egress), traffic in that direction is denied unless some policy allows it. Policies are additive — the allow rules of all matching policies are unioned.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080

This allows only pods labelled app: frontend to reach app: api on port 8080. Requires a CNI that supports NetworkPolicy (Cilium, Calico — Flannel does not).
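The deny-by-default behavior is commonly made explicit with a namespace-wide policy: an empty podSelector selects every pod, and listing Ingress with no allow rules blocks all inbound traffic, leaving additive policies like the one above to open specific paths.

```yaml
# Sketch: default-deny ingress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # empty selector = all pods in the namespace
  policyTypes:
  - Ingress         # no ingress rules listed, so nothing is allowed
```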

Cilium Network Policies extend this with L7 rules (HTTP method/path matching, DNS-aware policies, Kafka topic restrictions).
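A sketch of such an L7 rule using Cilium’s CiliumNetworkPolicy CRD, tightening the example above so frontend pods may only issue GET requests under /api/ (labels and paths are illustrative):

```yaml
# Sketch: L7 HTTP filtering on top of the L3/L4 rule.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                # L7 rule: only GET /api/... is allowed
        - method: "GET"
          path: "/api/.*"
```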

Service Meshes

Service meshes add L7 observability, security, and traffic management between services.

Mesh    | Approach                                                  | Notes
Istio   | Sidecar proxy (Envoy) or ambient mesh (node-level proxy)  | Feature-rich, complex. Ambient mode (2025) removes sidecar overhead.
Linkerd | Sidecar proxy (Rust-based)                                | Lightweight, simple. Good for smaller deployments.
Cilium  | eBPF (no sidecar)                                         | Mesh capabilities built into the CNI. No additional proxy overhead.

Do You Need a Service Mesh?

The trend in 2025-2026 is convergence — CNIs like Cilium now provide mTLS, observability, and traffic management that previously required a separate mesh. Evaluate whether your CNI already covers your requirements before adding a service mesh layer.

See Also

  • Load Balancing — L4/L7 load balancing concepts
  • eBPF and XDP — The technology underlying Cilium
  • VLANs — Traditional network segmentation (vs Kubernetes NetworkPolicies)
  • DNS — Kubernetes uses CoreDNS for internal service discovery

References