Reference notes.
Container networking enables communication between containers, pods, services, and the outside world. The networking model differs significantly between standalone Docker and Kubernetes orchestration.
Docker Networking
Network Drivers
| Driver | Description | Use Case |
|---|---|---|
| bridge | Default. Containers on an isolated virtual network with NAT for external access. | Single-host, development |
| host | Container shares the host’s network namespace. No isolation, no NAT overhead. | Performance-sensitive, single-host |
| overlay | Multi-host networking using VXLAN tunnels. | Docker Swarm, multi-host |
| macvlan | Container gets its own MAC address on the physical network. Appears as a physical device. | Legacy integration, DHCP |
| none | No networking. | Security-sensitive workloads |
Bridge Networking (Default)
```
Host
├── docker0 bridge (172.17.0.1)
│   ├── container1 (172.17.0.2) ──┐
│   └── container2 (172.17.0.3) ──┘ veth pairs
│
└── eth0 (host NIC)  ← NAT/iptables rules
```
Each container gets a veth pair — one end in the container’s network namespace, the other attached to the docker0 bridge. Containers on the same bridge can communicate directly. External traffic goes through NAT (iptables masquerade).
Docker DNS
User-defined bridge networks provide built-in DNS — containers can reach each other by name. The default docker0 bridge does not provide DNS; use --link (deprecated) or create a custom network.
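As a sketch, a minimal Compose file (service and network names are illustrative) puts two containers on a user-defined bridge where each resolves the other by service name:

```yaml
# docker-compose.yml — both services join the user-defined bridge network
# "appnet", so "web" can reach the database at the hostname "db" via
# Docker's built-in DNS (served at 127.0.0.11 inside each container).
services:
  web:
    image: nginx:alpine
    networks:
      - appnet
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
    networks:
      - appnet

networks:
  appnet:
    driver: bridge   # user-defined bridge → automatic DNS, unlike docker0
```

Containers attached only to the default docker0 bridge would have to use IP addresses instead.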
Kubernetes Networking Model
Kubernetes imposes three fundamental rules:
- Every pod gets its own IP — No NAT between pods, even across nodes
- All pods can reach all other pods — Flat network, no manual routing
- The IP a pod sees for itself is the same IP others see — No hidden NAT translation
This creates a clean, flat network model. The implementation is delegated to CNI plugins.
Pod Networking
```
Node 1                               Node 2
├── Pod A (10.244.1.2)               ├── Pod C (10.244.2.2)
│   ├── container1 ─┐ share          │   └── container1
│   └── container2 ─┘ localhost      │
│                                    │
├── Pod B (10.244.1.3)               ├── Pod D (10.244.2.3)
│                                    │
└── veth/bridge/eBPF ── overlay/BGP ─┘
```
Containers within a pod share a network namespace — they communicate via localhost. Each pod gets a unique cluster-wide IP.
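A minimal sketch of namespace sharing (names and images are illustrative): a sidecar container reaches the main container over localhost because both share one network namespace and one pod IP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:alpine          # listens on port 80
  - name: sidecar
    image: curlimages/curl
    # Same network namespace: the sidecar reaches nginx via localhost —
    # no Service, no pod IP lookup needed.
    command: ["sh", "-c",
      "while true; do curl -s http://localhost:80/ >/dev/null; sleep 5; done"]
```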
CNI (Container Network Interface)
CNI plugins implement the Kubernetes networking model. They handle:
- Assigning pod IPs (IPAM)
- Configuring veth pairs or eBPF attachments
- Setting up cross-node connectivity (overlay, BGP, or direct routing)
- Enforcing NetworkPolicies
Plugin Comparison
| Plugin | Dataplane | Routing | Performance | Best For |
|---|---|---|---|---|
| Cilium | eBPF | Direct routing, VXLAN, Geneve | Highest | Production, security, observability |
| Calico | iptables or eBPF | BGP, VXLAN, direct | High | Large clusters, BGP environments |
| Flannel | VXLAN | VXLAN overlay | Moderate | Simple setups, learning |
| Weave | Mesh (user-space) | Encrypted mesh | Lower | Small clusters needing encryption |
Cilium
The dominant production CNI as of 2025. eBPF-native; used by default in GKE (Dataplane V2) and AKS (Azure CNI Powered by Cilium), and increasingly adopted in EKS.
Key advantages over iptables-based CNIs:
- O(1) lookup for service routing — iptables is O(n) with number of services, causing performance degradation at scale
- Replaces kube-proxy — Implements Services, load balancing, and NetworkPolicy entirely in eBPF
- Hubble — Built-in observability: real-time service maps, flow visibility with identity metadata, DNS-aware monitoring
- Tetragon — Runtime security enforcement at the kernel level (process execution, file access, network)
- Cluster Mesh — Connect multiple Kubernetes clusters with shared service discovery and pod-to-pod connectivity
- Gateway API support — Native implementation of the Kubernetes Gateway API, replacing traditional ingress controllers
Calico
Multi-dataplane by design — iptables (legacy), eBPF, and Windows. Uses BGP for direct routing in on-prem environments, avoiding overlay overhead, and has strong NetworkPolicy support. Its eBPF dataplane is newer and not as deeply integrated as Cilium’s.
Services
Kubernetes Services provide a stable IP and DNS name for a set of pods.
| Type | Description | Accessible From |
|---|---|---|
| ClusterIP | Internal-only virtual IP | Within the cluster |
| NodePort | Exposes on each node’s IP at a static port (30000-32767) | External, via node IP |
| LoadBalancer | Provisions a cloud load balancer | External, via LB IP |
| ExternalName | CNAME alias to external service | Within the cluster |
| Headless (ClusterIP: None) | Returns pod IPs directly, no proxy | Service discovery (StatefulSets) |
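A minimal ClusterIP Service (names illustrative) maps a stable virtual IP and DNS name onto pods matching a label selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service       # resolvable in-cluster as api-service.<namespace>.svc
spec:
  type: ClusterIP         # the default; may be omitted
  selector:
    app: api              # endpoints = pods carrying this label
  ports:
  - port: 80              # the Service's stable port
    targetPort: 8080      # the container port traffic is forwarded to
```

Setting `clusterIP: None` instead would make this a headless Service: DNS returns the pod IPs directly rather than a proxied virtual IP.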
kube-proxy
Traditionally implements Services by programming iptables or IPVS rules on each node. In iptables mode, performance degrades linearly with service count — problematic at scale. IPVS mode uses kernel hash tables for O(1) lookup. Being replaced by eBPF-based implementations (Cilium) in modern clusters.
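The mode is chosen via kube-proxy’s configuration; a sketch switching a node to IPVS (field names per the `kubeproxy.config.k8s.io/v1alpha1` API):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"            # "iptables" is the default on Linux
ipvs:
  scheduler: "rr"       # round-robin; kernel hash tables give O(1) lookup
```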
Ingress and Gateway API
Ingress
L7 routing rules for external HTTP(S) traffic. An Ingress controller (nginx, HAProxy, Traefik, Envoy) watches Ingress resources and configures the reverse proxy.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```
Gateway API
The successor to Ingress. More expressive, role-oriented, and supports TCP/UDP/gRPC natively.
Key improvements:
- Role separation — Infrastructure provider deploys GatewayClass, cluster operator creates Gateway, developers define HTTPRoute
- Protocol support — HTTPRoute, GRPCRoute, TCPRoute, UDPRoute, TLSRoute
- Traffic management — Request mirroring, header modification, weighted routing for canary deployments
- Supported by Cilium, Envoy Gateway, Istio, nginx Gateway Fabric, Traefik
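The role separation above can be sketched as two resources (class name, hostnames, and backend are illustrative; schema per the GA `gateway.networking.k8s.io/v1` API): the cluster operator’s Gateway, and a developer-owned HTTPRoute that attaches to it.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway          # created by the cluster operator
spec:
  gatewayClassName: cilium       # assumes a Cilium-provided GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route                # owned by the application team
spec:
  parentRefs:
  - name: example-gateway        # attach to the operator's Gateway
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 80
```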
Network Policies
Kubernetes-native L3/L4 firewall rules for pods. By default all traffic is allowed; once any policy selects a pod, that pod becomes deny-by-default for the policy’s direction, and policies are additive (allowed traffic is the union of all matching policies).
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
```
This allows only pods labelled app: frontend to reach app: api on port 8080. Requires a CNI that supports NetworkPolicy (Cilium, Calico — Flannel does not).
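The deny-by-default behaviour is often made explicit with a policy that selects every pod but allows nothing — a common baseline pattern, sketched here:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:       # listing a direction with no allow rules denies it
  - Ingress
  - Egress
```

Specific allow policies are then layered on top, since policies are additive.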
Cilium Network Policies extend this with L7 rules (HTTP method/path matching, DNS-aware policies, Kafka topic restrictions).
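For illustration (schema per Cilium’s `cilium.io/v2` CRD; names assumed), an L7 rule restricting frontend pods to GET requests on /api paths:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy            # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                    # L7: only GETs under /api are allowed
        - method: "GET"
          path: "/api/.*"
```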
Service Meshes
Service meshes add L7 observability, security, and traffic management between services.
| Mesh | Approach | Notes |
|---|---|---|
| Istio | Sidecar proxy (Envoy) or ambient mesh (node-level proxy) | Feature-rich, complex. Ambient mode (2025) removes sidecar overhead. |
| Linkerd | Sidecar proxy (Rust-based) | Lightweight, simple. Good for smaller deployments. |
| Cilium | eBPF (no sidecar) | Mesh capabilities built into the CNI. No additional proxy overhead. |
Do You Need a Service Mesh?
The trend in 2025-2026 is convergence — CNIs like Cilium now provide mTLS, observability, and traffic management that previously required a separate mesh. Evaluate whether your CNI already covers your requirements before adding a service mesh layer.
See Also
- Load Balancing — L4/L7 load balancing concepts
- eBPF and XDP — The technology underlying Cilium
- VLANs — Traditional network segmentation (vs Kubernetes NetworkPolicies)
- DNS — Kubernetes uses CoreDNS for internal service discovery