How to Lock Down Egress in Kubernetes (Without Breaking Everything)
You know how people talk a lot about Ingress in Kubernetes, and then casually mention “oh yeah, you should also lock down egress” as if that is a one-line YAML change?
This guide is the opposite of that.
We will walk through:
- What egress actually is in Kubernetes
- Why it matters for threat modeling and SSDLC
- How NetworkPolicies affect egress
- A step by step way to move from “everything can talk to everything” to “reasonably locked down without blowing up prod”
Beginner-friendly, but technically correct.
1. Quick mental model: ingress vs egress
Very simple:
- Ingress: who can talk to your Pods
- Egress: where your Pods are allowed to talk out to
In a typical Kubernetes cluster:
- If you do not use NetworkPolicies and your CNI supports normal routing:
  - Pods can talk freely to each other inside the cluster
  - Pods can usually talk out to the internet or to anything reachable from the node network
This “everything can talk to everything” default is convenient for developers, and terrible from a security perspective.
2. Why egress control matters for threat modeling
From a threat modeling angle:
- Assets:
  - Data inside your apps
  - Credentials, tokens, secrets
- Attacker goal:
  - Get data or secrets off the cluster
- Common paths:
  - Compromised container calling an attacker server
  - Malicious dependency “phoning home”
  - Misconfigured app that accidentally sends data to the wrong place
If every Pod can talk to any IP and any port, then any compromise can become data exfiltration with one HTTP request.
Locking down egress is how you say:
“Even if this Pod is compromised, it can only talk to a small set of places that we have decided are OK.”
That is least privilege for outbound traffic.
3. How Kubernetes NetworkPolicies affect egress
A few key facts about NetworkPolicies:
- NetworkPolicies are namespaced objects
- They work at IP/port level (layer 3/4)
- They can control ingress, egress, or both
- Your cluster needs a CNI plugin that actually enforces them (Calico, Cilium, etc)
The important bit for egress:
- If no egress NetworkPolicy applies to a Pod:
  - The Pod is non-isolated for egress
  - All egress is allowed (subject to the underlying network)
- As soon as one egress NetworkPolicy selects a Pod:
  - The Pod is isolated for egress
  - Egress is only allowed if explicitly permitted by at least one of those policies (allow-list model)
So: adding a single egress rule flips that Pod from “allow all” to “deny by default.”
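To make that concrete, here is a minimal sketch of a standalone default-deny egress policy (the namespace name is just an example). On its own it blocks all outbound traffic, including DNS, for every Pod in the namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: my-namespace      # example namespace
spec:
  podSelector: {}              # selects every Pod in this namespace
  policyTypes:
  - Egress
  # no egress rules at all, so nothing outbound is allowed
Section 6 builds on exactly this pattern, but adds a DNS allowance so the namespace stays usable.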
4. Step 0: inventory legitimate egress first
Before writing YAML, figure out what “good” looks like.
For a given app or namespace, make a tiny egress inventory:
| Service / Destination | Type | Protocol / Port | Notes |
|---|---|---|---|
| Cluster DNS | Internal service | UDP/TCP 53 | Needed for name resolution |
| Internal API | Cluster Service | TCP 443 | payments-api namespace |
| Logging / metrics | SaaS or internal | TCP 443 | e.g. logs.example.com |
| Email provider | SaaS | TCP 443 | e.g. api.mailer.com |
You can do this during:
- Architecture design sessions
- Threat modeling workshops
- “What does this app depend on?” reviews
This list becomes the allow list that your NetworkPolicies will enforce.
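If you want the inventory to live next to the code, one low-tech option is a small YAML file in the app’s repository. This is only a sketch with made-up entries (documentation for humans and review tooling, not an object you apply to the cluster):
# egress-inventory.yaml (hypothetical file name)
app: payments-frontend            # example app
egress:
- name: cluster-dns
  type: internal
  ports: ["53/UDP", "53/TCP"]
  notes: needed for name resolution
- name: payments-api
  type: cluster-service
  ports: ["443/TCP"]
  notes: lives in the payments namespace
- name: logging
  type: saas
  ports: ["443/TCP"]
  notes: e.g. logs.example.com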
5. Step 1: create a safe playground namespace
Do not start in production.
Create or pick a non-critical namespace to experiment in:
kubectl create namespace egress-playground
kubectl label namespace egress-playground environment=egress-playground
Deploy a test Pod there:
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
  namespace: egress-playground
  labels:
    app: debug
spec:
  containers:
  - name: debug
    image: curlimages/curl:8.9.0
    command: ["sleep", "3600"]
Then start a shell in it:
kubectl -n egress-playground exec -it debug-pod -- sh
While in the Pod, test a few things:
# DNS
nslookup kubernetes.default
# In-cluster HTTPS (kubernetes.default is the API server Service and always exists;
# expect a 401/403 response body, which still proves connectivity)
curl -k https://kubernetes.default.svc.cluster.local
# External HTTP
curl https://example.com
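If you would rather curl a plain-HTTP in-cluster target than the API server, a minimal test Deployment and Service could look like this (the echo name and nginx image are just examples):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: egress-playground
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: nginx:1.27        # any small HTTP server will do
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: egress-playground
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 80
From the debug Pod, curl http://echo.egress-playground.svc.cluster.local should succeed while egress is still wide open, which gives you a known-good baseline before any policies are applied.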
6. Step 2: add a “deny most, allow DNS” egress policy
Now we make egress restrictive. We will:
- Select all Pods in the namespace
- Mark the policy as Egress only
- Allow egress only to DNS ports (53 TCP + UDP) on any IP
- Deny all other egress
Note: allowing DNS to “any IP” is a common first step, but it is not bulletproof against data exfiltration: attackers can tunnel data over port 53. Later you can tighten this rule to specific IP ranges or to the cluster DNS Pods themselves (there is a sketch of that at the end of this section).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress-with-dns
  namespace: egress-playground
spec:
  podSelector: {}   # select all Pods in this namespace
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
Apply it:
kubectl apply -f default-deny-egress-with-dns.yaml
Back in the debug Pod:
# DNS should still work
nslookup kubernetes.default
# But external HTTP should now fail or hang
curl https://example.com
At this point:
- Pods can resolve names
- Pods cannot actually connect to external endpoints on port 80/443
- Pods also cannot talk to internal services by IP unless those IPs are allowed
You have just moved that namespace from “anything goes” to “egress is mostly denied”.
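As the note above hinted, a later hardening step is to scope the DNS rule to the cluster DNS Pods instead of “port 53 to anywhere”. A sketch, assuming the usual CoreDNS labels (k8s-app: kube-dns in kube-system; verify in your cluster, distributions differ):
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
This replaces the broad “DNS to any IP” rule in the earlier policy. You can check the real labels with kubectl -n kube-system get pods --show-labels.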
7. Step 3: allow egress to specific internal services
Next, open only what you need. Example: allowing egress to an internal API Service. Assume you have an internal payments-api Deployment in the payments namespace, with the label app: payments-api. You can allow Pods in egress-playground to call Pods that match those labels:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-payments
  namespace: egress-playground
spec:
  podSelector: {}   # all Pods here
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: payments
      podSelector:
        matchLabels:
          app: payments-api
    ports:
    - protocol: TCP
      port: 443
Notes: you need to ensure the payments namespace actually carries a name=payments label, or adjust the selector (an alternative using the built-in namespace label is sketched below). This policy is additive with the earlier DNS policy: the allowed egress for a Pod is the union of everything permitted by all policies that select it.
In your debug Pod:
curl https://payments-api.payments.svc.cluster.local
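If you do not want to maintain a custom name=payments label, newer clusters automatically label every namespace with kubernetes.io/metadata.name, so the to block can use that instead (same effect, no manual labeling):
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: payments
    podSelector:
      matchLabels:
        app: payments-api
  ports:
  - protocol: TCP
    port: 443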
8. Step 4: allow egress to specific external ranges
For external systems, you often cannot use Pod selectors. You instead allow IP or CIDR ranges with ipBlock.
Assume your logging provider has a fixed IP range 203.0.113.0/24 (example docs range):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-logging
  namespace: egress-playground
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443
Now the Pods in egress-playground can:
- Resolve DNS
- Talk to payments-api
- Talk to IPs in 203.0.113.0/24 on port 443
Everything else is still blocked.
Reality check: some SaaS providers publish large or changing IP ranges. In those cases you may need to:
- Use your cloud’s egress NAT plus IP allowlists upstream, or
- Use CNI-specific features like FQDN-based policies (for example, Calico or GKE FQDNNetworkPolicy; see docs.tigera.io)
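If you are forced to allow a broad range, the standard API at least lets you carve exceptions out of it with ipBlock.except. A sketch of the common “internet on 443, but never our private ranges” fallback (the CIDRs are examples; adjust them to your own network):
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0          # example: anywhere...
      except:
      - 10.0.0.0/8             # ...except typical private / cluster ranges
      - 172.16.0.0/12
      - 192.168.0.0/16
  ports:
  - protocol: TCP
    port: 443
Treat this as a fallback, not a goal: it is much weaker than an explicit allow list of known destinations.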
9. Label strategy: group by service or team
Writing policies per individual Pod does not scale.
Instead, decide on a label scheme and use it consistently:
- app: payments-api
- team: fraud
- environment: prod / staging
Then write policies with podSelector.matchLabels to match those logical groups.
Example restricted to a single app:
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
  - Egress
  egress:
  # rules...
This is where SSDLC meets operations:
- During design, you define which app/namespace may talk to what
- During implementation, you encode that intent as NetworkPolicies over labels
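For example, if the design decision is “the fraud team’s workloads may only call other fraud-owned namespaces over HTTPS”, that intent could be encoded roughly like this (the namespace name is made up, and it assumes namespaces carry the team label from the scheme above):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: fraud-internal-egress
  namespace: fraud-scoring       # hypothetical namespace owned by the fraud team
spec:
  podSelector:
    matchLabels:
      team: fraud
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          team: fraud            # assumes namespaces are labelled with their owning team
    ports:
    - protocol: TCP
      port: 443
Combined with the DNS rule from earlier, that is often enough for a team’s in-cluster traffic.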
10. Testing your egress policies safely
10.1 Manual tests
Use the debug Pod pattern in each namespace:
A container image with:
- curl
- dig or nslookup
- maybe nc (netcat)
From the Pod:
# Should work
nslookup kubernetes.default
curl https://your-allowed-internal-service
# Should fail
curl https://example.com
curl https://some-random-site.com
If you expect something to work but it fails:
Check which policies apply:
kubectl -n egress-playground get networkpolicy
kubectl -n egress-playground describe networkpolicy <name>
Then confirm that the Pod’s labels match the podSelector you think they do.
10.2 Be aware of CNI differences
Different CNIs may have:
- Additional policy types (for example, Calico’s own CRDs)
- Slightly different defaults for when policy enforcement starts or how logging works (see, for example, docs.tigera.io for Calico)
Always cross check your behavior with your CNI’s docs.
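For illustration, the payments example from section 7 might look roughly like this as a Calico-native policy. This is only a rough sketch of Calico’s projectcalico.org/v3 NetworkPolicy CRD, and it reuses the name=payments namespace label assumed earlier; double-check the exact schema in the Calico docs before relying on it:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-egress-to-payments
  namespace: egress-playground
spec:
  selector: app == 'debug'        # Calico uses selector expressions, not matchLabels
  types:
  - Egress
  egress:
  - action: Allow
    protocol: TCP
    destination:
      namespaceSelector: name == 'payments'
      selector: app == 'payments-api'
      ports:
      - 443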
11. Where this fits in your SSDLC
Make egress control part of your regular development flow, not an afterthought.
Design & threat modeling
For each new service:
- List all external dependencies:
  - Internal APIs / namespaces
  - External SaaS
- Decide:
  - Which namespaces should be able to reach them
  - On what ports and protocols
Capture this in your threat model as “allowed egress list”.
Code review / PR templates
Add simple questions to your PR template:
- “Does this change introduce new outbound calls or dependencies?”
- “If yes, which NetworkPolicy needs to be updated?”
This keeps egress front of mind whenever code starts talking to the outside world.
Pre-prod checks
In your CI/CD pipeline, you can check that every new namespace or app has:
- At least one NetworkPolicy manifest
- Explicit policyTypes fields set (no relying on subtle defaults)
12. Opinionated egress lockdown checklist
You do not have to be perfect to be much better than “open egress everywhere”.
Use this checklist as you roll out:
- I have a playground namespace where I can safely break things.
- I have a simple inventory of required egress per app or namespace.
- At least one non-critical namespace uses a default-deny egress policy with an explicit DNS allowance.
- For apps in that namespace, I created policies that:
  - Allow only required internal services
  - Allow only required external ranges
- My team agreed on a label strategy (app, team, environment) and policies use those labels.
- Design and code review templates include “new egress?” questions.
- I have a repeatable way to test egress (debug Pods and simple curl/dig checks).
If you can honestly tick most of these, your Kubernetes egress posture is already far better than the default “let everything talk to everything.”
From there, you can grow into more advanced patterns like:
- Central egress gateways
- Service mesh egress policies
- FQDN-based rules from your CNI or cloud provider
But the foundation is the same: know what should be allowed out, and make that explicit in NetworkPolicies.