Ingress-NGINX Retirement — What DevOps Teams Should Do Next
If you’ve been around Kubernetes for a while, this combo probably feels like home:
- Deployment
- Service
- Ingress
- ingress-nginx-controller
It’s been the default “good enough” edge for years.
But Kubernetes SIG Network + the Security Response Committee announced that the community Ingress-NGINX controller is being retired. Best-effort maintenance continues until March 2026 — after that: no releases, no bug fixes, no security patches. Your running setups won’t magically stop working, but you will be running an unmaintained edge component.
Source: Kubernetes blog announcement.
TL;DR (for busy humans)
- Ingress (API) is not “removed tomorrow” — your YAML will still apply.
- Ingress-NGINX (the community controller) is retiring → plan a replacement.
- Gateway API is the modern direction (cleaner model, fewer annotations, better multi-tenancy).
- You have 3 practical paths:
- move to Gateway API (recommended),
- move to a supported Ingress controller,
- move to a cloud-managed gateway / LB integration.
What exactly is being retired?
Important nuance:
- Ingress = the Kubernetes API object (kind: Ingress)
- Ingress-NGINX = a specific community controller implementation (the one most people installed from kubernetes/ingress-nginx)
The retirement is about the controller project’s maintenance lifecycle, not about your cluster instantly throwing 503s.
Why this is happening (in plain DevOps terms)
1) Too much risk sitting on too few maintainers
Ingress controllers sit at the edge. That means:
- internet traffic
- TLS termination
- auth headers
- request rewriting
- WAF-ish behaviors
If the project can’t sustainably ship fixes, keeping it “official” becomes a security problem.
2) The Ingress model hit its ceiling
Ingress was intentionally small. The ecosystem “extended” it with annotations, snippets, and controller-specific behaviors. That worked… until it got messy.
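For a concrete sense of "messy": here is the kind of annotation stack a single Ingress often accumulated. The annotation keys are real Ingress-NGINX ones; the values are illustrative, and the auth endpoint is hypothetical.

```yaml
# Illustrative only: behavior crammed into controller-specific annotations.
# None of this is portable to any other Ingress controller.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/verify"  # hypothetical
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-Id: $req_id";
```

Every one of these behaviors is invisible to `kubectl explain ingress`, which is exactly the "YAML archaeology" problem Gateway API is designed to fix.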
3) Kubernetes networking is moving to Gateway API
Gateway API is the “grown-up” traffic model:
- platform teams define entry points
- app teams define routes
- policies become first-class objects (less YAML archaeology)
Step 0: Check if you’re using Ingress-NGINX
# Find ingress-nginx pods (common label)
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
# See which IngressClass points to it
kubectl get ingressclass -o wide
# See which apps still use Ingress
kubectl get ingress -A
If you see ingress-nginx-controller running, this post is for you.
What replaces Ingress-NGINX?
Short answer:
Gateway API + a supported controller (vendor or OSS).
Common options:
- Envoy Gateway
- Istio (Gateway API support)
- Traefik (Ingress + Gateway API)
- NGINX Gateway Fabric (different project than ingress-nginx)
- Cloud-managed L7 gateways (AKS/EKS/GKE controllers)
A useful community overview (with "what's proposed next" framing) and the upstream Gateway API project are both linked in the Handy links section at the end.
Before & After: Ingress vs Gateway API
Before: classic Ingress (what most clusters look like today)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app
namespace: apps
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app
port:
number: 80
After: Gateway + HTTPRoute (clean separation)
Gateway (owned by platform team):
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: shared-gw
namespace: networking
spec:
gatewayClassName: nginx
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
exposure: "public"
HTTPRoute (owned by app team):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: my-app
namespace: apps
spec:
parentRefs:
- name: shared-gw
namespace: networking
hostnames:
- "app.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: my-app
port: 80
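One gotcha worth calling out: the Gateway's allowedRoutes selector matches labels on the route's Namespace, not on the HTTPRoute object itself. So for the route above to attach, the apps namespace needs the label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    exposure: "public"
```

This is the RBAC-friendly part: whoever controls namespace labels controls which teams can publish through the shared gateway.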
Why this feels better in real life:
- no controller-specific annotations for basic things
- platform team controls who can attach routes
- app teams don’t need cluster-admin to publish traffic
TLS example (recommended pattern)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: shared-gw
namespace: networking
spec:
gatewayClassName: nginx
listeners:
- name: https
protocol: HTTPS
port: 443
hostname: app.example.com
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: wildcard-example-com
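A common companion to TLS termination is redirecting plain HTTP to HTTPS. In Gateway API that's a first-class filter instead of an annotation. A sketch, assuming the Gateway also keeps an HTTP listener named http on port 80:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: networking
spec:
  parentRefs:
  - name: shared-gw
    sectionName: http        # attach only to the port-80 listener
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```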
Canary / weighted routing (without annotation soup)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: my-app
namespace: apps
spec:
parentRefs:
- name: shared-gw
namespace: networking
hostnames:
- "app.example.com"
rules:
- backendRefs:
- name: my-app-v1
port: 80
weight: 90
- name: my-app-v2
port: 80
weight: 10
This is one of those “why didn’t we always have this?” moments.
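Weights shift a slice of live traffic; for targeted testing before any weight shift, Gateway API also supports header-based matching, so a test header can route straight to v2. A sketch — x-canary is a made-up header name:

```yaml
# Requests carrying the test header go to v2; everything else
# falls through to the weighted rule.
rules:
- matches:
  - headers:
    - name: x-canary
      value: "v2"
  backendRefs:
  - name: my-app-v2
    port: 80
- backendRefs:
  - name: my-app-v1
    port: 80
    weight: 90
  - name: my-app-v2
    port: 80
    weight: 10
```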
Real migration options (pick the one that fits your org)
Option A — Freeze + buy time (not great, but sometimes reality)
- Keep Ingress-NGINX running short term
- Lock versions
- Reduce risky features (snippets, exotic rewrites)
- Put a deadline on it
Good when: you’re mid-migration, teams are overloaded, and you need a controlled plan.
Option B — Switch to a supported Ingress controller (minimal YAML changes)
- Move to a vendor-backed / maintained controller
- Keep Ingress resources for now
- Migrate to Gateway API later
Good when: you want the smallest near-term change.
Option C — Move straight to Gateway API (recommended if you can)
- Adopt Gateway API and a controller that supports it well
- Migrate line-by-line, app-by-app
- Standardize how traffic gets exposed
Good when: platform maturity matters (multi-tenant, RBAC boundaries, standardization).
A pragmatic migration plan (what I’d do)
1) Inventory

kubectl get ingress -A -o yaml > ingress-inventory.yaml

2) Classify
- easy: host + basic paths
- medium: rewrites, headers, auth
- hard: custom snippets, regex heavy, weird edge cases

3) Stand up a Gateway API controller in parallel
- new Gateway
- one test namespace
- easy app first

4) Migrate easy apps
- prove: TLS, logs, metrics, dashboards
- prove: rollback works

5) Handle hard apps
- decide if they should:
- stay on Ingress with a supported controller temporarily, or
- be refactored into Gateway API patterns
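The classify step can be rough-cut with grep before anyone reads YAML by hand. A sketch: the heredoc below stands in for a real `kubectl get ingress -A -o yaml` export, and the annotation buckets (snippets/regex = hard, rewrites/auth = medium) are my own heuristic, not an official taxonomy.

```shell
# Point INVENTORY at your real export and delete the sample heredoc.
INVENTORY=ingress-inventory.yaml
cat > "$INVENTORY" <<'EOF'
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
---
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      return 301 /new;
EOF

# Heuristic buckets: snippets/regex need human attention; rewrites/auth
# usually map to Gateway API filters; the rest are likely easy.
hard=$(grep -cE 'nginx\.ingress\.kubernetes\.io/(server-snippet|configuration-snippet|use-regex)' "$INVENTORY")
medium=$(grep -cE 'nginx\.ingress\.kubernetes\.io/(rewrite-target|auth-url|auth-signin)' "$INVENTORY")
total=$(grep -c '^kind: Ingress' "$INVENTORY")
echo "total=$total hard=$hard medium=$medium"
```

Anything flagged hard goes to the "Handle hard apps" bucket above; the rest can queue for early migration.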
Bonus: ingress2gateway — automated “first draft” Gateway API YAML
To help teams hit by the Ingress-NGINX retirement, Kubernetes SIG Network maintains an open-source helper called ingress2gateway:
- What it does: reads your existing Ingress resources (and common Ingress-NGINX patterns) and generates the closest equivalent Gateway API objects (Gateway, HTTPRoute, etc.).
- Why it's useful: it turns a big manual rewrite into a review + tweak job, which is usually where you want your engineers spending time.
Think of it as a migration accelerator — not a magic wand. You still need to review the output, validate behavior in staging, and adjust any controller-specific annotations/snippets that don’t translate 1:1.
Project: kubernetes-sigs/ingress2gateway on GitHub.
Quick start (review-first workflow)
# Export current ingresses
kubectl get ingress -A -o yaml > ingress-inventory.yaml
# Generate a draft Gateway API manifest (from that inventory)
ingress2gateway print --providers=ingress-nginx \
--input-file=ingress-inventory.yaml \
> gateway-api-draft.yaml
# Review it like code, then apply in staging
kubectl apply -f gateway-api-draft.yaml
Handy links
- Kubernetes retirement announcement (timeline + details): https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/
- Gateway API docs: https://gateway-api.sigs.k8s.io/
- Upstream guide: migrating from Ingress to Gateway API: https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/
- Community writeup on "what's proposed next": https://medium.com/@prayag-sangode/kubernetes-ingress-deprecation-whats-proposed-next-d1350f9f0345
Final thoughts
Ingress-NGINX has been the default edge for a lot of clusters — including serious production.
This retirement isn’t a panic button, but it is a deadline for platform teams.
If you do nothing, you’ll still route traffic in 2026… but you’ll be doing it with an unpatched edge.
The best time to start migrating was “sometime last quarter”.
The second best time is this week.