Upstream Settings Policy API
Learn how to use the UpstreamSettingsPolicy API.
The UpstreamSettingsPolicy API allows Application Developers to configure the behavior of a connection between NGINX and the upstream applications.
The settings in UpstreamSettingsPolicy correspond to the following NGINX directives:
- `zone`
- `keepalive`
- `keepalive_requests`
- `keepalive_time`
- `keepalive_timeout`
- `random`
- `least_conn`
- `least_time`
- `upstream`
- `ip_hash`
- `hash`
- NGINX variables
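For orientation, the sketch below shows how the main `spec` fields used in this guide line up with those directives. It is illustrative only: the names `example` and `my-service` are placeholders, and the values are not recommendations.

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: example                        # hypothetical policy name
spec:
  targetRefs:                          # Services in the same namespace
  - group: core
    kind: Service
    name: my-service                   # hypothetical Service name
  zoneSize: 512k                       # -> zone
  loadBalancingMethod: "least_conn"    # -> least_conn (or another method)
  keepAlive:
    connections: 16                    # -> keepalive
    requests: 100                      # -> keepalive_requests
    time: 1h                           # -> keepalive_time
    timeout: 60s                       # -> keepalive_timeout
```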
UpstreamSettingsPolicy is a Direct Policy Attachment that can be applied to one or more services in the same namespace as the policy.
UpstreamSettingsPolicies can only be applied to HTTP or gRPC services; in other words, services that are referenced by an HTTPRoute or GRPCRoute.
See the custom policies document for more information on policies.
This guide will show you how to use the UpstreamSettingsPolicy API to configure the load balancing method, upstream zone size, and keepalive connections for your applications.
For all the possible configuration options for UpstreamSettingsPolicy, see the API reference.
Before you begin:

- Install NGINX Gateway Fabric.
Create the coffee and tea example applications:
```yaml
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tea
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: tea
EOF
```

This will create two services and pods in the default namespace:
```shell
kubectl get svc,pod -n default
```

```text
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/coffee   ClusterIP   10.244.0.14   <none>        80/TCP    23h
service/tea      ClusterIP   10.244.0.15   <none>        80/TCP    23h

NAME                          READY   STATUS    RESTARTS   AGE
pod/coffee-676c9f8944-n9g6n   1/1     Running   0          23h
pod/tea-6fbfdcb95d-cf84d      1/1     Running   0          23h
```

Create a Gateway:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    hostname: "*.example.com"
EOF
```

After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and a Service fronting it to route traffic.
Save the public IP address and port of the NGINX Service into shell variables:

```text
GW_IP=XXX.YYY.ZZZ.III
GW_PORT=<port number>
```

Look up the name of the NGINX Pod and save it into a shell variable:

```text
NGINX_POD_NAME=<NGINX Pod>
```

In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
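If you are unsure where to find these values, the commands below sketch one way to look them up. The resource names are assumptions: they presume the provisioned NGINX Service and Pod are named after the Gateway (here `gateway`), live in the default namespace, and that the Service is of type LoadBalancer; adjust them to match your cluster.

```shell
# Sketch only: "gateway-nginx" is an assumed name derived from the Gateway "gateway".
GW_IP=$(kubectl get svc gateway-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
GW_PORT=$(kubectl get svc gateway-nginx -o jsonpath='{.spec.ports[0].port}')
# Grab the first Pod whose name matches the assumed prefix.
NGINX_POD_NAME=$(kubectl get pods -o name | grep gateway-nginx | head -n 1 | cut -d/ -f2)
```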
Create HTTPRoutes for the coffee and tea applications:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: gateway
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: Exact
        value: /coffee
    backendRefs:
    - name: coffee
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tea
spec:
  parentRefs:
  - name: gateway
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: Exact
        value: /tea
    backendRefs:
    - name: tea
      port: 80
EOF
```

Test the configuration:
You can send traffic to the coffee and tea applications using the external IP address and port of the NGINX Service.

Send a request to coffee:

```shell
curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee
```

This request should receive a response from the coffee Pod:

```text
Server address: 10.244.0.9:8080
Server name: coffee-76c7c85bbd-cf8nz
```

Send a request to tea:

```shell
curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea
```

This request should receive a response from the tea Pod:

```text
Server address: 10.244.0.9:8080
Server name: tea-76c7c85bbd-cf8nz
```

You can use UpstreamSettingsPolicy to configure the load balancing method for the coffee and tea applications. In this example, the coffee service uses the `random two least_time=header` method, and the tea service uses the `hash consistent` method with `$upstream_addr` as the hash key.
You need to specify an NGINX variable as `hashMethodKey` when using the load balancing methods `hash` and `hash consistent`.
Create the following UpstreamSettingsPolicy resources:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: lb-method
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  loadBalancingMethod: "random two least_time=header"
EOF
```

```yaml
kubectl apply -f - <<'EOF'
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: lb-method-hash
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  loadBalancingMethod: "hash consistent"
  hashMethodKey: "$upstream_addr"
EOF
```

These two UpstreamSettingsPolicy resources target the coffee and tea Services and configure different load balancing methods for their upstreams. Note the quoted `<<'EOF'` in the second command, which prevents the shell from expanding `$upstream_addr`.

Verify that the UpstreamSettingsPolicies are Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org lb-method
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-12-09T20:41:55Z
      Message:               The Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
```

The lb-method-hash policy should show the same Accepted condition.
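If you prefer a one-line check over reading the full describe output, a jsonpath query along these lines should also work (it assumes the policy has a single Gateway ancestor at index 0):

```shell
# Prints "True" when the policy has been accepted.
kubectl get upstreamsettingspolicies.gateway.nginx.org lb-method \
  -o jsonpath='{.status.ancestors[0].conditions[?(@.type=="Accepted")].status}'
```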
Next, verify that the policies have been applied to the coffee and tea upstreams by inspecting the NGINX configuration:
```shell
kubectl exec -it -n <NGINX-pod-namespace> $NGINX_POD_NAME -- nginx -T
```

You should see the `random two least_time=header` directive in the coffee upstream and `hash $upstream_addr consistent` in the tea upstream:

```text
upstream default_coffee_80 {
    random two least_time=header;
    zone default_coffee_80 1m;
    state /var/lib/nginx/state/default_coffee_80.conf;
    keepalive 16;
}
upstream default_tea_80 {
    hash $upstream_addr consistent;
    zone default_tea_80 1m;
    state /var/lib/nginx/state/default_tea_80.conf;
    keepalive 16;
}
```

NGINX Open Source supports the following load-balancing methods: `round_robin`, `least_conn`, `ip_hash`, `hash`, `hash consistent`, `random`, `random two`, and `random two least_conn`. NGINX Plus supports all of the methods available in NGINX Open Source, and adds the following: `random two least_time=header`, `random two least_time=last_byte`, `least_time header`, `least_time last_byte`, `least_time header inflight`, and `least_time last_byte inflight`.
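If your data plane runs NGINX Open Source rather than NGINX Plus, the `random two least_time=header` method used above will not be available. Below is a sketch of an equivalent policy using an Open Source method such as `least_conn`; the policy name is hypothetical:

```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: lb-method-oss      # hypothetical name for this variant
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  loadBalancingMethod: "least_conn"   # supported by NGINX Open Source
EOF
```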
To set the upstream zone size to 1 megabyte for both the coffee and tea services, create the following UpstreamSettingsPolicy:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: 1m-zone-size
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  - group: core
    kind: Service
    name: coffee
  zoneSize: 1m
EOF
```

This UpstreamSettingsPolicy targets both the coffee and tea services we created in the setup by specifying both services in the targetRefs field. It limits the upstream zone size of the coffee and tea services to 1 megabyte.
Verify that the UpstreamSettingsPolicy is Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org 1m-zone-size
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-01-07T20:06:55Z
      Message:               Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
Events:  <none>
```

Next, verify that the policy has been applied to the coffee and tea upstreams by inspecting the NGINX configuration:
```shell
kubectl exec -it -n <NGINX-pod-namespace> $NGINX_POD_NAME -- nginx -T
```

You should see the zone directive in the coffee and tea upstreams both specify the size 1m:

```text
upstream default_coffee_80 {
    random two least_conn;
    zone default_coffee_80 1m;
    server 10.244.0.14:8080;
    keepalive 16;
}
upstream default_tea_80 {
    random two least_conn;
    zone default_tea_80 1m;
    server 10.244.0.15:8080;
    keepalive 16;
}
```

By default, the `keepalive` directive is enabled with a value of 16. You can override this value or disable keepalive entirely by configuring an UpstreamSettingsPolicy. To disable keepalive, set the `connections` field to 0.
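Beyond `connections`, the `keepAlive` field also exposes `requests`, `time`, and `timeout`, corresponding to the `keepalive_requests`, `keepalive_time`, and `keepalive_timeout` directives. A minimal sketch with illustrative values, not applied as part of this guide's steps:

```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: keepalive-tuning   # hypothetical name
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  keepAlive:
    connections: 16   # keepalive
    requests: 1000    # keepalive_requests
    time: 1h          # keepalive_time
    timeout: 60s      # keepalive_timeout
EOF
```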
The following example creates an UpstreamSettingsPolicy that configures keepalive connections for the coffee Service with a value of 32:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: upstream-keepalives
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  keepAlive:
    connections: 32
EOF
```

This UpstreamSettingsPolicy targets the coffee service in the targetRefs field. It sets the number of keepalive connections to 32, which activates the cache for connections to the service's pods and sets the maximum number of idle connections to 32.
Verify that the UpstreamSettingsPolicy is Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org upstream-keepalives
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-01-07T20:06:55Z
      Message:               Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
Events:  <none>
```

Next, verify that the policy has been applied to the coffee upstream by inspecting the NGINX configuration:
```shell
kubectl exec -it -n <NGINX-pod-namespace> $NGINX_POD_NAME -- nginx -T
```

You should see that the coffee upstream has the keepalive directive set to 32:

```text
upstream default_coffee_80 {
    random two least_conn;
    zone default_coffee_80 1m;
    server 10.244.0.14:8080;
    keepalive 32;
}
```

To disable keepalive connections, create an UpstreamSettingsPolicy targeting the tea service with `connections` set to 0:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: upstream-unset-keepalive
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  keepAlive:
    connections: 0
EOF
```

Verify that the UpstreamSettingsPolicy is Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org upstream-unset-keepalive
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2026-01-03T00:35:45Z
      Message:               The Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
```

Next, verify that the policy has been applied to the tea upstream by inspecting the NGINX configuration:
```shell
kubectl exec -it -n <NGINX-pod-namespace> $NGINX_POD_NAME -- nginx -T
```

You should see that the tea upstream no longer contains a keepalive directive:

```text
upstream default_tea_80 {
    random two least_conn;
    zone default_tea_80 1m;
    server 10.244.0.15:8080;
}
```

- Custom policies: learn about how NGINX Gateway Fabric custom policies work.
- API reference: all configuration fields for the UpstreamSettingsPolicy API (see the sketch below).
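If the UpstreamSettingsPolicy CRD is installed in your cluster, kubectl explain offers another way to browse those fields; a quick sketch:

```shell
# Walk the UpstreamSettingsPolicy schema directly from the cluster.
kubectl explain upstreamsettingspolicies.spec
kubectl explain upstreamsettingspolicies.spec.keepAlive
```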