Upstream Settings Policy API
Learn how to use the UpstreamSettingsPolicy API.
The UpstreamSettingsPolicy API allows Application Developers to configure the behavior of the connections between NGINX and their upstream applications.

The settings in UpstreamSettingsPolicy correspond to the following NGINX directives:

- `zone`
- `keepalive`
- `keepalive_requests`
- `keepalive_time`
- `keepalive_timeout`

UpstreamSettingsPolicy is a Direct Policy Attachment that can be applied to one or more Services in the same namespace as the policy. UpstreamSettingsPolicies can only be applied to HTTP or gRPC Services; in other words, Services that are referenced by an HTTPRoute or GRPCRoute. See the custom policies document for more information on policies.

This guide shows how to use the UpstreamSettingsPolicy API to configure the upstream zone size and keepalive connections for your applications. For all of the configuration options for UpstreamSettingsPolicy, see the API reference.
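As a preview of the API's shape, a single policy can set both the zone size and keepalive behavior for one or more Services. The manifest below is a minimal sketch; the policy name, Service name, and values are illustrative, not part of the setup used later in this guide:

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: example-usp        # illustrative name
spec:
  targetRefs:              # the Services this policy attaches to
  - group: core
    kind: Service
    name: my-service       # hypothetical Service in the same namespace
  zoneSize: 512k           # rendered as the upstream zone directive
  keepAlive:
    connections: 16        # rendered as the upstream keepalive directive
```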
- Install NGINX Gateway Fabric.
- Save the public IP address and port of NGINX Gateway Fabric into shell variables (one way to look these up is shown after this list):

  ```text
  GW_IP=XXX.YYY.ZZZ.III
  GW_PORT=<port number>
  ```

- Look up the name of the NGINX Gateway Fabric pod and save it into a shell variable:

  ```text
  NGF_POD_NAME=<NGF Pod>
  ```

Note: In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
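If you are unsure where to find these values, the commands below are one way to populate the variables. They assume NGF is installed in the `nginx-gateway` namespace and exposed through a LoadBalancer Service named `nginx-gateway`; adjust the names to match your installation:

```shell
# Assumptions: NGF runs in the nginx-gateway namespace and is exposed by a
# LoadBalancer Service named nginx-gateway -- adjust both to your install.
# Some cloud providers set .hostname instead of .ip on the load balancer.
GW_IP=$(kubectl get svc nginx-gateway -n nginx-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
GW_PORT=$(kubectl get svc nginx-gateway -n nginx-gateway \
  -o jsonpath='{.spec.ports[0].port}')
NGF_POD_NAME=$(kubectl get pods -n nginx-gateway \
  -o jsonpath='{.items[0].metadata.name}')
echo "$GW_IP $GW_PORT $NGF_POD_NAME"
```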
Create the coffee and tea example applications:
```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tea
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: tea
EOF
```

This will create two services and pods in the default namespace:
```shell
kubectl get svc,pod -n default
```

```text
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/coffee   ClusterIP   10.244.0.14   <none>        80/TCP    23h
service/tea      ClusterIP   10.244.0.15   <none>        80/TCP    23h

NAME                          READY   STATUS    RESTARTS   AGE
pod/coffee-676c9f8944-n9g6n   1/1     Running   0          23h
pod/tea-6fbfdcb95d-cf84d      1/1     Running   0          23h
```

Create a Gateway:
```shell
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    hostname: "*.example.com"
EOF
```

Create HTTPRoutes for the coffee and tea applications:
```shell
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: gateway
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: Exact
        value: /coffee
    backendRefs:
    - name: coffee
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tea
spec:
  parentRefs:
  - name: gateway
    sectionName: http
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: Exact
        value: /tea
    backendRefs:
    - name: tea
      port: 80
EOF
```

Test the configuration:
You can send traffic to the coffee and tea applications using the external IP address and port for NGINX Gateway Fabric.
Send a request to coffee:
```shell
curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee
```

This request should receive a response from the coffee Pod:

```text
Server address: 10.244.0.9:8080
Server name: coffee-76c7c85bbd-cf8nz
```

Send a request to tea:

```shell
curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea
```

This request should receive a response from the tea Pod:

```text
Server address: 10.244.0.9:8080
Server name: tea-76c7c85bbd-cf8nz
```

To set the upstream zone size to 1 megabyte for both the coffee and tea services, create the following UpstreamSettingsPolicy:
```shell
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: 1m-zone-size
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  - group: core
    kind: Service
    name: coffee
  zoneSize: 1m
EOF
```

This UpstreamSettingsPolicy targets both the coffee and tea services created during setup by listing them in its targetRefs field, and it sets the upstream zone size of both services to 1 megabyte.
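Because a policy applies a single zoneSize to every Service in its targetRefs, giving each Service a different zone size requires separate policies. The sketch below illustrates this; the policy names and sizes are hypothetical:

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: coffee-zone-size   # hypothetical name
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  zoneSize: 512k           # illustrative value
---
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: tea-zone-size      # hypothetical name
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  zoneSize: 2m             # illustrative value
```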
Verify that the UpstreamSettingsPolicy is Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org 1m-zone-size
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-01-07T20:06:55Z
      Message:               Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
Events:  <none>
```

Next, verify that the policy has been applied to the coffee and tea upstreams by inspecting the NGINX configuration:
```shell
kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T
```

You should see that the zone directive in both the coffee and tea upstreams specifies the size 1m:

```text
upstream default_coffee_80 {
    random two least_conn;
    zone default_coffee_80 1m;
    server 10.244.0.14:8080;
}

upstream default_tea_80 {
    random two least_conn;
    zone default_tea_80 1m;
    server 10.244.0.15:8080;
}
```

To enable keepalive connections for the coffee service, create the following UpstreamSettingsPolicy:
```shell
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: upstream-keepalives
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  keepAlive:
    connections: 32
EOF
```

This UpstreamSettingsPolicy targets the coffee service in its targetRefs field. It sets the number of keepalive connections to 32, which activates the cache of connections to the service's pods and sets the maximum number of idle keepalive connections to 32.
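connections is the only keepalive setting used in this guide. The keepAlive stanza also exposes fields corresponding to the other keepalive directives listed at the start of this document; the manifest below is a sketch of how they might fit together, with illustrative values (consult the API reference for the authoritative field list):

```yaml
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: keepalive-tuning   # hypothetical name
spec:
  targetRefs:
  - group: core
    kind: Service
    name: coffee
  keepAlive:
    connections: 32   # maximum idle keepalive connections (keepalive)
    requests: 1000    # requests per connection before it closes (keepalive_requests)
    time: 1h          # total lifetime of a connection (keepalive_time)
    timeout: 60s      # idle timeout before a connection closes (keepalive_timeout)
```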
Verify that the UpstreamSettingsPolicy is Accepted:
```shell
kubectl describe upstreamsettingspolicies.gateway.nginx.org upstream-keepalives
```

You should see the following status:

```text
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-01-07T20:06:55Z
      Message:               Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
Events:  <none>
```

Next, verify that the policy has been applied to the coffee upstream by inspecting the NGINX configuration:
```shell
kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T
```

You should see that the coffee upstream has the keepalive directive set to 32:

```text
upstream default_coffee_80 {
    random two least_conn;
    zone default_coffee_80 1m;
    server 10.244.0.14:8080;
    keepalive 32;
}
```

Notice that the tea upstream does not contain the keepalive directive, since the upstream-keepalives policy does not target the tea service:
```text
upstream default_tea_80 {
    random two least_conn;
    zone default_tea_80 1m;
    server 10.244.0.15:8080;
}
```

- Custom policies: learn how NGINX Gateway Fabric custom policies work.
- API reference: all configuration fields for the UpstreamSettingsPolicy API.