Use Manifests to install NGINX Gateway Fabric with NGINX Open Source

This page describes how to use Manifests to install NGINX Gateway Fabric with NGINX Open Source.

It explains how to install the Gateway API resources and add certificates for secure authentication, then deploy NGINX Gateway Fabric and its CRDs (custom resource definitions).

By following these instructions, you will finish with a functional NGINX Gateway Fabric instance for your Kubernetes cluster.

To learn which Gateway API resources NGINX Gateway Fabric currently supports, view the Gateway API Compatibility topic.

Before you begin

To complete this guide, you will need the following prerequisites:

Install the Gateway API resources

If you have already installed Gateway API resources in your cluster, ensure they are a version supported by NGINX Gateway Fabric.
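One way to check which version is installed, assuming the CRDs came from the upstream Gateway API release manifests (which set a bundle-version annotation on each CRD), is to read that annotation:

kubectl get crd gatewayclasses.gateway.networking.k8s.io \
  -o jsonpath='{.metadata.annotations.gateway\.networking\.k8s\.io/bundle-version}'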

To install the Gateway API resources, use kubectl kustomize:

kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v2.2.2" | kubectl apply -f -
Example output
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/grpcroutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created

You should also create the nginx-gateway namespace, which is used by the Manifest files by default:

kubectl create namespace nginx-gateway

Add certificates for secure authentication

These steps use a self-signed issuer, which should not be used in production environments. For production environments, you should use a real CA issuer.
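These steps also assume that cert-manager is already installed in your cluster. A quick way to confirm this, assuming cert-manager runs in its default cert-manager namespace, is to check its pods:

kubectl get pods -n cert-manager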

First, create a CA (certificate authority) issuer:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: nginx-gateway
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-gateway-ca
  namespace: nginx-gateway
spec:
  isCA: true
  commonName: nginx-gateway
  secretName: nginx-gateway-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: nginx-gateway-issuer
  namespace: nginx-gateway
spec:
  ca:
    secretName: nginx-gateway-ca
EOF
Example output
issuer.cert-manager.io/selfsigned-issuer created
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/nginx-gateway-ca created
issuer.cert-manager.io/nginx-gateway-issuer created

You will then need to create a server certificate for the NGINX Gateway Fabric control plane (server):

The default service name is nginx-gateway, and the namespace is nginx-gateway, so the dnsNames value should be nginx-gateway.nginx-gateway.svc.

This value matches the in-cluster DNS name of the NGINX Gateway Fabric control plane Service.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-gateway
  namespace: nginx-gateway
spec:
  secretName: server-tls
  usages:
  - digital signature
  - key encipherment
  dnsNames:
  - nginx-gateway.nginx-gateway.svc
  issuerRef:
    name: nginx-gateway-issuer
EOF

Since the TLS Secrets are mounted into each pod that uses them, the NGINX agent (client) Secret is duplicated by the NGINX Gateway Fabric control plane into the namespace where NGINX is deployed.

All updates to the source Secret are propagated to the duplicate Secrets.
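If you want to verify this behaviour later, once NGINX has been deployed, one rough check (a sketch: it simply lists every TLS Secret in the cluster so you can spot the duplicate in the NGINX namespace) is:

kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/tls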

Add the certificate for the NGINX agent (client):

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx
  namespace: nginx-gateway
spec:
  secretName: agent-tls
  usages:
  - "digital signature"
  - "key encipherment"
  dnsNames:
  - "*.cluster.local"
  issuerRef:
    name: nginx-gateway-issuer
EOF

agent-tls is the default Secret name. If you use a different name, provide it when installing NGINX Gateway Fabric using the agent-tls-secret argument.
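For example, if you named the Secret my-agent-tls (a hypothetical name), you would pass it to the control plane by editing the container arguments of the nginx-gateway Deployment in the manifest before applying it. This sketch assumes the argument takes the usual --agent-tls-secret=<name> form:

# Hypothetical excerpt from the nginx-gateway container's args in deploy.yaml
args:
- --agent-tls-secret=my-agent-tls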

You should see the Secrets created in the nginx-gateway namespace:

kubectl -n nginx-gateway get secrets
Example output
NAME               TYPE                DATA   AGE
agent-tls          kubernetes.io/tls   3      3s
nginx-gateway-ca   kubernetes.io/tls   3      15s
server-tls         kubernetes.io/tls   3      8s

Deploy the NGINX Gateway Fabric CRDs

Deploy the NGINX Gateway Fabric CRDs using kubectl apply:

kubectl apply --server-side -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/crds.yaml
Example output
customresourcedefinition.apiextensions.k8s.io/clientsettingspolicies.gateway.nginx.org serverside-applied
customresourcedefinition.apiextensions.k8s.io/nginxgateways.gateway.nginx.org serverside-applied
customresourcedefinition.apiextensions.k8s.io/nginxproxies.gateway.nginx.org serverside-applied
customresourcedefinition.apiextensions.k8s.io/observabilitypolicies.gateway.nginx.org serverside-applied
customresourcedefinition.apiextensions.k8s.io/snippetsfilters.gateway.nginx.org serverside-applied
customresourcedefinition.apiextensions.k8s.io/upstreamsettingspolicies.gateway.nginx.org serverside-applied

Deploy NGINX Gateway Fabric

By default, NGINX Gateway Fabric is installed in the nginx-gateway namespace.

If you want to deploy it in another namespace, you must modify the Manifest files.
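For example, a rough way to do this, assuming a hypothetical target namespace called my-namespace and the default manifest, is to rewrite the namespace fields before applying. This sketch only rewrites namespace: fields, so review the manifest (and the certificate dnsNames above) for any other references that need updating:

curl -sL https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/default/deploy.yaml \
  | sed 's/namespace: nginx-gateway/namespace: my-namespace/g' \
  | kubectl apply -f -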

Your next step depends on how you intend to expose NGINX Gateway Fabric:

To deploy NGINX Gateway Fabric with NGINX Open Source, use this kubectl command:

kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/default/deploy.yaml

To deploy NGINX Gateway Fabric with NGINX Open Source and an AWS Network Load Balancer, use this kubectl command:

kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/default/deploy.yaml

To set up an AWS Network Load Balancer service, add these annotations to your Gateway infrastructure field (a minimal Gateway showing where this field sits is sketched in the Access NGINX Gateway Fabric section below):

spec:
  infrastructure:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"

To deploy NGINX Gateway Fabric with NGINX Open Source and a nodeSelector for Azure (AKS) clusters, use this kubectl command:

kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/azure/deploy.yaml

To deploy NGINX Gateway Fabric with NGINX Open Source and a NodePort service, use this kubectl command:

kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/nodeport/deploy.yaml

To deploy NGINX Gateway Fabric with NGINX Open Source on OpenShift, use this kubectl command:

kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v2.2.2/deploy/openshift/deploy.yaml
Example output
namespace/nginx-gateway configured
serviceaccount/nginx-gateway created
serviceaccount/nginx-gateway-cert-generator created
role.rbac.authorization.k8s.io/nginx-gateway-cert-generator created
clusterrole.rbac.authorization.k8s.io/nginx-gateway created
rolebinding.rbac.authorization.k8s.io/nginx-gateway-cert-generator created
clusterrolebinding.rbac.authorization.k8s.io/nginx-gateway created
service/nginx-gateway created
deployment.apps/nginx-gateway created
job.batch/nginx-gateway-cert-generator created
gatewayclass.gateway.networking.k8s.io/nginx created
nginxgateway.gateway.nginx.org/nginx-gateway-config created
nginxproxy.gateway.nginx.org/nginx-gateway-proxy-config created

Verify the Deployment

To confirm that NGINX Gateway Fabric is running, check the pods in the nginx-gateway namespace:

kubectl get pods -n nginx-gateway

The output should look similar to this (the pod name will include a unique string):

NAME                             READY   STATUS    RESTARTS   AGE
nginx-gateway-694897c587-bbz62   1/1     Running   0          29s

Access NGINX Gateway Fabric

When NGINX Gateway Fabric is installed, it provisions a ClusterIP Service used only for internal communication between the control plane and data planes.
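If you want to see this Service, list the Services in the installation namespace (this assumes the default nginx-gateway namespace):

kubectl -n nginx-gateway get svc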

To deploy NGINX itself and get a LoadBalancer Service, you should follow the Deploy a Gateway for data plane instances instructions.
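As a rough preview of what that involves, a minimal Gateway such as the following sketch (the name, namespace, and listener are placeholders; the nginx GatewayClass is created by the deployment above, and provider-specific annotations such as the AWS ones shown earlier go under spec.infrastructure.annotations) causes NGINX Gateway Fabric to provision an NGINX data plane and its Service:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
EOF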

Next steps