Releases: netscaler/netscaler-k8s-ingress-controller
Release 3.1.34
Version 3.1.34
What's new
Certificate key bundle support in NetScaler by using the NetScaler Ingress Controller
NetScaler Ingress Controller now supports the certificate key bundle (`certkeybundle`) functionality, which is available on NetScaler starting from release 14.1 build 21.x. This functionality resolves certificate chain issues and removes the additional handling that is required when two certificates share an intermediate CA. For more information on certificate key bundle support in NetScaler, see Support for SSL certificate key bundle.
Enhanced WAF policy control with the exclude option
You can now use the `exclude` option to define which URLs, headers, and methods the WAF policy must ignore. If this option is not configured, the WAF inspects all URLs and targets by default.
This enhancement improves the efficiency of managing WAF policies for microservices-based applications. You can create detailed lists of URLs to be excluded from WAF scanning, allowing for more precise policy enforcement. For example, you can configure the WAF to scan the URL `/a` while excluding `/a/c` from inspection. This enhancement also allows you to specify headers and HTTP methods to be excluded, offering greater flexibility and control over WAF policy configuration.
For more information, see Configure web application firewall policies with the NetScaler Ingress Controller.
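The following sketch illustrates how an exclusion list might be expressed in a WAF CRD instance. The `exclude` block and its field names (`urls`, `headers`, `methods`) are assumptions for illustration only; refer to the linked documentation for the exact schema.

```yaml
# Illustrative sketch only: the exclude block and its field names (urls, headers,
# methods) are assumptions, not the authoritative WAF CRD schema.
apiVersion: citrix.com/v1
kind: waf
metadata:
  name: waf-with-exclusions
spec:
  servicenames:
    - frontend-service          # hypothetical backend service protected by the WAF
  exclude:
    urls:
      - "/a/c"                  # skip inspection for /a/c while /a is still scanned
    headers:
      - "X-Internal-Trace"      # hypothetical header to skip
    methods:
      - "OPTIONS"               # hypothetical method to skip
```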
Fixed issues
- NetScaler Ingress Controller does not work as expected for ingresses if SSL profile settings are present in the ConfigMap.
- The Policy-Based Routing (PBR) configurations performed by NetScaler Ingress Controller (NSIC) on VPX might not work as expected in the following scenarios:
  - When the Kubernetes worker node, for which NSIC has configured the PBR route on VPX, is deleted.
  - When the SNIPs of NetScaler VPX provided for the PBR route are not in the correct format in the NSIC ConfigMap.
- During reconciliation, NetScaler Ingress Controller expects the certificate binding to be present on the content switching virtual server, but it does not check whether the binding exists.
- If there are multiple references to the same `HTTPRoute` within the ingress configuration, the content switching policy bindings are removed.
- During reconciliation of configurations by NetScaler Ingress Controller, the rewrite responder policy bindings for the `HTTPRoute` configuration might get deleted and then added back.
Release 3.0.5
Version 3.0.5
What's new
ConfigMap for configuring local site preference
The GSLB controller now has an option to support a local site selection order for GSLB decisions. Choosing the virtual IP addresses and applications deployed in the Kubernetes or OpenShift cluster that is geographically closest to the ADNS IP address ensures efficient traffic routing and minimized latency.
The `localSiteSelection` parameter is added to the [Netscaler-gslb-controller](https://github.com/netscaler/netscaler-helm-charts/tree/master/netscaler-gslb-controller) Helm chart to enable local site preference. Setting this parameter to `true` automatically adds the configuration to the GSLB device. The parameter creates a ConfigMap for the GSLB controller, which supports on-the-fly addition, modification, and deletion of the configuration.
For more information, see ConfigMap for configuring local site preference.
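For example, assuming a Helm-based install of the GSLB controller, local site preference might be enabled through a values override such as the following sketch; only the `localSiteSelection` key comes from this release note, the rest are placeholders.

```yaml
# values.yaml excerpt for the netscaler-gslb-controller Helm chart (sketch).
license:
  accept: yes               # placeholder for the usual chart values
localSiteSelection: true    # prefer VIPs in the cluster closest to the ADNS IP
```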
Fixed issues
- When a load balancing virtual server is bound to a service group, the order is automatically set to 1. This behavior might trigger an SNMP trap on the client’s side, which might cause unexpected notifications.
- The GSLB controller is incompatible with the NetScaler 13.1-57.x release due to the mandatory GSLB site password requirement.
Release 2.3.15
Version 2.3.15
What's new
Support to bind preconfigured monitors to Kubernetes backend services
You can now bind preconfigured monitors in NetScaler to Kubernetes services using the existing annotation `ingress.citrix.com/monitor`. This enhancement enables you to perform the following actions:
- Bind the same preconfigured monitor to multiple backend services.
- Bind a different preconfigured monitor to each backend service.
You can now bind preconfigured monitors in NetScaler to services of type LoadBalancer using the existing service annotation `service.citrix.com/monitor`.
For more information, see annotations.
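As a sketch, a preconfigured monitor might be bound through the ingress annotation as shown below; the JSON value format is an assumption, so verify the exact syntax in the annotations documentation.

```yaml
# Sketch only: the annotation value format is assumed for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # bind the preconfigured NetScaler monitor "custom-http-mon" to the backend service
    ingress.citrix.com/monitor: '{"web-service": {"monitor-name": "custom-http-mon"}}'
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```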
Enhanced security for communication between GSLB sites
For NetScaler GSLB controller deployment, an additional key `sitesyncpassword` is supported when creating the secret that the GSLB controller uses to connect to GSLB devices and push the configuration. This key enhances the security of communication between the GSLB sites. For more information, see Deploy NetScaler GSLB controller.
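A minimal sketch of such a secret, assuming the usual username/password keys that the GSLB controller already consumes (the secret name and values are placeholders):

```yaml
# Sketch: adds the sitesyncpassword key alongside the existing NetScaler credentials.
apiVersion: v1
kind: Secret
metadata:
  name: nslogin                    # placeholder secret name referenced by the GSLB controller
stringData:
  username: "nsroot-user"          # placeholder
  password: "nsroot-password"      # placeholder
  sitesyncpassword: "site-sync-pw" # new key that secures GSLB site-to-site communication
```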
Fixed issues
- The service account token generated in Kubernetes for Kubernetes API authentication expires periodically. As a result, the controller restarts periodically to retrieve the newly generated token for continued authentication.
- Configuring the default SSL profile `ns_default_ssl_profile_frontend` using the annotation `service.citrix.com/frontend-sslprofile` in services of type LoadBalancer does not bind the default SSL profile to the content switching virtual server.
- GSLB monitor modifications fail intermittently. As a result, changes are not reflected in NetScaler.
Release 2.2.10
Version 2.2.10
What's new
Remote content inspection or content transformation service using ICAP
The Internet Content Adaptation Protocol (ICAP) is a simple lightweight protocol for running a value-added transformation service on HTTP messages. In a NetScaler setup, NetScaler (ICAP client) forwards HTTP requests and responses to one or more ICAP servers for processing. The ICAP servers perform content transformation on the requests and send back responses with an appropriate action to take on the request or response.
In a Kubernetes environment, to enable ICAP on NetScaler through NetScaler Ingress Controller, NetScaler provides the ICAP Custom Resource Definition (CRD). By enabling ICAP, you can perform the following actions:
- Block URLs with a specified string
- Block a set of IP addresses to mitigate DDoS attacks
- Mandate HTTP to HTTPS
For more information, see Remote content inspection or content transformation service using ICAP.
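Purely as an illustration of where such a policy lives in the cluster, an ICAP CRD instance might be shaped roughly as follows; the kind, apiVersion, and every field name shown are assumptions, so rely on the linked documentation for the real schema.

```yaml
# Hypothetical shape only: kind, apiVersion, and all field names are assumptions.
apiVersion: citrix.com/v1beta1
kind: icap
metadata:
  name: reqmod-policy
spec:
  serviceNames:
    - frontend-service     # hypothetical service whose traffic is sent for inspection
  icapServers:
    - serverIP: 192.0.2.10 # remote ICAP server performing content transformation
      port: 1344
      mode: request        # send HTTP requests (REQMOD) to the ICAP server
```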
Infoblox integration with NetScaler IPAM Controller
With Infoblox integration, NetScaler IPAM controller assigns IP addresses to services, ingress, or listener resources from Infoblox.
Infoblox integration helps in the following ways:
- Request an available IP address from the specified range.
- Request the IP address associated with a domain name, ensuring the retrieval of a pre-existing IP address.
- Guarantee that the application deployed across various clusters can be accessed using a single, consistent IP address.
Note: After you upgrade to NetScaler IPAM Controller 2.2.10, make sure to upgrade the VIP CRD.
For more information, see Infoblox integration with IPAM controller.
Listener is now supported with NetScaler IPAM Controller
Listener is now supported with NetScaler IPAM Controller. To configure listener support for IPAM, specify the annotation `listeners.citrix.com/ipam-range: (<range>)` in the listener CRD resource (see the sketch below).
Note: After you upgrade to NetScaler IPAM Controller 2.2.10, make sure to upgrade the VIP CRD.
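The sketch below shows where the annotation sits on a Listener resource; the apiVersion and spec fields are indicative only, and "dev-range" is a hypothetical IP range name registered with the IPAM controller.

```yaml
# Sketch: apiVersion and spec fields are indicative; only the annotation key comes
# from this release note. "dev-range" is a hypothetical IPAM range name.
apiVersion: citrix.com/v1
kind: Listener
metadata:
  name: http-listener
  annotations:
    listeners.citrix.com/ipam-range: "dev-range"
spec:
  port: 80
  protocol: http
```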
Address field of the ingress resource is updated
In an NSIC sidecar deployment, in which the NetScaler CPX is exposed using a service of type ClusterIP, NodePort, or LoadBalancer, the Address (`Status.LoadBalancer.IP`) field of the ingress is updated. To enable the ingress status update, specify the `updateIngressStatus` Helm chart parameter as `True`.
For more information, see ingress status update for sidecar deployments.
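For a Helm-based sidecar deployment, the parameter might be set through a values override such as this sketch; apart from `updateIngressStatus`, the keys shown are placeholders.

```yaml
# values.yaml excerpt (sketch) for a CPX sidecar deployment with NSIC.
license:
  accept: yes
updateIngressStatus: true   # populate the Address field in the ingress status
```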
Fixed issues
- After the NSIC upgrade, the CRD specifications in the cluster are deleted, causing the liveness and readiness probes to fail repeatedly. As a result, the NSIC pod gets stuck in a restart loop.
  For information on how to upgrade NSIC, see Upgrade NetScaler Ingress Controller.
- Canary deployment configuration using an ingress annotation such as `ingress.citrix.com/canary-weight` does not work in a namespace containing a hyphen ("-") in its name.
- When an NSIC pod restarts, SSL profiles are deleted for services of type LoadBalancer.
- NSIC creates a duplicate route entry with the same gateway on NetScaler when there is a change in the node pod CIDR.
  This fix ensures that NSIC deletes the stale route entry before creating a new one for any gateway, preventing duplicate route entries.
Release 2.1.4
Version 2.1.4
What's new
Multi-monitor support for GSLB
In a GSLB setup, you can now configure multiple monitors to monitor services of the same host. The monitors can be of different types (for example, HTTP, HTTPS, or TCP), depending on the request protocol used to check the health of the services.
In addition to configuring multiple monitors, you can define additional parameters for a monitor. You can also define the combination of parameters for each monitor as per your requirement. For more information, see Multi-monitor support for GSLB.
Note:
When you upgrade to NSIC version 2.1.4, you must reapply the GTP CRD using the following command: `kubectl apply -f https://raw.githubusercontent.com/netscaler/netscaler-k8s-ingress-controller/master/gslb/Manifest/gtp-crd.yaml`.
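Following the GTP monitor format shown later in these notes, a sketch of two monitors of different types for the same host could look like this (the URI and response code values are placeholders):

```yaml
# Sketch: two monitors of different types attached to the same GSLB endpoint.
monitor:
- monType: HTTPS
  uri: '/healthz'        # placeholder health-check path
  respCode: '200'
- monType: TCP           # plain TCP connectivity check for the same host
```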
Support to bind multiple SSL certificates for a service of type LoadBalancer
You can now bind multiple SSL certificates as front-end server certificates for a service of type LoadBalancer by using the following annotations: `service.citrix.com/secret` and `service.citrix.com/preconfigured-certkey`. For more information, see SSL certificate for services of type LoadBalancer through the Kubernetes secret resource.
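A sketch of a service carrying both annotations is shown below; the annotation value formats are assumptions, so check the linked documentation for the exact syntax.

```yaml
# Sketch only: annotation value formats are assumed for illustration.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
  annotations:
    service.citrix.com/secret: 'web-tls-secret'                   # Kubernetes TLS secret
    service.citrix.com/preconfigured-certkey: 'wildcard-certkey'  # certkey already present on NetScaler
spec:
  type: LoadBalancer
  selector:
    app: web-frontend
  ports:
  - port: 443
    targetPort: 8443
```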
Fixed issues
- NSIC doesn't process node update events in certain cases.
Release 2.0.6
Version 2.0.6
What's new
Support for multi-cluster ingress solution
NetScaler multi-cluster ingress solution enables NetScaler to load balance applications distributed across clusters using a single front-end IP address. The load-balanced applications can be either the same application, different applications of the same domain, or entirely different applications.
Earlier, to load balance applications in multiple clusters, a dedicated content switching virtual server was required on NetScaler for each instance of NetScaler Ingress Controller (NSIC) running in the clusters. With NetScaler multi-cluster ingress solution, multiple ingress controllers can share a content switching virtual server. Therefore, applications deployed across clusters can be load balanced using the same content switching virtual server IP (VIP) address. For more information, see Multi-cluster ingress.
New parameters within ConfigMap
The `metrics.service` and `transactions.service` parameters are added under the `endpoint` object for analytics configuration using a ConfigMap.
- `metrics.service`: Set this value as the IP address or DNS address of the observability endpoint.
  Note: The `metrics.service` parameter replaces the `server` parameter starting from NSIC release 2.0.6.
- `transactions.service`: Set this value as the IP address or `namespace/service` of the NetScaler Observability Exporter service.
  Note: The `transactions.service` parameter replaces the `service` parameter starting from NSIC release 2.0.6.
You can now change all the ConfigMap settings at runtime while NetScaler Ingress Controller is operational.
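A sketch of the relevant ConfigMap fragment is shown below; the ConfigMap name, the `NS_ANALYTICS_CONFIG` wrapper key, and the endpoint addresses are assumptions or placeholders, while the `endpoint.metrics.service` and `endpoint.transactions.service` keys are the parameters described above.

```yaml
# Sketch: only the endpoint.metrics.service and endpoint.transactions.service keys
# come from this release note; the ConfigMap name and wrapper key are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nsic-configmap
data:
  NS_ANALYTICS_CONFIG: |
    endpoint:
      metrics:
        service: "192.0.2.20"            # IP or DNS address of the observability endpoint
      transactions:
        service: "default/coe-service"   # namespace/service of NetScaler Observability Exporter
```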
Fixed issues
- Sometimes, the content switching virtual servers in NetScaler are deleted because of a Kubernetes error. Meanwhile, when NetScaler Ingress Controller (NSIC) restarts, it looks for the content switching virtual servers in NetScaler and because those servers are not found, NSIC remains in the reconciliation loop. With this fix, NSIC no longer looks for the content switching virtual servers in NetScaler and proceeds with further configuration.
Release 1.43.7
Version 1.43.7
What's new
Implementation of Liveness and Readiness probes in NetScaler Ingress Controller (NSIC)
Liveness and Readiness probes are critical for ensuring that containers within a pod remain reliable, available, and ready to handle traffic in Kubernetes/OpenShift. These probes are designed to manage traffic flow effectively and maintain container health by performing specific checks.
- Liveness probe: Determines if a container is running (alive). If the container fails this check, Kubernetes/OpenShift automatically restarts the container.
- Readiness probe: Determines the readiness of containers to receive traffic. If the containers fail this check, the traffic is not directed to that pod. The pod itself is not terminated; instead, the containers are given time to complete their initialization process.
With the implementation of these probes, traffic is only directed to pods that are fully prepared to handle requests. If a container in a pod is not ready, Kubernetes/OpenShift temporarily stops sending traffic to that pod and allows the pod to initialize properly. For information about enabling and configuring the probes for NSIC, see the Helm chart release notes for NSIC 1.43.7.
For NSIC OpenShift deployments, `DeploymentConfig` objects are replaced with `Deployment` objects.
Release 1.42.12
Version 1.42.12
Fixed issues
- When multiple NetScaler Ingress Controllers (NSIC) coexist in a cluster, an NSIC associated with a specific ingress class processes the rewrite policies associated with a different ingress class.
- After the NSIC pod restarts, if the Kubernetes API fails and is unreachable, NSIC deletes the configuration in NetScaler.
- NSIC logs a traceback error when a route is deployed prior to the service mentioned in that route.
- The exception handling is faulty in the following scenarios, resulting in an incorrect configuration of the metrics server in NetScaler:
- When an NSIC tries to configure a metrics server in NetScaler and the metrics server already exists.
- When multiple NSIC instances try to configure the metrics server in NetScaler simultaneously.
- The responder policy parameters, such as redirect-status-code and redirect-reason, are not configured on the corresponding virtual server on NetScaler, even though a responder policy is successfully applied to a service in the Kubernetes cluster.
- Sometimes, NSIC fails to update NetScaler based on updates to the Kubernetes resource configuration, and NetScaler returns an error. In such cases, NSIC clears the existing NetScaler configuration; when the configuration is cleared on NetScaler, an event notification is not logged in Kubernetes.
Release 1.41.5
Version 1.41.5
What's new
Support to specify a custom header for the GSLB-endpoint monitoring traffic
You can now specify a custom header that you want to add to the GSLB-endpoint monitoring traffic by adding the `customHeader` argument under the `monitor` parameter in the global traffic policy (GTP). Earlier, the host URL specified in the GTP YAML was added to the custom header of GSLB-endpoint monitoring traffic by default.
The following GTP excerpt shows the usage of the `customHeader` argument under `monitor`.
```yaml
monitor:
- monType: HTTPS
  uri: ''
  customHeader: "Host: <custom hostname>\r\n x-b3-traceid: afc38bae00096a96\r\n\r\n"
  respCode: '200,300,400'
```
Fixed issues
- Even though a responder policy was successfully applied to a service in the Kubernetes cluster, the responder policy parameters, such as `redirect-status-code` and `redirect-reason`, were not configured on the corresponding virtual server on NetScaler. This issue is fixed now.
- NetScaler Ingress Controller (NSIC) logged a traceback error when it attempted to get the analytics endpoints for the NetScaler Observability Exporter service specified in the ConfigMap. This issue is fixed now.
- Installation of NetScaler Ingress Controller using NetScaler Operator failed because of certain settings in `analyticsConfig` with the `lookup: nodes is forbidden` error. This failure was because of a lack of ClusterRole permission to run API calls to get node-specific information. This issue is fixed now.
Release 1.40.12
What's new
Support to bind SNI SSL certificate to NetScaler
NetScaler Ingress Controller (NSIC) now accepts the `default-ssl-sni-certificate` argument, using which you can provide a secret that is used to configure an SSL SNI certificate on NetScaler for HTTPS ingresses and routes.
Configure the `default-ssl-sni-certificate` argument in the NSIC deployment YAML by providing the secret name and the namespace where the secret is deployed in the cluster, as follows: `--default-ssl-sni-certificate <NAMESPACE>/<SECRET_NAME>`.
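For instance, the argument might appear in the NSIC container spec as in this sketch, where `default/wildcard-sni-secret` is a placeholder for your own namespace and secret name:

```yaml
# Excerpt from the NSIC deployment YAML (sketch); the secret reference is a placeholder.
args:
  - --default-ssl-sni-certificate
    default/wildcard-sni-secret
```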
Support for namespace-specific NSIC in OpenShift
NSIC can now be deployed at the namespace level in the OpenShift cluster. In this deployment mode, NSIC processes resources pertaining to the given namespace instead of managing all the resources across the entire cluster.
Note:
If NSIC requires access to cluster-wide resources such as `config.openshift.io`, `network.openshift.io`, and so on, it must be deployed with ClusterRole privileges.
ImagePullSecret support for GSLB controller
The GSLB controller Helm chart now supports the `imagePullSecret` option, which ensures smooth integration with container registries that require authentication. Before deploying the Helm chart, you must ensure that the corresponding Kubernetes secret is created within the same namespace to enable a seamless image pull during Helm installation.
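As a sketch, assuming the registry secret already exists in the release namespace, the override might look like the following; confirm the exact key name and shape against the chart's values file.

```yaml
# values.yaml excerpt (sketch) for the netscaler-gslb-controller chart.
imagePullSecrets:
  - regcred   # docker-registry secret created beforehand in the same namespace
```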
Fixed issues
- When NSIC was deployed to configure VPX in an OpenShift environment without specifying a VIP address (nsVIP) for the VPX, NSIC attempted to process the ingress or route resources repeatedly, resulting in failures. This issue is fixed now.
- NSIC encountered traceback errors when the container port was absent from the service deployment YAML. This issue is fixed now.
- The removal of stale endpoint labels resulted in reinitialization of NSIC. This issue is fixed now.
- The `ingressClass` annotation was not supported when NSIC was deployed with a local RBAC role. This issue is fixed now.