NGINX Controller Application Delivery Modules Improve Health Checks and Caching Configurations

NGINX has released new versions of its NGINX Controller Application Delivery Module, a control-plane solution for NGINX Plus load balancers. The new features include enhanced workload health checks, improvements to caching configuration, and instance groups.

NGINX Controller provides a centralized orchestration and analytics platform for managing fleets of NGINX Plus instances. NGINX Plus is an integrated load balancer, web server, and content cache. The Application Delivery Module acts as the control plane for those NGINX Plus instances, offering an interface for configuring, securing, monitoring, and troubleshooting load balancing for applications.

With these new versions, NGINX has enhanced the workload health checks. Two new events are now generated per component per instance: the first is triggered when workload group members change state from healthy to unhealthy, and the second provides a snapshot of the current state of workload group members, sent every few minutes. In addition to these two new events, it is now possible to configure the headers that the NGINX Plus data plane sends in its health-check probes.
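
On the data plane, active health checks with custom probe headers correspond roughly to the NGINX Plus configuration below. This is a conceptual sketch rather than the output NGINX Controller generates; the upstream name, header values, and timing parameters are illustrative:

upstream app_backend {
    # A shared memory zone is required for active health checks
    zone app_backend 64k;
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # Headers set here are also sent with the health-check probes
        proxy_set_header Host app.example.com;
        proxy_set_header X-Health-Check "controller";
        # Probe every 5 seconds; mark a server unhealthy after 3 failed probes
        health_check interval=5s fails=3 passes=2;
    }
}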

The release also brings a new feature called snippets that allows for configuring NGINX directives that are not represented in the controller API. Snippets can be added to the http, main, stream, and upstream blocks, as well as the component's server and location blocks and the gateway's server blocks. For example, a snippet could be used to implement an HTTP Strict Transport Security (HSTS) policy as follows:

{
    "metadata": {
        "name": "<gateway-name>"
    },
    "desiredState": {
        "configSnippets": {
            "uriSnippets": [
                {
                    "applicableUris": [
                        {
                            "uri": "http://172.16.0.238:81"
                        }
                    ],
                    "directives": [
                        {
                            "directive": "add_header",
                            "args": ["Strict-Transport-Security", "max-age=31536000; includeSubDomains", "always"] 
                        }
                    ]
                }
            ]
        },
        "ingress": {
            "uris": {
                "http://example.com:8020": {}
            },
            "placement":  {
                "instanceRefs": [
                    {
                        "ref": "/infrastructure/locations/unspecified/instances/<instance-name>"
                    }
                ]
            }
        }
    }
}
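
When applied, the directives in the snippet are written into the gateway's server block of the generated nginx.conf. For the HSTS snippet above, the rendered directive would be along these lines; the surrounding server block is abbreviated for illustration:

server {
    # ... Controller-generated listen, server_name, and location configuration ...

    # Rendered from the uriSnippets directive above
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}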

Note that snippets are applied to the nginx.conf file as-is; NGINX Controller performs no validation before a snippet is applied. As such, NGINX strongly recommends validating snippets in a lab environment before pushing them to production.

Caching can now be configured via the API or the UI. Basic caching can be enabled by adding a disk store to a component; note that the specified directory must already exist and the NGINX process must have both read and write permission on it. With a single disk store, NGINX Controller modifies the nginx.conf file by adding proxy_cache_path in the top-level http context and proxy_cache in the component's location block.
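
As a rough illustration, the generated data-plane configuration for a single disk store follows the pattern below; the cache path, zone name, and size parameters are placeholders rather than the exact values NGINX Controller emits:

http {
    # Added for the component's disk store
    proxy_cache_path /var/cache/nginx/app keys_zone=app_cache:10m max_size=5g inactive=10m;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # Serve from and populate the cache defined above
            proxy_cache app_cache;
        }
    }
}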

Cache splitting is supported via the split config settings, which allow cached content to be divided by either percentage or pattern matching. Depending on the criteria type, NGINX Controller adds either a split_clients block (for a percentage split) or a map block keyed on a string (for pattern matching) to the http context of the generated nginx.conf file; a rendered sketch follows the advanced example below.

Advanced caching is possible through the use of snippets. In the following example, the configSnippets.uriSnippets API is used to set a cache duration of one minute for all responses. In addition, it sets up cache splitting across three storage paths, with /tmp/default as the default storage location.

{
    "desiredState": {
        "configSnippets": {
            "uriSnippets": [
                {
                    "directives": [
                        {
                            "directive": "proxy_cache_valid",
                            "args": [
                                "any",
                                "1m"
                            ]
                        }
                    ]
                }
            ]
        },
        "caching": {
            "splitConfig": {
                "criteriaType": "PERCENTAGE",
                "key": "$request_uri"
            },
            "diskStores": [
                {
                    "inMemoryStoreSize": "100m",
                    "inactiveTime": "1m",
                    "isDefault": false,
                    "maxSize": "5G",
                    "minFree": "10k",
                    "path": "/tmp/hdd1",
                    "percentCriteria": "20%"
                },
                {
                    "inMemoryStoreSize": "100m",
                    "inactiveTime": "10s",
                    "isDefault": false,
                    "maxSize": "5g",
                    "minFree": "10k",
                    "path": "/tmp/hdd2",
                    "percentCriteria": "50%"
                },
                {
                    "inMemoryStoreSize": "100m",
                    "inactiveTime": "15s",
                    "isDefault": true,
                    "maxSize": "2g",
                    "minFree": "10k",
                    "path": "/tmp/default"
                }
            ]
        }
    }
}
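
The configuration NGINX Controller renders for a percentage split like the one above is conceptually similar to the following sketch. The zone names and the exact parameter mapping are illustrative; the Controller-generated names will differ:

http {
    # One cache store per disk store in the API payload
    proxy_cache_path /tmp/hdd1    keys_zone=hdd1:100m    max_size=5g inactive=1m  min_free=10k;
    proxy_cache_path /tmp/hdd2    keys_zone=hdd2:100m    max_size=5g inactive=10s min_free=10k;
    proxy_cache_path /tmp/default keys_zone=default:100m max_size=2g inactive=15s min_free=10k;

    # Hash $request_uri and route a share of requests to each cache zone;
    # the default store receives the remaining 30%
    split_clients $request_uri $cache_zone {
        20%   hdd1;
        50%   hdd2;
        *     default;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # proxy_cache accepts a variable, so the zone is chosen per request
            proxy_cache $cache_zone;
            # From the uriSnippets directive above
            proxy_cache_valid any 1m;
        }
    }
}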

The newly released instance groups feature creates logical groups of NGINX Plus instances that all receive the same configuration, allowing multiple instances to be configured at scale in a single step.

For more details and additional features included in this release, please review the official announcement on the NGINX blog. The NGINX Controller Application Delivery Module can be trialed as part of NGINX Controller.
