
NGINX Releases Microservices Platform, OpenShift Ingress Controller, and Service Mesh Preview


At nginx.conf, held in Portland, USA, NGINX, Inc. released the NGINX Application Platform, a suite of four products built upon open source technology that aims to be a "one stop shop" for developers to deploy, manage and observe microservices. Additional releases announced included a Kubernetes Ingress Controller solution providing load balancing for the Red Hat OpenShift Container Platform, and a preview implementation of NGINX as a service proxy that integrates with the Istio service mesh control plane.

The new NGINX Application Platform consists of the following components:

NGINX Plus is a combined web server, content cache and load balancer.

NGINX Web Application Firewall (WAF) is a commercial tool built upon the open source ModSecurity WAF. It provides protection against Layer 7 attacks, such as SQL injection and cross-site scripting, and enables traffic to be blocked or allowed according to rules based on, for example, IP addresses and headers. NGINX WAF runs as a dynamic module that plugs into NGINX Plus, and would typically be deployed at the edge of a network to protect internal web services and applications from DDoS attacks and bad actors.
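To illustrate the kind of rules the WAF inherits from ModSecurity, the sketch below shows two hypothetical rules, one blocking a source IP range and one blocking on a request header value; the rule IDs, addresses and patterns are purely illustrative, and a real deployment would more likely start from a curated rule set such as the OWASP Core Rule Set.

# Deny requests originating from an illustrative address range (rule IDs are arbitrary)
SecRule REMOTE_ADDR "@ipMatch 203.0.113.0/24" "id:10001,phase:1,deny,status:403,msg:'Blocked address range'"

# Deny requests whose User-Agent header contains an illustrative bad-bot marker
SecRule REQUEST_HEADERS:User-Agent "@contains badbot" "id:10002,phase:1,deny,status:403,msg:'Blocked user agent'"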

NGINX Unit is a new open source application server designed by Igor Sysoev and implemented by the core NGINX software development team. Unit is "completely dynamic", and allows blue/green-style deployment (switching) of a new application version seamlessly, without restarting any processes. All Unit configuration is handled through a built-in REST API using JSON configuration syntax; there is no configuration file. Currently Unit runs code written in recent versions of PHP, Python, and Go, and a mix of the supported languages and differing language versions can be run within the same server. Support for additional languages, including Java and Node.js, is coming soon.
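To make the configuration model concrete, the following sketch (in Go) PUTs a minimal JSON configuration to Unit's control API; the Unix socket path, listener port, application name and document root are assumptions for illustration, and the listener/application layout follows the structure described in the Unit documentation at launch.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"strings"
)

func main() {
	// Illustrative configuration: one listener routed to a PHP application.
	config := `{
	  "listeners": { "*:8300": { "application": "blog" } },
	  "applications": {
	    "blog": { "type": "php", "workers": 4, "root": "/srv/blog", "index": "index.php" }
	  }
	}`

	// Unit exposes its control API over a Unix socket; the path is installation-specific.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/unit/control.sock")
			},
		},
	}

	// PUT the whole configuration to /config; Unit applies it live, without restarting worker processes.
	req, err := http.NewRequest(http.MethodPut, "http://unit/config", strings.NewReader(config))
	if err != nil {
		panic(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}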

NGINX Controller is a centralised monitoring and management platform for NGINX Plus. Controller acts as a control plane, and allows "the management of hundreds of NGINX Plus servers from a single location" using a graphical user interface. The interface allows the creation of new instances of NGINX Plus servers, and enables the central configuration of load balancing, URL routing, and SSL termination. Controller also provides monitoring capabilities for observing application health and performance.

Figure 1. NGINX Application Platform (Image taken from the NGINX Blog)

The newly released NGINX Plus (Kubernetes) Ingress Controller solution is based upon the open source NGINX kubernetes-ingress project, and is tested, certified, and supported to provide load balancing for the Red Hat OpenShift Container Platform. The solution adds support for the advanced features found in NGINX Plus, including advanced load balancing algorithms, Layer 7 routing, end-to-end authentication, request/rate limiting, and a content cache and web server.
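As a rough sketch of how such an Ingress controller is driven, the resource below uses the extensions/v1beta1 Ingress API that was current at the time of the announcement to route two URL paths to different backend services; the hostname, service names and the annotation value are illustrative assumptions, and NGINX Plus-specific behaviour (load balancing method, authentication, rate limiting and so on) would be configured via the controller's own annotations as documented in the kubernetes-ingress project.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    # Ask the cluster to hand this Ingress to the NGINX controller (illustrative)
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-svc
          servicePort: 80
      - path: /
        backend:
          serviceName: web-svc
          servicePort: 80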

NGINX has also released nginmesh, an open source preview version of NGINX as a service proxy for Layer 7 load balancing and proxying within the Istio service mesh platform. It aims to provide key capabilities and integration with Istio when deployed as a sidecar container, and will facilitate communication between services in a "standard, reliable, and secure manner". Additionally, NGINX will collaborate as part of the Istio community by joining the Istio networking special interest group.

The concept of a "service mesh" has risen in popularity recently, as it allows developers to implement loosely coupled microservices-based applications with an underlying mesh (or communication bus) to manage traffic flows between services, enforce access policies, and aggregate telemetry data. Istio is an open source service mesh project led by Google, IBM, Lyft and others, and aims to provide a control plane to the service proxies’ data plane. Currently Istio is tightly integrated into Kubernetes, but there are plans to also support platforms such as virtual machines, PaaS like Cloud Foundry, and potentially FaaS "serverless" offerings.

By default Istio uses the Envoy service proxy, which was created by Matt Klein and the team at Lyft, and has been in production use at Lyft for a number of years. NGINX does not appear to be the only company to have realised the potential benefits of providing (and owning) the service proxy component within a microservices mesh, as Buoyant are also in the process of modifying their JVM-based service proxy, Linkerd (which was spawned from the Twitter Finagle stack), for integration with Istio.

The NGINX nginmesh Istio service proxy module - written in Go rather than the C used for the NGINX web server itself - integrates with an open source NGINX instance running as a sidecar (shown in Figure 2), and claims to offer "a small footprint, high-performance proxy with advanced load balancing algorithms, caching, SSL termination, scriptability with Lua and nginScript, and various security features with granular access control."

Figure 2. NGINX nginmesh Architecture (Image from nginmesh GitHub repo)

Additional details on all of the NGINX releases and announcements made at nginx.conf can be found on the NGINX blog.
