Embracing Cloud-Native for Apache DolphinScheduler with Kubernetes: a Case Study


Key Takeaways

  • The cloud-native deployment mode of DolphinScheduler saves more than 95% of the human effort and working time required by the original deployment model.
  • By integrating GitOps technology, we improved DolphinScheduler's DevOps management capability, software delivery efficiency, and security audit capability.
  • By integrating newer cloud-native technologies, we were able to add horizontal scaling, health probes, and rolling deployments to DolphinScheduler.
  • Integrating observability technologies, such as Prometheus, into the infrastructure and service mesh dramatically improved the monitoring features available with DolphinScheduler.
  • Deep integration with Kubernetes job technology enabled a hybrid scheduler for both traditional virtual machines and container-based running environments.

DolphinScheduler is a distributed and easily scalable visual workflow task scheduling platform open sourced by Analysys.

In the field I work in, DolphinScheduler quickly solves the top ten pain points of data development for enterprises:

  • Multi-source data connection and access: most common data sources can be accessed, and adding a new data source does not require many changes
  • Diversified, specialized, and massive data task management: this revolves around the problems of big data (Hadoop family, Flink, etc.) task scheduling, and is significantly different from traditional schedulers
  • Graphical task orchestration: this provides a convenient user experience and makes the tool competitive with commercial products, since most other open-source products cannot generate data tasks directly by dragging and dropping
  • Task details: rich views of tasks, logs, and execution timelines meet developers’ requirements for fine-grained data task management and make it easy to locate slow SQL and performance bottlenecks
  • Support for a variety of distributed file systems: this enriches the users’ choices for unstructured data
  • Native multi-tenant management: meets the data task management and isolation requirements of large organizations
  • Fully automatic distributed scheduling algorithm to balance all scheduling tasks
  • Native cluster monitoring: can monitor CPU, memory, number of connections, and Zookeeper status and is suitable for one-stop operation and maintenance by SMEs
  • Native task alarm function: reduces the operational risk of running tasks
  • Strong community-based operations: listening to the real voice of customers, constantly adding new functions, and continuously optimizing the customer experience

Based on early microservice technology, DolphinScheduler adopts the concept of a service registry and performs distributed management of the cluster using Zookeeper (many big data technologies use Zookeeper for decentralized cluster management). Master and worker nodes can be added arbitrarily, and the API and alert services can be deployed independently. As an enterprise-level technical module, it exhibits the good characteristics of microservices: separation of concerns, independent deployment, and modular management. However, in the era of rapidly developing containerized cloud-native applications, this basic technology pattern has some deficiencies:

  • It needs to be deployed from the ground up: whether it is installed on physical machines or virtual machines, DolphinScheduler requires hundreds of shell operations, and a cluster of nodes may require thousands
  • A standardized enterprise-level DolphinScheduler deployment involves managing a large base environment, usually more than eight nodes, hosts, and IP addresses; keeping track of this infrastructure information is a management burden
  • Adding a node also requires dozens of operations (installing Java, configuring the host, setting up the DS Linux user, setting up password-less login, modifying the installation node configuration file), and the entire cluster needs to be stopped and restarted
  • Large enterprises usually have multiple clusters to support different business units, which brings a lot of repeated work
  • The scheduler has some observability functions, but cannot be integrated with mainstream tools
  • The scheduler, as a whole, still needs routine daily inspection work, such as investigating an abnormal exit of the core Java process
  • There is no effective management mechanism or tool for the configuration settings of the scheduler under different requirements and scenarios

The core ideas for addressing these technical deficiencies are:

  • How to integrate DolphinScheduler into today’s mainstream cloud-native technology
  • How to deploy DolphinScheduler with fewer human resources, and whether a fully automated cluster installation and deployment mode can be achieved
  • How to run a fully serverless DolphinScheduler and greatly reduce the cost of configuration management
  • How to standardize technical component implementation specifications
  • Whether it can operate unsupervised, with the system able to self-heal
  • How to build and integrate it into an existing observability platform

Leveraging Kubernetes Technology

As the de facto standard of cloud-native system technology, Kubernetes has brought a revolutionary change to the entire IT application technology system. Kubernetes provides core technical capabilities such as service registration and discovery, load balancing, automated software release and rollback, containerized isolation, software self-healing, and distributed configuration management.

Beyond Kubernetes itself, our team has also integrated many excellent projects from the Cloud Native Computing Foundation (CNCF) and carried out the following work:

  • The deployment technology of DolphinScheduler has been improved. We have used Helm and Argo CD to greatly simplify and perform one-click deployment.
  • GitOps management mechanism for the configuration content is implemented by Argo CD, and this enables the complete audit capability of modern DevOps.
  • The Horizontal Pod Autoscaling technology of Kubernetes greatly simplifies the operation of scaling applications (see the command sketch after this list).
  • The standardized health probe technology of Kubernetes enables the powerful self-healing ability of all technical components of the scheduler.
  • The rolling release technology of Kubernetes and Argo CD enables the elegant and simple upgrade of DolphinScheduler tools.
  • The use of Kube-Prometheus technology brings standardized observability capabilities to DolphinScheduler.
  • Powerful UI technology simplifies CMDB visual management, component configuration management based on Kubernetes, application log management, etc.
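
For example, once a component such as the API service runs as a standard Kubernetes Deployment, autoscaling can be turned on with a single command. This is a minimal sketch, assuming the Deployment created by the Helm chart is named dolphinscheduler-api and lives in the ds139 namespace; the thresholds are purely illustrative.

#Hypothetical example: scale the API Deployment between 3 and 10 replicas
#based on average CPU utilization (names and thresholds are assumptions)
kubectl autoscale deployment dolphinscheduler-api \
  --cpu-percent=70 --min=3 --max=10 -n ds139

#Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa -n ds139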

We have also introduced more powerful tools to DolphinScheduler to obtain richer cloud-native features:

  • Achieved easier service access by Kubernetes service registration discovery and ingress technology
  • Introduced Linkerd to bring the function of a service mesh to DolphinScheduler and improve the management and monitoring capabilities of all APIs
  • Combined DolphinScheduler with Argo Workflows or standard Kubernetes job
  • Introduced the object storage technology MinIO, unifying the technology storing unstructured data with DolphinScheduler

All of this work aims to make DolphinScheduler stronger and more stable in operation, with lower demands on operator time, better observability, and a richer, more complete ecosystem.

The Initial Transition to a Cloud-Native Platform

To fully embrace cloud-native technology, DolphinScheduler first needs to support cloud-native deployment and operations quickly, i.e., migrating most enterprise applications to the Kubernetes environment.

Thanks to contributions from the open-source community, we quickly built a Docker image of the DolphinScheduler Java application and used the Helm toolkit to script a declarative Kubernetes deployment. Becoming a managed object of Kubernetes is the most important step in integrating with cloud native. These tasks not only make the tool more convenient and faster to adopt for cloud-native users and organizations, but also drastically improve work efficiency for DolphinScheduler users.

To deploy DolphinScheduler on Kubernetes you can now:

  1. Obtain the Helm package at the folder ./dolphinscheduler-1.3.9/docker/kubernetes/dolphinscheduler in the dolphinscheduler-1.3.9.tar.gz file from the GitHub repo.
  2. Use the following command to deploy a Kubernetes-managed DolphinScheduler instance:
kubectl create ns ds139
helm install dolphinscheduler . -n ds139
  3. Sometimes, DolphinScheduler users need to integrate DataX, the MySQL JDBC driver, or the Oracle JDBC driver for ETL and database connection purposes. We can download the necessary components, build a new Docker image, and upgrade the Kubernetes-managed DolphinScheduler instance:

#Download the additional components
https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar
https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/
https://github.com/alibaba/DataX/blob/master/userGuid.md

#Create a new Docker image with a new tag using this Dockerfile
FROM apache/dolphinscheduler:1.3.9
COPY *.jar /opt/dolphinscheduler/lib/
RUN mkdir -p /opt/soft/datax
COPY datax /opt/soft/datax

#Edit the image tag in the Helm values.yaml file, then execute helm upgrade.
helm upgrade dolphinscheduler . -n ds139

Generally, we recommend using an independent, external PostgreSQL instance as the management database in production. We found that after switching to an external database, even if we completely deleted and redeployed DolphinScheduler in Kubernetes, we did not have to recreate DolphinScheduler’s application data (e.g., user-defined data processing tasks), which once again demonstrated high availability and data integrity. We also recommend configuring a PersistentVolume for the DolphinScheduler components, because historical application logs would otherwise be lost when pods restart or are upgraded.
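
A minimal sketch of what this can look like with Helm, assuming the chart exposes an externalDatabase section and per-component persistentVolumeClaim settings similar to those in the upstream 1.3.9 chart; the key names and values below are assumptions, so check the values.yaml of your chart version:

#Hypothetical values: disable the bundled PostgreSQL, point to an external instance,
#and persist worker data/logs (key names depend on the chart version)
helm upgrade dolphinscheduler . -n ds139 \
  --set postgresql.enabled=false \
  --set externalDatabase.host=192.168.1.50 \
  --set externalDatabase.port=5432 \
  --set externalDatabase.username=dolphinscheduler \
  --set externalDatabase.password=changeit \
  --set externalDatabase.database=dolphinscheduler \
  --set worker.persistentVolumeClaim.enabled=true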

Compared with the hundreds of shell commands required in the traditional model, a single configuration file change and one installation command automatically install all eight DolphinScheduler components, saving labor and avoiding operational errors. For organizations with multiple DolphinScheduler clusters, this is a huge reduction in human cost, and the waiting time for a business department drops from several days to less than an hour, possibly even ten minutes.

Adding GitOps Based on Argo CD

Argo CD is a declarative GitOps continuous delivery tool for Kubernetes and an incubation project of the CNCF, widely regarded as a best-practice tool for GitOps.

GitOps can bring the following advantages to the implementation of Apache DolphinScheduler.

  • Graphical and one-click installation of clustered software
  • Git records the full release process and enables one-click rollback
  • Convenient DolphinScheduler tool log viewing

Once implemented, we can see the pods, configmaps, secrets, services, ingresses, and other resources automatically deployed by Argo CD. Argo CD also displays the manifest commit information and username, completely recording all release events, and you can roll back to a historical version with one click.
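
As an illustration, the DolphinScheduler Helm chart can be registered as an Argo CD application roughly as follows. This is a sketch rather than our exact setup: the Git repository URL and path are hypothetical placeholders for wherever the chart and its values.yaml are stored.

#Hypothetical example: register the DolphinScheduler Helm chart as an Argo CD application
argocd app create dolphinscheduler \
  --repo https://git.abc.com/infra/dolphinscheduler-helm.git \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace ds139 \
  --sync-policy automated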

Relevant resource information can be seen through the kubectl command:

[root@tpk8s-master01 ~]# kubectl get po -n ds139
NAME                                     READY   STATUS    RESTARTS   AGE
dolphinscheduler-alert-96c74dc84-72cc9   1/1     Running   0          22m
dolphinscheduler-api-78db664b7b-gsltq    1/1     Running   0          22m
dolphinscheduler-master-0                1/1     Running   0          22m
dolphinscheduler-master-1                1/1     Running   0          22m
dolphinscheduler-master-2                1/1     Running   0          22m
dolphinscheduler-worker-0                1/1     Running   0          22m
dolphinscheduler-worker-1                1/1     Running   0          22m
dolphinscheduler-worker-2                1/1     Running   0          22m

[root@tpk8s-master01 ~]# kubectl get statefulset -n ds139
NAME                      READY   AGE
dolphinscheduler-master   3/3     22m
dolphinscheduler-worker   3/3     22m

[root@tpk8s-master01 ~]# kubectl get cm -n ds139
NAME                      DATA   AGE
dolphinscheduler-alert    15     23m
dolphinscheduler-api      1      23m
dolphinscheduler-common   29     23m
dolphinscheduler-master   10     23m
dolphinscheduler-worker   7      23m

[root@tpk8s-master01 ~]# kubectl get service -n ds139
NAME                               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
dolphinscheduler-api               ClusterIP   10.43.238.5   <none>        12345/TCP            23m
dolphinscheduler-master-headless   ClusterIP   None          <none>        5678/TCP             23m
dolphinscheduler-worker-headless   ClusterIP   None          <none>        1234/TCP,50051/TCP   23m

[root@tpk8s-master01 ~]# kubectl get ingress -n ds139
NAME               CLASS    HOSTS           ADDRESS
dolphinscheduler   <none>   ds139.abc.com

You can also see that the pods are spread across different hosts in the Kubernetes cluster; for example, workers 1 and 2 are on different nodes.

Once we had configured ingress, we could use a domain name to access the DolphinScheduler web UI on our company’s intranet. Taking the DNS subdomain abc.com as an example, the URL is http://ds139.abc.com/dolphinscheduler/ui/#/home. We can also view the internal logs of each Apache DolphinScheduler component in Argo CD.

With Argo CD, it is very convenient to modify the number of replicas of components such as the master, worker, API, or alert service. Apache DolphinScheduler’s Helm configuration also exposes CPU and memory settings. Here we modify the replica settings in the values.yaml file and push the change to the company’s internal source code repository:

master:
  podManagementPolicy: "Parallel"
  replicas: "5"
worker:
  podManagementPolicy: "Parallel"
  replicas: "5"
alert:
  replicas: "3"
api:
  replicas: "3"

Just click sync on Argo CD to synchronize, and the corresponding pods will be added as required.

[root@tpk8s-master01 ~]# kubectl get po -n ds139
NAME                                     READY   STATUS    RESTARTS   AGE
dolphinscheduler-alert-96c74dc84-72cc9   1/1     Running   0          43m
dolphinscheduler-alert-96c74dc84-j6zdh   1/1     Running   0          2m27s
dolphinscheduler-alert-96c74dc84-rn9wb   1/1     Running   0          2m27s
dolphinscheduler-api-78db664b7b-6j8rj    1/1     Running   0          2m27s
dolphinscheduler-api-78db664b7b-bsdgv    1/1     Running   0          2m27s
dolphinscheduler-api-78db664b7b-gsltq    1/1     Running   0          43m
dolphinscheduler-master-0                1/1     Running   0          43m
dolphinscheduler-master-1                1/1     Running   0          43m
dolphinscheduler-master-2                1/1     Running   0          43m
dolphinscheduler-master-3                1/1     Running   0          2m27s
dolphinscheduler-master-4                1/1     Running   0          2m27s
dolphinscheduler-worker-0                1/1     Running   0          43m
dolphinscheduler-worker-1                1/1     Running   0          43m
dolphinscheduler-worker-2                1/1     Running   0          43m
dolphinscheduler-worker-3                1/1     Running   0          2m27s
dolphinscheduler-worker-4                1/1     Running   0          2m27s

Beyond Helm alone, Argo CD-based GitOps gives the entire DolphinScheduler toolset graphical, automated, traceable, and auditable DevOps, rollback, and monitoring capabilities, without any modification to DolphinScheduler’s code.

Incorporating Self-Healing into DolphinScheduler

As we all know, the contemporary IT environment is always in an unstable state. In other words, our technology systems regard various failures of servers, operating systems, and networks as regular events in the cluster. When the end user cannot normally access a task management page of DolphinScheduler through the browser, or when DolphinScheduler fails to run a regular big data task, it is already too late. 

However, before DolphinScheduler moved to cloud native, it could only rely on daily inspection to check whether the master, worker, API, and other components were running normally: through the DolphinScheduler management UI, or by checking whether the Java processes exist with jps. When an enterprise has hundreds of scheduling environments, this is not only costly, but more importantly, it poses a huge risk to system availability.

It is worth noting that Kubernetes itself can automatically restart and recover standardized applications deployed as StatefulSets or Deployments, and even CRD-based applications can be restarted and resumed automatically. When an application fails, an abnormal event is recorded and the pod is re-pulled and restarted; Kubernetes also records how many times the pod has been restarted, so technicians can quickly locate the fault.
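
For example, restart counts and the associated failure events can be inspected directly with kubectl; the pod name below is illustrative:

#Check restart counts of the scheduler pods
kubectl get pods -n ds139

#Inspect the events and the last termination state of a specific pod
kubectl describe pod dolphinscheduler-worker-0 -n ds139
kubectl get events -n ds139 --sort-by=.metadata.creationTimestamp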

In addition to standardized self-healing, there is active health monitoring. A service interface is built to actively probe the pods running DolphinScheduler through a livenessProbe, and this mechanism automatically restarts a pod once the probe exceeds the configured number of failed retries. In addition, with a readinessProbe, the Kubernetes cluster automatically cuts off traffic to an abnormal pod when the probe captures an exception, and automatically resumes traffic once the abnormal condition disappears.

livenessProbe:
  enabled: true
  initialDelaySeconds: "30"
  periodSeconds: "30"
  timeoutSeconds: "5"
  failureThreshold: "3"
  successThreshold: "1"
readinessProbe:
  enabled: true
  initialDelaySeconds: "30"
  periodSeconds: "30"
  timeoutSeconds: "5"
  failureThreshold: "3"
  successThreshold: "1"

Enhancing the Observability of DolphinScheduler

We know that Prometheus is already the de facto standard for monitoring in cloud-native systems, so incorporating DolphinScheduler’s standard monitoring into the Prometheus ecosystem made the most sense to us. Kube-Prometheus can monitor all resources in a Kubernetes cluster. StatefulSet, namespace, and pod are the three main resource types used by DolphinScheduler, so routine monitoring of CPU, memory, network, IO, replicas, etc., happens automatically without any additional development or configuration.

We use the Kube-Prometheus operator in Kubernetes to automatically monitor the resources of each Apache DolphinScheduler component after deployment. Note, however, that the Kube-Prometheus version needs to match the major version of Kubernetes.
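
There are several ways to install Kube-Prometheus; one option, shown here only as an assumption on our part (the kube-prometheus manifests project is another route), is the community kube-prometheus-stack Helm chart:

#Hypothetical example: install a Kube-Prometheus stack via the community Helm chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace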

Service Mesh Integration

As a data service provider, DolphinScheduler uses service mesh technology to make its service call chains observable and to incorporate them into the internal service governance system.

DolphinScheduler not only needs general resource monitoring but also requires monitoring of the service invocation chain. Service mesh technology provides observability analysis of both DolphinScheduler’s internal service calls and external calls to the DolphinScheduler API, which helps optimize the DolphinScheduler product’s services.

In addition, as a service component of data tooling, DolphinScheduler can be seamlessly integrated into the enterprise’s internal service model through service mesh tools. Without modifying the DolphinScheduler code, this enables features such as TLS-encrypted service communication, a client-side retry mechanism, and cross-cluster service registration and discovery.

We used Linkerd, one of the CNCF’s graduated projects, as the service mesh for this integration. By modifying the annotations in the values.yaml file of the Apache DolphinScheduler Helm chart and redeploying, we can quickly inject the mesh proxy sidecar into the master, worker, API, alert, and other components of DolphinScheduler.

annotations:
  linkerd.io/inject: enabled

You can also observe the quality of service communication between components including the number of requests per second.
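
For instance, with the Linkerd viz extension installed, per-component success rates, requests per second, and latencies can be pulled from the command line; this is a sketch that assumes the viz extension has been added to the cluster:

#Hypothetical example: observe success rate, RPS, and latency of the meshed components
linkerd viz install | kubectl apply -f -
linkerd viz stat deploy -n ds139
linkerd viz stat sts -n ds139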

Incorporating Cloud-Native Workflow Scheduling

To be a genuine cloud-native scheduling tool, DolphinScheduler needs to be able to schedule cloud-native job flows.

The tasks scheduled by DolphinScheduler are all executed in its fixed worker pods. In this mode, it is difficult to satisfy the isolation requirements of different task development technologies.

This is especially true for Python environments: a team will have different versions of Python and its base and dependency packages, and the differences can appear in hundreds of combinations; even a small difference in dependencies can cause a Python program to fail. This is the obstacle that stops DolphinScheduler from running a large number of Python applications. We recommend the following methods so that DolphinScheduler can quickly integrate with the Kubernetes job system and gain powerful task isolation and concurrency capabilities:

  • Use the standard Kubernetes API system for job submission. You can submit tasks directly through kubectl CLI commands or the REST API.
  • Upload the kubectl command file to DolphinScheduler and submit it via a DolphinScheduler shell task.
  • Use the Argo CLI or REST API of the Argo Workflows project to submit jobs.

Whether you use plain Kubernetes jobs or Argo Workflows, a watch step needs to be added, because submission in Kubernetes is asynchronous and the scheduler must wait for the task to complete.
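
As a minimal sketch of the plain-Kubernetes route, a DolphinScheduler shell task might contain something like the following; the job manifest, job name, and namespace are hypothetical:

#Hypothetical content of a DolphinScheduler shell task: submit a Kubernetes job
#and block until it completes (or fail after a timeout)
kubectl apply -f etl-job.yaml -n ds-jobs
kubectl wait --for=condition=complete job/etl-job -n ds-jobs --timeout=3600s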

Here we use Argo Workflows as the example. We can create a new shell task or step in DolphinScheduler and paste the following commands into it. As a result, we can combine normal data jobs (e.g., database SQL jobs, Spark jobs, or Flink jobs) and cloud-native jobs into a much more comprehensive job flow. For example, this job is a Hive SQL task that exports a web app’s user-click data:

beeline -u "jdbc:hive2://192.168.1.1:10006" --outputformat=csv2 -e "select * from database.user_click" > user-click.csv 

The next example job is a Python TensorFlow task that trains a machine learning model. The job is submitted via HTTP. First, we submit the workflow:

#Http way to run a Python Tensorflow job

curl --request POST -H "Authorization: ${ARGO_TOKEN}" -k \
       --url https://argo.abc.com/api/v1/workflows/argo \
       --header 'content-type: application/json' \
       --data '{
                "namespace": "argo",
                "serverDryRun": false,
                "workflow": {
                "metadata": {
                    "name": "python-tensorflow-job",
                    "namespace": "argo"
                },
                "spec": {
                    "templates": [
                    {
                        "name": "python-tensorflow",
                        "container": {
                        "image": "tensorflow/tensorflow:2.9.1",
                        "command": [
                            "python"
                        ],
                        "args": [
                            "training.py"
                        ],
                        "resources": {}
                        }
                    }
                    ],
                    "entrypoint": "python-tensorflow",
                    "serviceAccountName": "argo",
                    "arguments": {}
                   }
                }
               }'

We can then check the job information and status:

#Http way to check the Python Tensorflow job information and status
curl --request GET -H "Authorization: ${ARGO_TOKEN}" -k \
       --url https://argo.abc.com/api/v1/workflows/argo/python-tensorflow-job
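
Alternatively, if the Argo CLI is available inside the DolphinScheduler shell task, the same workflow can be submitted and watched in a single step; this is a sketch, and the manifest file name is hypothetical:

#Hypothetical example: submit the workflow with the Argo CLI and block until it finishes
argo submit -n argo --wait python-tensorflow-job.yaml
argo get -n argo python-tensorflow-job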

Upgrading From HDFS to S3 File Technology

Distributed algorithms are one of the areas enabled by cloud-native technology; for example, Google’s Kubeflow combines TensorFlow and Kubernetes. Distributed algorithms usually work with files, and S3 is the de facto standard for storing large data files that can be easily accessed. DolphinScheduler also integrates MinIO, which enables S3 file management with a simple configuration.

Start by modifying the configmap section in the Helm values.yaml file to point to a MinIO server:

configmap:
  DOLPHINSCHEDULER_OPTS: ""
  DATA_BASEDIR_PATH: "/tmp/dolphinscheduler"
  RESOURCE_STORAGE_TYPE: "S3"
  RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
  FS_DEFAULT_FS: "s3a://dfs"
  FS_S3A_ENDPOINT: "http://192.168.1.100:9000"
  FS_S3A_ACCESS_KEY: "admin"
  FS_S3A_SECRET_KEY: "password"

The MinIO bucket that stores DolphinScheduler files is named “dolphinscheduler”; the shared files that users upload via the DolphinScheduler UI are stored in this bucket.
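
The bucket has to exist before files can be uploaded; creating it with the MinIO client might look roughly like this, assuming the mc client and the endpoint and credentials from the configuration above:

#Hypothetical example: create the "dolphinscheduler" bucket with the MinIO client
mc alias set minio http://192.168.1.100:9000 admin password
mc mb minio/dolphinscheduler
mc ls minio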

Conclusion

As a new-generation cloud-native big data tool, Apache DolphinScheduler is expected to integrate more excellent tools and features in the Kubernetes ecosystem in the future to meet more requirements of diversified user groups and scenarios. Some of the items we currently have on our roadmap include:

  • Using a sidecar to periodically delete worker job logs to realize carefree operation and maintenance
  • Integrating more deeply with Argo Workflows, so users can call Argo Workflows for single jobs, DAG jobs, and periodic jobs in Apache DolphinScheduler through the API, CLI, etc.
  • Using HPA (Horizontal Pod Autoscaling) to automatically scale up or down any DolphinScheduler’s component to achieve a more resilient running environment and deal with the uncertain workload
  • Integrating the Spark operator and Flink operator to perform a comprehensive cloud-native based distributed computing
  • Implementing multi-cloud and multi-cluster distributed job scheduling, and strengthening the serverless and FaaS attributes of the architecture

For more information about Apache DolphinScheduler or to participate in the developer community, please reach out to us on our Slack channel.
