This section describes how to update a DaemonSet using the RollingUpdate strategy. Because a DaemonSet runs exactly one copy of a given Pod per node, it cannot create extra Pods during an update; unlike a Deployment, you therefore cannot set the number of Pods that may temporarily exceed the desired count (maxSurge). A rolling update is performed by specifying only the number of Pods that may be stopped at one time (maxUnavailable).
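As an aside (not part of the original walkthrough), the fields available under updateStrategy can be inspected directly from the cluster with kubectl explain, which is a convenient way to confirm which strategies and parameters your Kubernetes version supports:

kubectl explain daemonset.spec.updateStrategy
kubectl explain daemonset.spec.updateStrategy.rollingUpdate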
Create a new sample manifest file and enter the following code.
[root@kube-master sample-daemonset]# vi sample-daemonset-rollingupdate.yaml
Set type: RollingUpdate under spec.updateStrategy, and specify maxUnavailable under rollingUpdate. Here maxUnavailable is set to 1 so that the Pods are updated one at a time.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sample-daemonset-rollingupdate
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
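Before applying the manifest, you may want to check it for syntax errors. A dry run is one way to do this (an optional check, not part of the original steps; on newer kubectl versions the flag is written as --dry-run=client):

kubectl apply -f sample-daemonset-rollingupdate.yaml --dry-run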
From the Master server, apply the manifest you created to create the resource on the Kubernetes cluster.
[root@kube-master sample-daemonset]# kubectl apply -f sample-daemonset-rollingupdate.yaml --record
daemonset.apps/sample-daemonset-rollingupdate created
[root@kube-master sample-daemonset]#
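To watch the DaemonSet finish rolling out, kubectl rollout status can be used (an optional check, not part of the original walkthrough). It blocks until all Pods managed by the DaemonSet are up to date and available:

kubectl rollout status daemonset/sample-daemonset-rollingupdate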
From the Master server, check the Pod resources on the Kubernetes cluster. You can see that one Pod is running on each node.
[root@kube-master sample-daemonset]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
sample-daemonset-rollingupdate-hgg4x   1/1     Running   0          4s    10.244.2.81    kube-work2   <none>           <none>
sample-daemonset-rollingupdate-scgmk   1/1     Running   0          4s    10.244.1.151   kube-work1   <none>           <none>
[root@kube-master sample-daemonset]#
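If you prefer a summary over per-Pod output, the DaemonSet resource itself reports the desired, current, and up-to-date Pod counts (an optional check, not part of the original steps):

kubectl get daemonset sample-daemonset-rollingupdate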
From the Master server, check the detailed information of the Pods on the Kubernetes cluster. Here you can confirm that the container in each Pod is running the nginx:1.12 image.
[root@kube-master sample-daemonset]# kubectl describe pods sample-daemonset-rollingupdate
Name:               sample-daemonset-rollingupdate-hgg4x
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               kube-work2/192.168.25.102
Start Time:         Sat, 02 Feb 2019 00:48:25 +0900
Labels:             app=sample-app
                    controller-revision-hash=d88f4f445
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 10.244.2.81
Controlled By:      DaemonSet/sample-daemonset-rollingupdate
Containers:
  nginx-container:
    Container ID:   docker://d5f03f5aab0f7bafbd12840c895e18acf33a93092a81c7f991ef146f8c2c6e2b
    Image:          nginx:1.12
    Image ID:       docker-pullable://nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 02 Feb 2019 00:48:27 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-75dfq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-75dfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-75dfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m24s  default-scheduler    Successfully assigned default/sample-daemonset-rollingupdate-hgg4x to kube-work2
  Normal  Pulled     2m23s  kubelet, kube-work2  Container image "nginx:1.12" already present on machine
  Normal  Created    2m23s  kubelet, kube-work2  Created container
  Normal  Started    2m22s  kubelet, kube-work2  Started container

Name:               sample-daemonset-rollingupdate-scgmk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               kube-work1/192.168.25.101
Start Time:         Sat, 02 Feb 2019 00:48:25 +0900
Labels:             app=sample-app
                    controller-revision-hash=d88f4f445
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 10.244.1.151
Controlled By:      DaemonSet/sample-daemonset-rollingupdate
Containers:
  nginx-container:
    Container ID:   docker://471f695699082509c83dfc68842ce785ec1733be05e027b61d24c57709ee7575
    Image:          nginx:1.12
    Image ID:       docker-pullable://nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 02 Feb 2019 00:48:26 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-75dfq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-75dfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-75dfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m24s  default-scheduler    Successfully assigned default/sample-daemonset-rollingupdate-scgmk to kube-work1
  Normal  Pulled     2m23s  kubelet, kube-work1  Container image "nginx:1.12" already present on machine
  Normal  Created    2m23s  kubelet, kube-work1  Created container
  Normal  Started    2m23s  kubelet, kube-work1  Started container
[root@kube-master sample-daemonset]#
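Because kubectl describe is verbose, a jsonpath query is a compact alternative for checking only the image of each Pod (an optional shortcut, not part of the original walkthrough; the app=sample-app label comes from the manifest above):

kubectl get pods -l app=sample-app -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'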
Edit the sample manifest file to change the container image.
[root@kube-master sample-daemonset]# vi sample-daemonset-rollingupdate.yaml
Change the container image under spec.containers from nginx:1.12 to nginx:1.13.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sample-daemonset-rollingupdate
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.13
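The same change could also be made without editing the file, by using kubectl set image against the DaemonSet (shown only for reference; this walkthrough keeps the manifest as the source of truth and applies the file instead):

kubectl set image daemonset/sample-daemonset-rollingupdate nginx-container=nginx:1.13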
From the Master server, apply the modified manifest to update the resource on the Kubernetes cluster.
[root@kube-master sample-daemonset]# kubectl apply -f sample-daemonset-rollingupdate.yaml --record
daemonset.apps/sample-daemonset-rollingupdate configured
[root@kube-master sample-daemonset]#
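Because the manifest was applied with --record, the change is stored as a new revision of the DaemonSet. The revisions can be listed with kubectl rollout history (an optional check, not part of the original steps):

kubectl rollout history daemonset/sample-daemonset-rollingupdate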
From the Master server, check the Pod resources on the Kubernetes cluster. Here you can see that the Pod names have changed, which shows that the Pods have been recreated from the new template.
[root@kube-master sample-daemonset]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
sample-daemonset-rollingupdate-6szg4   1/1     Running   0          9s    10.244.2.82    kube-work2   <none>           <none>
sample-daemonset-rollingupdate-g8qvn   1/1     Running   0          12s   10.244.1.152   kube-work1   <none>           <none>
[root@kube-master sample-daemonset]#
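If you run kubectl get pods with the -w (watch) flag while the update is in progress, you can observe that the Pods are terminated and recreated one at a time, which matches the maxUnavailable: 1 setting (an optional observation, not part of the original walkthrough):

kubectl get pods -l app=sample-app -o wide -w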
From the Master server, check the detailed information of the Pods on the Kubernetes cluster. Here you can confirm that the container image in each Pod is now nginx:1.13.
[root@kube-master sample-daemonset]# kubectl describe pods sample-daemonset-rollingupdate
Name:               sample-daemonset-rollingupdate-6szg4
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               kube-work2/192.168.25.102
Start Time:         Sat, 02 Feb 2019 00:55:25 +0900
Labels:             app=sample-app
                    controller-revision-hash=679c64dbd8
                    pod-template-generation=2
Annotations:        <none>
Status:             Running
IP:                 10.244.2.82
Controlled By:      DaemonSet/sample-daemonset-rollingupdate
Containers:
  nginx-container:
    Container ID:   docker://2c2aa9d26ffb5ced6280db110fcb6739914341722f7634d6965b981fc97ff72e
    Image:          nginx:1.13
    Image ID:       docker-pullable://nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 02 Feb 2019 00:55:26 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-75dfq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-75dfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-75dfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m30s  default-scheduler    Successfully assigned default/sample-daemonset-rollingupdate-6szg4 to kube-work2
  Normal  Pulled     2m30s  kubelet, kube-work2  Container image "nginx:1.13" already present on machine
  Normal  Created    2m30s  kubelet, kube-work2  Created container
  Normal  Started    2m29s  kubelet, kube-work2  Started container

Name:               sample-daemonset-rollingupdate-g8qvn
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               kube-work1/192.168.25.101
Start Time:         Sat, 02 Feb 2019 00:55:22 +0900
Labels:             app=sample-app
                    controller-revision-hash=679c64dbd8
                    pod-template-generation=2
Annotations:        <none>
Status:             Running
IP:                 10.244.1.152
Controlled By:      DaemonSet/sample-daemonset-rollingupdate
Containers:
  nginx-container:
    Container ID:   docker://687b2d1d676bec38ec0f08625e5319add125ac5b8547cb2189aaa6fde8bceb95
    Image:          nginx:1.13
    Image ID:       docker-pullable://nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 02 Feb 2019 00:55:23 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-75dfq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-75dfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-75dfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m33s  default-scheduler    Successfully assigned default/sample-daemonset-rollingupdate-g8qvn to kube-work1
  Normal  Pulled     2m32s  kubelet, kube-work1  Container image "nginx:1.13" already present on machine
  Normal  Created    2m32s  kubelet, kube-work1  Created container
  Normal  Started    2m32s  kubelet, kube-work1  Started container
[root@kube-master sample-daemonset]#
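The controller-revision-hash label seen above refers to a ControllerRevision object that stores each template revision of the DaemonSet. Listing these objects is another way to confirm that the update created a second revision (an optional check, not part of the original steps; the selector assumes the ControllerRevisions carry the app=sample-app label from the Pod template):

kubectl get controllerrevisions -l app=sample-app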
From the Master server, delete the DaemonSet resource you created.
[root@kube-master sample-daemonset]# kubectl delete daemonset sample-daemonset-rollingupdate
daemonset.extensions "sample-daemonset-rollingupdate" deleted
[root@kube-master sample-daemonset]#
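Deleting by way of the manifest file gives the same result and avoids typing the resource name (shown only as a reference alternative, not part of the original steps):

kubectl delete -f sample-daemonset-rollingupdate.yaml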
From the Master server, check the Pod resources on the Kubernetes cluster. You can confirm that the Pods created by the DaemonSet resource have been deleted.
[root@kube-master sample-daemonset]# kubectl get pods -o wide
No resources found.
[root@kube-master sample-daemonset]#