Applying PodSecurityPolicy on EKS

A memo on how to properly configure PodSecurityPolicy on EKS.

Component          Version   Notes
eksctl             0.31.0
Kubernetes         1.18
Platform version   eks.2
kube-psp-advisor   2.0.0
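
Both kubectl plugins used in this memo can be installed via krew. The exact krew plugin names are an assumption here (the memo only confirms that kube-psp-advisor is published as advise-psp):

# Install the kubectl plugins used below (plugin names assumed to exist in the krew index)
kubectl krew install psp-util
kubectl krew install advise-psp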

Reference links

Configuration method

The overall flow is as follows.

  1. Create restricted (the most tightly restricted PSP)
  2. Delete eks.privileged (the default, unrestricted PSP)
  3. Use kube-psp-advisor to work out which PSPs are needed (per ServiceAccount or per Namespace)
  4. Create the PSP/Role/RoleBinding
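
Throughout these steps it is useful to check which PSP actually validated a given Pod. The PSP admission controller records this in the kubernetes.io/psp annotation, so a check along these lines should work (the Pod name is just an example):

# Show which PodSecurityPolicy a Pod was validated against
kubectl get pod nginx -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}{"\n"}'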

Procedure

Creating the cluster

Create a cluster.

eksctl create cluster \
  --name=psp \
  --version 1.18 \
  --nodes=2 --managed \
  --ssh-access --ssh-public-key=default

Checking the default state

Check the default state. The psp-util plugin makes this easy to see. cluster-admin should also be allowed to use the PSP via its * rules, but that does not show up here.

$ k psp-util list
PSP              ClusterRole                        ClusterRoleBinding                    NS/Role   NS/RoleBinding   Managed
eks.privileged   eks:podsecuritypolicy:privileged   eks:podsecuritypolicy:authenticated                              false
$ k psp-util tree
📙 PSP eks.privileged
└── 📕 ClusterRole eks:podsecuritypolicy:privileged
    └── 📘 ClusterRoleBinding eks:podsecuritypolicy:authenticated
        └── 📗 Subject{Kind: Group, Name: system:authenticated, Namespace: }

There is only one PSP, eks.privileged, and it imposes no restrictions at all.

$ k get psp eks.privileged -o yaml | k neat
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubernetes.io/description: privileged allows full unrestricted access to pod features,
      as if the PodSecurityPolicy controller was not enabled.
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks.privileged
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'

The ClusterRole that is allowed to use this PSP is eks:podsecuritypolicy:privileged.

$ k get clusterrole eks:podsecuritypolicy:privileged -o yaml | k neat
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks:podsecuritypolicy:privileged
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

This ClusterRole is bound to the system:authenticated group.

$ k get clusterrolebinding eks:podsecuritypolicy:authenticated -o yaml | k neat
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubernetes.io/description: Allow all authenticated users to create privileged
      pods.
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks:podsecuritypolicy:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:podsecuritypolicy:privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated

Creating a restricted PSP

Create a restricted PSP. The ones in the EKS Best Practices guide and in the Cybozu blog are almost identical; only the annotations differ.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName:  'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
k apply -f restricted-psp.yaml

Make this PSP apply to all users and service accounts. The Cybozu blog also lists the system:serviceaccounts group, but that group seems to be covered by system:authenticated anyway.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - restricted

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
roleRef:
  kind: ClusterRole
  name: psp:restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
k apply -f restricted-clusterrole.yaml
k apply -f restricted-clusterrolebinding.yaml
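
As a quick sanity check of that assumption, impersonation can be used: impersonated requests are treated as authenticated, so a plain ServiceAccount should already be allowed to use the restricted PSP through the system:authenticated binding (the ServiceAccount here is just an example):

# Expect "yes": the psp:restricted ClusterRoleBinding covers any authenticated subject
kubectl auth can-i use podsecuritypolicy/restricted \
  --as=system:serviceaccount:default:default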

Deleting the unrestricted PSP

It is debatable whether to keep the default eks.privileged PSP, ClusterRole, and ClusterRoleBinding, but the behavior when multiple PSPs are usable looks tricky, so it seems better to avoid that situation.

If the default PSP is kept, it should remain usable by anything granted * on PSPs, such as cluster-admin.

As an experiment, delete only the ClusterRoleBinding for eks.privileged. The psp-util plugin can no longer display the relationship, but cluster-admin should in fact still be able to use this PSP.

k delete clusterrolebinding eks:podsecuritypolicy:authenticated
$ k psp-util tree
📙 PSP eks.privileged
└── 📕 ClusterRole eks:podsecuritypolicy:privileged

📙 PSP restricted
└── 📕 ClusterRole psp:restricted
    └── 📘 ClusterRoleBinding psp:restricted
        └── 📗 Subject{Kind: Group, Name: system:authenticated, Namespace: }
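
Although psp-util can no longer trace a path to eks.privileged, an RBAC check should confirm that cluster-admin, which is allowed every verb on every resource via *, can still use it:

# Run as the cluster-admin user; expect "yes" even without the ClusterRoleBinding
kubectl auth can-i use podsecuritypolicy/eks.privileged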

In this state, try creating an Nginx Pod that runs as the root user. Since it is created as cluster-admin, the Pod can be created.

$ k run nginx --image=nginx
pod/nginx created
$ k get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          7s
$ k delete pod nginx
pod "nginx" deleted

On the other hand, when a Deployment is created, the controller creates the Pod using a ServiceAccount's permissions, so the Pod cannot be created.

$ k create deploy nginx --image=nginx
deployment.apps/nginx created
$ k get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-rgxn9   0/1     Blocked   0          6s
$ k delete deploy nginx
deployment.apps "nginx" deleted
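
The reason the Pod stays blocked should be visible from the Pod's and the ReplicaSet's events, for example:

# Look for PSP validation warnings on the Pod and in recent events
kubectl describe pod -l app=nginx
kubectl get events --sort-by=.lastTimestamp | tail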

Delete the ClusterRole and the PSP as well.

k delete clusterrole eks:podsecuritypolicy:privileged
k delete psp eks.privileged

Now even cluster-admin can no longer create the Pod.

$ k run nginx --image=nginx
pod/nginx created
$ k get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Blocked   0          4s
$ k delete pod nginx
pod "nginx" deleted

Assigning appropriate PSPs

Components that were already running with privileges, such as aws-node, keep running as they are, but once deleted they are not recreated.

$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-mzv8g             1/1     Running   0          31s
kube-system   aws-node-vml6w             1/1     Running   0          19s
kube-system   coredns-86f7d88d77-hvmhb   1/1     Running   0          3d19h
kube-system   coredns-86f7d88d77-xxbgt   1/1     Running   0          3d19h
kube-system   kube-proxy-fxlp9           1/1     Running   0          13m
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h
$ k delete pod -n kube-system aws-node-mzv8g
pod "aws-node-mzv8g" deleted
$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-vml6w             1/1     Running   0          68s
kube-system   coredns-86f7d88d77-hvmhb   1/1     Running   0          3d19h
kube-system   coredns-86f7d88d77-xxbgt   1/1     Running   0          3d19h
kube-system   kube-proxy-fxlp9           1/1     Running   0          13m
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h

Earlier the Pod showed up as Blocked, but this time nothing appears at all. A FailedCreate event can be seen instead.

$ kubectl get ev --sort-by=.lastTimestamp
LAST SEEN   TYPE      REASON             OBJECT                 MESSAGE
(omitted)
2m5s        Warning   FailedCreate       daemonset/aws-node     Error creating: pods "aws-node-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[0].securityContext.capabilities.add: Invalid value: "NET_ADMIN": capability may not be added spec.containers[0].hostPort: Invalid value: 61678: Host port 61678 is not allowed to be used. Allowed ports: []]
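
The missing Pod also shows up on the DaemonSet itself, where the DESIRED and READY counts no longer match:

# The DaemonSet cannot recreate its Pod, so the counts stay off by one
kubectl get daemonset aws-node -n kube-system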

To grant the necessary permissions, use the kube-psp-advisor plugin (named advise-psp when installed via krew). The tool appears to generate the required PSP definitions from manifest files or from the Pods running in the environment. That also means that if the current Pods are not properly hardened, the generated PSP will likely allow more than it should.

  • ./kube-psp-advisor inspect --report shows the analysis results
  • ./kube-psp-advisor inspect --grant generates the YAML needed to assign a PSP to each individual ServiceAccount
  • ./kube-psp-advisor inspect --namespace=<ns> generates the PSP to assign to all ServiceAccounts in that Namespace
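
The by-namespace.yaml and by-serviceaccount.yaml files applied later are presumably just this output redirected to files and then edited by hand:

# Generate starting points for the manifests used below, then fix them up manually
kubectl advise-psp inspect --namespace kube-system > by-namespace.yaml
kubectl advise-psp inspect --grant --namespace kube-system > by-serviceaccount.yaml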

It seems best to pick one of the following two approaches, weighing how much configuration work each involves.

  • Generate and apply a PSP/Role/RoleBinding per ServiceAccount
  • Generate and apply the required PSP per Namespace

Assigning per Namespace

Generate the PSP needed for kube-system.

$ kubectl advise-psp inspect --namespace kube-system
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  name: pod-security-policy-kube-system-20201124000007
spec:
  allowedCapabilities:
  - NET_ADMIN
  allowedHostPaths:
  - pathPrefix: /var/run/dockershim.sock
    readOnly: true
  - pathPrefix: /run/xtables.lock
    readOnly: true
  - pathPrefix: /var/log/aws-routed-eni
    readOnly: true
  - pathPrefix: /var/run/aws-node
    readOnly: true
  - pathPrefix: /var/log
    readOnly: true
  - pathPrefix: /lib/modules
    readOnly: true
  - pathPrefix: /opt/cni/bin
    readOnly: true
  - pathPrefix: /etc/cni/net.d
    readOnly: true
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 0
    min: 0
  - max: 61678
    min: 61678
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - hostPath
  - secret
  - configMap
  - emptyDir

There seem to be some bugs, so fixes are needed.

  • every allowedHostPaths entry ends up with readOnly: true
  • coredns drops all capabilities and then adds NET_BIND_SERVICE back, but this is not picked up

Fix the buggy parts, turn this into a YAML that assigns the PSP to all ServiceAccounts in kube-system, and apply it.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: pod-security-policy-kube-system
spec:
  allowedCapabilities:
  - NET_ADMIN
  - NET_BIND_SERVICE
  allowedHostPaths:
  - pathPrefix: /var/log/aws-routed-eni
    readOnly: false
  - pathPrefix: /var/run/aws-node
    readOnly: false
  - pathPrefix: /lib/modules
    readOnly: true
  - pathPrefix: /var/log
    readOnly: false
  - pathPrefix: /opt/cni/bin
    readOnly: false
  - pathPrefix: /etc/cni/net.d
    readOnly: false
  - pathPrefix: /var/run/dockershim.sock
    readOnly: false
  - pathPrefix: /run/xtables.lock
    readOnly: false
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 0
    min: 0
  - max: 61678
    min: 61678
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - hostPath
  - secret
  - configMap
  - emptyDir
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-psp-by-kube-system
  namespace: kube-system
rules:
- apiGroups:
  - policy
  resourceNames:
  - pod-security-policy-kube-system
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-psp-by-kube-system
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-psp-by-kube-system
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:kube-system
k apply -f by-namespace.yaml
$ k psp-util tree
📙 PSP pod-security-policy-kube-system
└── 📓 Role kube-system/use-psp-by-kube-system
    └── 📓 RoleBinding kube-system/use-psp-by-kube-system
        └── 📗 Subject{Kind: Group, Name: system:serviceaccounts:kube-system, Namespace: }

📙 PSP restricted
└── 📕 ClusterRole psp:restricted
    └── 📘 ClusterRoleBinding psp:restricted
        └── 📗 Subject{Kind: Group, Name: system:authenticated, Namespace: }

Delete the Pods and confirm that they are recreated.

$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-8sz9l             1/1     Running   0          17m
kube-system   aws-node-p7cb6             1/1     Running   0          17m
kube-system   coredns-86f7d88d77-smnr9   1/1     Running   0          13s
kube-system   coredns-86f7d88d77-wlbdr   1/1     Running   0          13s
kube-system   kube-proxy-9crr5           1/1     Running   0          16m
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h
$ k delete pod -n kube-system aws-node-8sz9l coredns-86f7d88d77-smnr9 kube-proxy-9crr5
pod "aws-node-8sz9l" deleted
pod "coredns-86f7d88d77-smnr9" deleted
pod "kube-proxy-9crr5" deleted
$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-p7cb6             1/1     Running   0          18m
kube-system   aws-node-qzgh7             1/1     Running   0          16s
kube-system   coredns-86f7d88d77-9mklc   1/1     Running   0          33s
kube-system   coredns-86f7d88d77-wlbdr   1/1     Running   0          70s
kube-system   kube-proxy-86sbr           1/1     Running   0          28s
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h
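
Which PSP the recreated Pods were validated against can be confirmed from the kubernetes.io/psp annotation; aws-node, for example, can only be admitted by the new kube-system policy:

# List each kube-system Pod together with the PSP that validated it
kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.kubernetes\.io/psp}{"\n"}{end}'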

Delete the PSP/Role/RoleBinding that were just created.

k delete -f by-namespace.yaml

Assigning per ServiceAccount

With the --grant option, the tool generates the YAML needed to configure a PSP per ServiceAccount, including the Role and RoleBinding.

$ k advise-psp inspect --grant --namespace kube-system
# Pod security policies will be created for service account 'aws-node' in namespace 'kube-system' with following workdloads:
#   Kind: DaemonSet, Name: aws-node, Image: 602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon-k8s-cni-init:v1.7.5-eksbuild.1
#   Kind: DaemonSet, Name: aws-node, Image: 602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon-k8s-cni:v1.7.5-eksbuild.1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  name: psp-for-kube-system-aws-node
spec:
  allowedCapabilities:
  - NET_ADMIN
  allowedHostPaths:
  - pathPrefix: /etc/cni/net.d
    readOnly: true
  - pathPrefix: /var/run/dockershim.sock
    readOnly: true
  - pathPrefix: /run/xtables.lock
    readOnly: true
  - pathPrefix: /var/log/aws-routed-eni
    readOnly: true
  - pathPrefix: /var/run/aws-node
    readOnly: true
  - pathPrefix: /opt/cni/bin
    readOnly: true
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 61678
    min: 61678
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - hostPath
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:aws-node
  namespace: kube-system
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp-for-kube-system-aws-node
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:aws-node-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-psp-by-kube-system:aws-node
subjects:
- kind: ServiceAccount
  name: aws-node
  namespace: kube-system
---
# Pod security policies will be created for service account 'coredns' in namespace 'kube-system' with following workdloads:
#   Kind: Deployment, Name: coredns, Image: 602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/eks/coredns:v1.7.0-eksbuild.1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  name: psp-for-kube-system-coredns
spec:
  allowPrivilegeEscalation: false
  defaultAddCapabilities:
  - NET_BIND_SERVICE
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 0
    min: 0
  readOnlyRootFilesystem: true
  requiredDropCapabilities:
  - all
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - emptyDir
  - configMap
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:coredns
  namespace: kube-system
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp-for-kube-system-coredns
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:coredns-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-psp-by-kube-system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
# Pod security policies will be created for service account 'kube-proxy' in namespace 'kube-system' with following workdloads:
#   Kind: DaemonSet, Name: kube-proxy, Image: 602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/eks/kube-proxy:v1.18.8-eksbuild.1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  name: psp-for-kube-system-kube-proxy
spec:
  allowedHostPaths:
  - pathPrefix: /var/log
    readOnly: true
  - pathPrefix: /run/xtables.lock
    readOnly: true
  - pathPrefix: /lib/modules
    readOnly: true
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - hostPath
  - configMap
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:kube-proxy
  namespace: kube-system
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp-for-kube-system-kube-proxy
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: use-psp-by-kube-system:kube-proxy-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-psp-by-kube-system:kube-proxy
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
---

As before, every allowedHostPaths entry is set to readOnly: true and needs fixing. The NET_BIND_SERVICE handling for coredns looks fine this time.

Fix the buggy parts of the YAML as described above and apply it.

k apply -f by-serviceaccount.yaml
$ k psp-util tree
📙 PSP psp-for-kube-system-aws-node
└── 📓 Role kube-system/use-psp-by-kube-system:aws-node
    └── 📓 RoleBinding kube-system/use-psp-by-kube-system:aws-node-binding
        └── 📗 Subject{Kind: ServiceAccount, Name: aws-node, Namespace: kube-system}

📙 PSP psp-for-kube-system-coredns
└── 📓 Role kube-system/use-psp-by-kube-system:coredns
    └── 📓 RoleBinding kube-system/use-psp-by-kube-system:coredns-binding
        └── 📗 Subject{Kind: ServiceAccount, Name: coredns, Namespace: kube-system}

📙 PSP psp-for-kube-system-kube-proxy
└── 📓 Role kube-system/use-psp-by-kube-system:kube-proxy
    └── 📓 RoleBinding kube-system/use-psp-by-kube-system:kube-proxy-binding
        └── 📗 Subject{Kind: ServiceAccount, Name: kube-proxy, Namespace: kube-system}

📙 PSP restricted
└── 📕 ClusterRole psp:restricted
    └── 📘 ClusterRoleBinding psp:restricted
        └── 📗 Subject{Kind: Group, Name: system:authenticated, Namespace: }

Delete the Pods and confirm that they are recreated.

$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-p7cb6             1/1     Running   0          19m
kube-system   aws-node-qzgh7             1/1     Running   0          80s
kube-system   coredns-86f7d88d77-9mklc   1/1     Running   0          97s
kube-system   coredns-86f7d88d77-wlbdr   1/1     Running   0          2m14s
kube-system   kube-proxy-86sbr           1/1     Running   0          92s
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h
$ k delete pod -n kube-system aws-node-p7cb6 coredns-86f7d88d77-9mklc kube-proxy-86sbr
pod "aws-node-p7cb6" deleted
pod "coredns-86f7d88d77-9mklc" deleted
pod "kube-proxy-86sbr" deleted
$ k get pod -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-qzgh7             1/1     Running   0          2m5s
kube-system   aws-node-tg859             1/1     Running   0          11s
kube-system   coredns-86f7d88d77-wlbdr   1/1     Running   0          2m59s
kube-system   coredns-86f7d88d77-wsjdc   1/1     Running   0          23s
kube-system   kube-proxy-bw299           1/1     Running   0          21s
kube-system   kube-proxy-l4b8v           1/1     Running   0          3d19h