Trying the Starboard Operator

Notes from trying out the Starboard Operator.

Starboard comes in two forms: a CLI and an Operator.

Installation

It can be installed with static manifests, Helm, or the Operator Lifecycle Manager. Here we install it with Helm.

helm repo add aqua https://aquasecurity.github.io/helm-charts/
helm repo update
helm upgrade --install starboard-operator aqua/starboard-operator \
  -n starboard-operator --create-namespace \
  --set=targetNamespaces="" \
  --version 0.5.3

Give the operator read access to ECR.

eksctl create iamserviceaccount \
  --name starboard-operator \
  --namespace starboard-operator \
  --cluster staging \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --approve \
  --override-existing-serviceaccounts

Trivy downloads its vulnerability database from GitHub, so create a personal access token and set it to avoid GitHub rate limiting. The token does not need any scopes.

GITHUB_TOKEN=<your token>

kubectl patch secret starboard -n starboard-operator \
  --type merge \
  -p "$(cat <<EOF
{
  "data": {
    "trivy.githubToken": "$(echo -n $GITHUB_TOKEN | base64)"
  }
}
EOF
)"

Delete the Pod once so the change is picked up, just in case.

k delete pod -n starboard-operator --all

Check that the Pod is running.

$ k get pod -n starboard-operator
NAME                                  READY   STATUS    RESTARTS   AGE
starboard-operator-7fff5747c4-zwp59   1/1     Running   0          55s

Scanning

Check the vulnerability scan results.

$ kubectl get vulnerabilityreports -o wide -A
NAMESPACE            NAME                                                                      REPOSITORY                                     TAG                  SCANNER   AGE     CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
argocd               replicaset-argocd-dex-server-5dd657bd9-dex                                dexidp/dex                                     v2.27.0              Trivy     80s     0          9      8        3     0
argocd               replicaset-argocd-dex-server-66ff89cb7b-dex                               dexidp/dex                                     v2.27.0              Trivy     111s    0          9      8        3     0
argocd               replicaset-argocd-dex-server-fd74c7c8c-dex                                dexidp/dex                                     v2.27.0              Trivy     110s    0          9      8        3     0
argocd               replicaset-argocd-redis-66b48966cb-redis                                  library/redis                                  5.0.10-alpine        Trivy     56s     0          5      2        0     0
argocd               replicaset-argocd-redis-759b6bc7f4-redis                                  library/redis                                  6.2.1-alpine         Trivy     112s    0          0      0        0     0
argocd               replicaset-argocd-repo-server-6c495f858f-argocd-repo-server               argoproj/argocd                                v2.0.0               Trivy     28s     0          6      43       110   0
argocd               replicaset-argocd-repo-server-79d884f4f6-argocd-repo-server               argoproj/argocd                                v1.8.2               Trivy     40s     4          138    95       471   5
argocd               replicaset-argocd-repo-server-84d58ff546-argocd-repo-server               argoproj/argocd                                v2.0.1               Trivy     81s     0          3      38       108   0
argocd               replicaset-argocd-server-6dccb89f65-argocd-server                         argoproj/argocd                                v1.8.2               Trivy     35s     4          138    95       471   5
argocd               replicaset-argocd-server-7fd556c67c-argocd-server                         argoproj/argocd                                v2.0.1               Trivy     2m51s   0          3      38       108   0
argocd               replicaset-argocd-server-859b4b5578-argocd-server                         argoproj/argocd                                v2.0.0               Trivy     2m29s   0          6      43       110   0
argocd               statefulset-argocd-application-controller-argocd-application-controller   argoproj/argocd                                v2.0.1               Trivy     2m53s   0          3      38       108   0
backend              replicaset-backend-678944684b-backend                                     backend                                        75994d8              Trivy     75s     2          22     11       74    0
backend              replicaset-backend-7945cd669c-backend                                     backend                                        c65764f              Trivy     3m23s   2          22     11       74    0
backend              replicaset-backend-7d8b8f99cc-backend                                     backend                                        9ac248f              Trivy     80s     2          22     11       74    0
backend              replicaset-backend-b68bc665c-backend                                      backend                                        3d5f54d              Trivy     2m28s   0          7      4        2     0
calico-system        daemonset-calico-node-calico-node                                         calico/node                                    v3.17.1              Trivy     3m23s   0          0      0        0     0
calico-system        replicaset-calico-kube-controllers-5d786d9bbc-calico-kube-controllers     calico/kube-controllers                        v3.17.1              Trivy     110s    0          0      0        0     0
calico-system        replicaset-calico-typha-74fdb8b6f-calico-typha                            calico/typha                                   v3.17.1              Trivy     3m22s   0          0      0        0     0
cert-manager         replicaset-cert-manager-649c5f88bc-cert-manager                           jetstack/cert-manager-controller               v1.0.2               Trivy     2m26s   0          0      0        0     0
cert-manager         replicaset-cert-manager-68ff46b886-cert-manager                           jetstack/cert-manager-controller               v1.1.1               Trivy     82s     0          0      0        0     0
cert-manager         replicaset-cert-manager-cainjector-7cdbb9c945-cert-manager                jetstack/cert-manager-cainjector               v1.1.1               Trivy     2m26s   0          0      0        0     0
cert-manager         replicaset-cert-manager-cainjector-9747d56-cert-manager                   jetstack/cert-manager-cainjector               v1.0.2               Trivy     2m51s   0          0      0        0     0
cert-manager         replicaset-cert-manager-webhook-67584ff488-cert-manager                   jetstack/cert-manager-webhook                  v1.1.1               Trivy     3m22s   0          0      0        0     0
cert-manager         replicaset-cert-manager-webhook-849c7b574f-cert-manager                   jetstack/cert-manager-webhook                  v1.0.2               Trivy     82s     0          0      0        0     0
default              replicaset-nginx-6d4cf56db6-nginx                                         library/nginx                                  1.16                 Trivy     2m48s   13         45     29       92    0
default              replicaset-nginx-db749865c-nginx                                          library/nginx                                  1.17                 Trivy     2m52s   13         43     27       92    0
external-secrets     replicaset-external-secrets-56fbfc9687-kubernetes-external-secrets        external-secrets/kubernetes-external-secrets   7.2.1                Trivy     2m47s   0          0      0        0     0
external-secrets     replicaset-external-secrets-658cc9b744-kubernetes-external-secrets        godaddy/kubernetes-external-secrets            6.0.0                Trivy     3m18s   0          12     9        2     0
external-secrets     replicaset-external-secrets-69444c8577-kubernetes-external-secrets        external-secrets/kubernetes-external-secrets   6.1.0                Trivy     2m47s   0          10     9        2     0
external-secrets     replicaset-external-secrets-7cfc59f6d7-kubernetes-external-secrets        external-secrets/kubernetes-external-secrets   7.2.1                Trivy     2m51s   0          0      0        0     0
frontend             replicaset-frontend-57b979f9bb-frontend                                   frontend                                       bc03a29              Trivy     2m26s   2          22     11       74    0
frontend             replicaset-frontend-66bc7f9b57-frontend                                   frontend                                       0845ad7              Trivy     110s    0          7      4        2     0
frontend             replicaset-frontend-66d48f89df-frontend                                   frontend                                       9f0263c              Trivy     3m21s   2          22     11       74    0
frontend             replicaset-frontend-675b6f8bfb-frontend                                   frontend                                       a12db35              Trivy     2m27s   2          22     11       74    0
frontend             replicaset-frontend-7cc57c4fb4-frontend                                   frontend                                       48aa94e              Trivy     76s     2          22     11       74    0
frontend             replicaset-frontend-844fb64db4-frontend                                   frontend                                       aa38612              Trivy     55s     2          22     11       74    0
frontend             replicaset-frontend-dc89db794-frontend                                    frontend                                       0845ad7              Trivy     112s    0          7      4        2     0
frontend             replicaset-frontend-f487b9f88-frontend                                    frontend                                       8b40ef7              Trivy     78s     2          22     11       74    0
gatekeeper-system    replicaset-gatekeeper-audit-54b5f86d57-manager                            openpolicyagent/gatekeeper                     v3.3.0               Trivy     110s    0          0      0        0     0
gatekeeper-system    replicaset-gatekeeper-controller-manager-5b96bd668-manager                openpolicyagent/gatekeeper                     v3.3.0               Trivy     55s     0          0      0        0     0
kube-system          daemonset-aws-node-aws-node                                               amazon-k8s-cni                                 v1.7.10-eksbuild.1   Trivy     3m20s   0          0      0        0     0
kube-system          daemonset-kube-proxy-kube-proxy                                           eks/kube-proxy                                 v1.19.6-eksbuild.2   Trivy     3m20s   2          22     12       75    0
kube-system          replicaset-aws-load-balancer-controller-85ff4bfbc7-controller             amazon/aws-alb-ingress-controller              v2.1.0               Trivy     52s     0          0      0        0     0
kube-system          replicaset-aws-load-balancer-controller-dd979d56b-controller              amazon/aws-alb-ingress-controller              v2.1.3               Trivy     2m44s   0          0      0        0     0
kube-system          replicaset-coredns-59847d77c8-coredns                                     eks/coredns                                    v1.8.0-eksbuild.1    Trivy     2m30s   0          0      0        0     0
kube-system          replicaset-coredns-86f7d88d77-coredns                                     eks/coredns                                    v1.7.0-eksbuild.1    Trivy     111s    0          0      0        0     0
starboard-operator   replicaset-starboard-operator-7fff5747c4-starboard-operator               aquasec/starboard-operator                     0.10.3               Trivy     110s    0          0      0        0     0
tigera-operator      replicaset-tigera-operator-657cc89589-tigera-operator                     tigera/operator                                v1.13.2              Trivy     83s     0          0      0        0     0

Check the configuration audit results.

$ kubectl get configauditreports -o wide -A
NAMESPACE            NAME                                                 SCANNER   AGE   DANGER   WARNING   PASS
argocd               replicaset-argocd-dex-server-5dd657bd9               Polaris   25m   2        12        11
argocd               replicaset-argocd-dex-server-66ff89cb7b              Polaris   25m   2        12        11
argocd               replicaset-argocd-dex-server-fd74c7c8c               Polaris   25m   2        12        11
argocd               replicaset-argocd-redis-66b48966cb                   Polaris   25m   1        8         8
argocd               replicaset-argocd-redis-759b6bc7f4                   Polaris   25m   1        8         8
argocd               replicaset-argocd-repo-server-6c495f858f             Polaris   25m   0        7         10
argocd               replicaset-argocd-repo-server-79d884f4f6             Polaris   25m   1        7         9
argocd               replicaset-argocd-repo-server-84d58ff546             Polaris   25m   0        7         10
argocd               replicaset-argocd-server-6dccb89f65                  Polaris   22m   1        7         9
argocd               replicaset-argocd-server-7fd556c67c                  Polaris   25m   0        7         10
argocd               replicaset-argocd-server-859b4b5578                  Polaris   23m   0        7         10
argocd               statefulset-argocd-application-controller            Polaris   25m   0        7         10
backend              replicaset-backend-678944684b                        Polaris   24m   1        9         7
backend              replicaset-backend-7945cd669c                        Polaris   23m   1        9         7
backend              replicaset-backend-7d8b8f99cc                        Polaris   24m   1        9         7
backend              replicaset-backend-b68bc665c                         Polaris   22m   1        9         7
calico-system        daemonset-calico-node                                Polaris   25m   4        11        10
calico-system        replicaset-calico-kube-controllers-5d786d9bbc        Polaris   23m   1        8         8
calico-system        replicaset-calico-typha-74fdb8b6f                    Polaris   24m   1        9         7
cert-manager         replicaset-cert-manager-649c5f88bc                   Polaris   23m   1        1         7
cert-manager         replicaset-cert-manager-68ff46b886                   Polaris   22m   1        1         7
cert-manager         replicaset-cert-manager-cainjector-7cdbb9c945        Polaris   24m   1        1         7
cert-manager         replicaset-cert-manager-cainjector-9747d56           Polaris   24m   1        1         7
cert-manager         replicaset-cert-manager-webhook-67584ff488           Polaris   23m   1        1         7
cert-manager         replicaset-cert-manager-webhook-849c7b574f           Polaris   22m   1        1         7
default              replicaset-nginx-6d4cf56db6                          Polaris   23m   1        9         7
default              replicaset-nginx-db749865c                           Polaris   22m   1        9         7
external-secrets     replicaset-external-secrets-56fbfc9687               Polaris   25m   1        8         8
external-secrets     replicaset-external-secrets-658cc9b744               Polaris   22m   1        8         8
external-secrets     replicaset-external-secrets-69444c8577               Polaris   23m   1        8         8
external-secrets     replicaset-external-secrets-7cfc59f6d7               Polaris   24m   1        8         8
frontend             replicaset-frontend-57b979f9bb                       Polaris   22m   1        9         7
frontend             replicaset-frontend-66bc7f9b57                       Polaris   24m   1        9         7
frontend             replicaset-frontend-66d48f89df                       Polaris   24m   1        9         7
frontend             replicaset-frontend-675b6f8bfb                       Polaris   24m   1        9         7
frontend             replicaset-frontend-7cc57c4fb4                       Polaris   23m   1        9         7
frontend             replicaset-frontend-844fb64db4                       Polaris   23m   1        9         7
frontend             replicaset-frontend-dc89db794                        Polaris   22m   1        9         7
frontend             replicaset-frontend-f487b9f88                        Polaris   24m   1        9         7
gatekeeper-system    replicaset-gatekeeper-audit-54b5f86d57               Polaris   25m   0        1         16
gatekeeper-system    replicaset-gatekeeper-controller-manager-5b96bd668   Polaris   23m   0        1         16
kube-system          daemonset-aws-node                                   Polaris   25m   4        11        10
kube-system          daemonset-kube-proxy                                 Polaris   24m   1        1         3
kube-system          replicaset-aws-load-balancer-controller-85ff4bfbc7   Polaris   23m   0        2         15
kube-system          replicaset-aws-load-balancer-controller-dd979d56b    Polaris   23m   0        2         15
kube-system          replicaset-coredns-59847d77c8                        Polaris   23m   0        3         14
kube-system          replicaset-coredns-86f7d88d77                        Polaris   24m   0        3         14
starboard-operator   replicaset-starboard-operator-7fff5747c4             Polaris   20m   0        5         12
tigera-operator      replicaset-tigera-operator-657cc89589                Polaris   23m   1        10        6

Check the kube-bench scan results.

$ kubectl get ciskubebenchreports -o wide
NAME                                             SCANNER      AGE   FAIL   WARN   INFO   PASS
ip-10-1-108-42.ap-northeast-1.compute.internal   kube-bench   28m   0      38     0      14
ip-10-1-71-185.ap-northeast-1.compute.internal   kube-bench   28m   0      38     0      14

kube-hunter is not supported by the operator yet.

$ kubectl get kubehunterreports -o wide
No resources found

The Operator does not come with a dashboard, so viewing the results beyond this point works the same way as with the CLI.
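
Since the reports are ordinary custom resources, one quick way to summarize them is to post-process them with jq. A minimal sketch; the report.summary field names are assumed by analogy with the KubeHunterReport YAML shown later:

kubectl get vulnerabilityreports -A -o json \
  | jq -r '.items[]
      | select(.report.summary.criticalCount > 0)
      | "\(.metadata.namespace)/\(.metadata.name): \(.report.summary.criticalCount) critical"'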

Trying the Starboard CLI

Notes from trying out the Starboard CLI.

Starboard comes in two forms: a CLI and an Operator.

Installation

The CLI can be installed as a binary, as a kubectl plugin, from a container image, and so on; this time I downloaded the binary.
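
For example, something like the following; the exact release asset name per platform is an assumption, so check the GitHub releases page:

curl -LO https://github.com/aquasecurity/starboard/releases/download/v0.10.3/starboard_linux_x86_64.tar.gz
tar xzf starboard_linux_x86_64.tar.gz
sudo mv starboard /usr/local/bin/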

$ starboard version
Starboard Version: {Version:0.10.3 Commit:5bd33431a239b98be4a3287563b8664a9b3d5707 Date:2021-05-14T12:20:34Z}

Running starboard init sets up the cluster: a namespace is created, along with the CRDs for the security reports and ConfigMaps holding the configuration.

$ starboard init
$ k get cm -n starboard
NAME                       DATA   AGE
starboard                  11     8s
starboard-polaris-config   1      7s
$ kubectl api-resources --api-group aquasecurity.github.io
NAME                   SHORTNAMES    APIVERSION                        NAMESPACED   KIND
ciskubebenchreports    kubebench     aquasecurity.github.io/v1alpha1   false        CISKubeBenchReport
configauditreports     configaudit   aquasecurity.github.io/v1alpha1   true         ConfigAuditReport
kubehunterreports      kubehunter    aquasecurity.github.io/v1alpha1   false        KubeHunterReport
vulnerabilityreports   vuln,vulns    aquasecurity.github.io/v1alpha1   true         VulnerabilityReport

Running

Four kinds of scans can be run, as summarized below.
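
Each scan type has its own subcommand, one per CRD listed above; each is covered in detail in the sections that follow:

starboard scan vulnerabilityreports deployment/nginx   # Trivy
starboard scan configauditreports deployment/nginx     # Polaris
starboard scan ciskubebenchreports                     # kube-bench
starboard scan kubehunterreports                       # kube-hunter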

Trivy

Create an Nginx Deployment.

$ kubectl -n default create deployment nginx --image nginx:1.16
deployment.apps/nginx created

Run the scanner. Specifying a target resource is required.

starboard -n default scan vulnerabilityreports deployment/nginx

Behind the scenes, a scan Pod is run via a Job.
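
You can watch this happen in another terminal while the scan runs. A small sketch; the assumption is that the scan Job is created in the starboard namespace that starboard init set up:

kubectl get job,pod -n starboard -w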

Confirm that a report has been created. Reports are per container.

$ kubectl get vulnerabilityreports -o wide -A
NAMESPACE   NAME                     REPOSITORY      TAG    SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
default     deployment-nginx-nginx   library/nginx   1.16   Trivy     52s   13         45     29       92    0

Inspect the report. The output is long, so it is omitted here.

starboard -n default get vulnerabilities deployment/nginx -o yaml
# or
kubectl -n default get vulnerabilityreports deployment-nginx-nginx -o yaml

Polaris

Run Polaris. Specifying a target resource is required.

starboard -n default scan configauditreports deployment/nginx

Confirm that a report has been created.

$ kubectl get configauditreport -o wide -A
NAME               SCANNER   AGE   DANGER   WARNING   PASS
deployment-nginx   Polaris   24s   1        9         7

Inspect the report. The output is long, so it is omitted here.

starboard -n default get configaudit deployment/nginx -o yaml

HTML output

Output the report as HTML and view it.

starboard -n default get report deployment/nginx > nginx.deploy.html
open nginx.deploy.html

The results of both Trivy and Polaris are included in the HTML output.

f:id:sotoiwa:20210519071514p:plain

f:id:sotoiwa:20210519071530p:plain

Namespaces and Nodes can also be specified as targets.
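
For example, a report for an entire Namespace; a sketch, assuming the same syntax as the node target shown below:

starboard get report namespace/kube-system > kube-system.ns.html
open kube-system.ns.html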

kube-bench

Run the scanner. No target needs to be specified; it runs against every node.

starboard scan ciskubebenchreports

Confirm that reports have been created.

$ kubectl get ciskubebenchreports -o wide
NAME                                             SCANNER      AGE    FAIL   WARN   INFO   PASS
ip-10-1-108-42.ap-northeast-1.compute.internal   kube-bench   8m1s   0      38     0      14
ip-10-1-71-185.ap-northeast-1.compute.internal   kube-bench   8m1s   0      38     0      14

Inspect a report. The starboard get command does not support this report type. The output is long, so it is omitted here.

kubectl get ciskubebenchreports ip-10-1-108-42.ap-northeast-1.compute.internal -o yaml

HTML output is possible; support was merged only recently (#396).

starboard get report node/ip-10-1-108-42.ap-northeast-1.compute.internal > ip-10-1-108-42.ap-northeast-1.compute.internal.html
open ip-10-1-108-42.ap-northeast-1.compute.internal.html

f:id:sotoiwa:20210519071555p:plain

kube-hunter

Run the scanner. The target is the cluster itself, so there is nothing to specify.

starboard scan kubehunterreports

Confirm that a report has been created.

$ kubectl get kubehunterreports -o wide
NAME      SCANNER       AGE   HIGH   MEDIUM   LOW
cluster   kube-hunter   65s   0      1        1

Inspect the report.

$ kubectl get kubehunterreports cluster -o yaml
apiVersion: aquasecurity.github.io/v1alpha1
kind: KubeHunterReport
metadata:
  creationTimestamp: "2021-05-11T16:05:19Z"
  generation: 1
  labels:
    starboard.resource.kind: Cluster
    starboard.resource.name: cluster
  name: cluster
  resourceVersion: "44234551"
  selfLink: /apis/aquasecurity.github.io/v1alpha1/kubehunterreports/cluster
  uid: 37c647e2-f006-45d0-8f98-3a418d39bfb4
report:
  scanner:
    name: kube-hunter
    vendor: Aqua Security
    version: 0.4.1
  summary:
    highCount: 0
    lowCount: 1
    mediumCount: 1
    unknownCount: 0
  updateTimestamp: "2021-05-11T16:05:19Z"
  vulnerabilities:
  - avd_reference: https://avd.aquasec.com/kube-hunter/none/
    category: Access Risk
    description: |-
      CAP_NET_RAW is enabled by default for pods.
          If an attacker manages to compromise a pod,
          they could potentially take advantage of this capability to perform network
          attacks on other pods running on the same node
    evidence: ""
    severity: low
    vulnerability: CAP_NET_RAW Enabled
  - avd_reference: https://avd.aquasec.com/kube-hunter/khv002/
    category: Information Disclosure
    description: 'The kubernetes version could be obtained from the /version endpoint '
    evidence: v1.19.6-eks-49a6c0
    severity: medium
    vulnerability: K8s Version Disclosure

HTML output is not supported for this report type.

Cleanup

Clean up.

starboard cleanup

Trying Polaris

Notes from trying out Polaris.

Polaris can run in three modes: as a CLI, as a dashboard, and as an admission controller.

CLI

First, try the command-line tool.

brew tap FairwindsOps/tap
brew install FairwindsOps/tap/polaris

Create a sample manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the audit. By default, it expects a directory path to be specified.

$ polaris audit --format=pretty --audit-path ./


Polaris audited Path ./ at 0001-01-01T00:00:00Z
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 46

Deployment nginx in namespace 
    hostIPCSet                           🎉 Success
        Security - Host IPC is not configured
    hostNetworkSet                       🎉 Success
        Security - Host network is not configured
    hostPIDSet                           🎉 Success
        Security - Host PID is not configured
  Container nginx
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    runAsPrivileged                      🎉 Success
        Security - Not running as privileged
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    dangerousCapabilities                🎉 Success
        Security - Container does not have any dangerous capabilities
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    tagNotSpecified                      ❌ Danger
        Reliability - Image tag should be specified
    hostPortSet                          🎉 Success
        Security - Host port is not configured
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities

Dashboard

Install with manifests or with Helm.

helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace

Access the dashboard via port forwarding.

kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80

It gives an overview of the cluster like this.

f:id:sotoiwa:20210511185846p:plain

Results are also broken down by category.

f:id:sotoiwa:20210511185910p:plain

You can also see results for each workload in a Namespace.

f:id:sotoiwa:20210511185933p:plain

You don't have to run the dashboard in the cluster: you can also start a container locally that is granted access to the cluster and open the dashboard from there. With EKS, though, a little extra work may be needed because authentication goes through IAM.
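
A rough sketch of the local approach; the image tag, mount path, and flags are assumptions, so check the Polaris documentation:

docker run --rm -it -p 8080:8080 \
  -v $HOME/.kube/config:/opt/app/config:ro \
  quay.io/fairwinds/polaris:4.0 \
  polaris dashboard --kubeconfig /opt/app/config --port 8080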

Admission controller

Deploy with the chart or with manifests. cert-manager is required in the cluster.

helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace \
  --set webhook.enable=true --set dashboard.enable=false

A webhook has been created.

$ k get validatingwebhookconfiguration
NAME                                          WEBHOOKS   AGE
aws-load-balancer-webhook                     1          114d
cert-manager-webhook                          1          114d
gatekeeper-validating-webhook-configuration   2          4d23h
polaris-webhook                               1          91s
vpc-resource-validating-webhook               1          114d

Test with the Nginx manifest from earlier. It is blocked.

$ k apply -f nginx.yaml 
Error from server (
Polaris prevented this deployment due to configuration problems:
- Container nginx: Privilege escalation should not be allowed
- Container nginx: Image tag should be specified
): error when creating "nginx.yaml": admission webhook "polaris.fairwinds.com" denied the request: 
Polaris prevented this deployment due to configuration problems:
- Container nginx: Privilege escalation should not be allowed
- Container nginx: Image tag should be specified

The configuration can be customized by passing a config.yaml.
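
A config.yaml can change the severity of individual checks or disable them; the check names are the same ones that appear in the polaris audit output above. A minimal sketch; how the file is wired into the Helm chart is an assumption, so check the chart's values:

# config.yaml (sketch)
checks:
  tagNotSpecified: danger
  pullPolicyNotAlways: ignore

helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris \
  --set-file config=config.yaml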

Trying sKan

Notes from trying out Alcide sKan.

Installation

Download the binary and place it in a directory on your PATH.
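
For example, after downloading and extracting the release archive from the sKan releases page:

chmod +x skan
sudo mv skan /usr/local/bin/
skan --help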

Running

Create a sample manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the scan.

$ skan manifest --report-passed -f nginx.yaml
[skan-this] Analyzing resources from '1' files/directories.
[skan-this] Loaded '1' objects
[skan-this] Ops Conformance | Workload Readiness & Liveness
[skan-this] Ops Conformance | Workload Capacity Planning
[skan-this] Workload Software Supply Chain | Image Registry Whitelist
[skan-this] Ingress Controllers & Services | Ingress Security & Hardening Configuration
[skan-this] Ingress Controllers & Services | Ingress Controller (nginx) 
[skan-this] Ingress Controllers & Services | Service Resource Checks
[skan-this] Pod Security | Workload Hardening
[skan-this] Secret Hunting | Find Secrets in ConfigMaps
[skan-this] Secret Hunting | Find Secrets in Pod Environment Variables
[skan-this] Admission Controllers | Validating Admission Controllers
[skan-this] Admission Controllers | Mutating Admission Controllers
[skan-this] Generating report (html) and saving as 'skan-result.html'
[skan-this] Summary:
[skan-this] Critical .... 0
[skan-this] High ........ 4
[skan-this] Medium ...... 6
[skan-this] Low ......... 0
[skan-this] Pass ........ 6

An HTML report file is also generated, so the results can be viewed in a browser.

open skan-result.html

f:id:sotoiwa:20210511172500p:plain

The results can also be output as YAML or JSON.

skan manifest --report-passed -f nginx.yaml -o json --outputfile skan-result.json
skan manifest --report-passed -f nginx.yaml -o yaml --outputfile skan-result.yaml

AdvisorReportHeader:
  CreationTimeStamp: "2021-05-12T13:46:44+09:00"
  Info: nginx.yaml
  MSTimeStamp: 1620794804433
  ReportUID: 47155860-7de8-449f-b9d2-699fc0e2c754
  ScannerVersion: .
Reports:
  Ops Conformance:
    ResourceKind: Ops Conformance
    ResourceName: Ops Conformance
    ResourceNamespace: KubeAdvisor
    ResourceUID: dops.1
    Results:
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "1"
        CheckTitle: Liveness Probe Configured
        GroupId: "1"
        GroupTitle: Workload Readiness & Liveness
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.1.1.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing at least one Liveness Probe
        - '
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure liveness probe for your pod containers
        to ensure Pod liveness is managed and monitored by Kubernetes
      References:
      - https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.1.1.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "2"
        CheckTitle: Readiness Probe Configured
        GroupId: "1"
        GroupTitle: Workload Readiness & Liveness
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.1.2.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing at least one Readiness Probe
        - '
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure readiness probe for your pod containers
        to ensure Pod enter a ready state at the right time and stage
      References:
      - https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.1.2.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "1"
        CheckTitle: CPU Limit & Request
        GroupId: "2"
        GroupTitle: Workload Capacity Planning
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.2.1.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing a CPU request or limits definitions'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure CPU limit or CPU request to help
        Kubernetes scheduler have better resource centric scheduling decisions
      References:
      - https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.2.1.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "2"
        CheckTitle: Memory Limit & Request
        GroupId: "2"
        GroupTitle: Workload Capacity Planning
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.2.2.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing Memory request or limits definitions'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure memory limit or memory request
        to help Kubernetes scheduler have better resource centric scheduling decisions
      References:
      - https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.2.2.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  Pod Security:
    ResourceKind: Workload Hardening
    ResourceName: Pod Security
    ResourceNamespace: KubeAdvisor
    ResourceUID: psec.1
    Results:
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "1"
        CheckTitle: Host Namespace Isolation
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.1.1667744901853394230
      Message: '''Deployment.apps nginx'', Modifying the default Pod namespace isolation
        allows the processes in a pod to run as if they were running natively on the
        host.'
      Platform: Pod
      Recommendation: Deployment nginx - Set the following Pod attributes 'hostNetwork',
        'hostIPC', 'hostPID' to false.
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.1.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "3"
        CheckTitle: Privileged Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.3.1667744901853394230
      Message: "The container(s) ''\n\t\t\t\t\t\t\t                  \n                                              has
        'privileged' set to true in the SecurityContext."
      Platform: Pod
      Recommendation: Deployment nginx - Set the 'Privileged' attribute in the Pod's
        container configuration to 'false'
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.3.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "4"
        CheckTitle: High risk host file system mounts
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.4.1667744901853394230
      Message: '''Deployment.apps nginx'', mounts host directories that may impose
        higher risk level to the worker node - '''''
      Platform: Pod
      Recommendation: Deployment nginx - Adjust host volume mounts to comply with
        the blacklist, add an exception for this resource or use PodSecurityPolicy
        to deny admission for such workloads
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.4.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "5"
        CheckTitle: Non-Root Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.5.1667744901853394230
      Message: "Force Kubernetes to run containers as a non-root user to ensure least
        privilege - see container(s): 'nginx'\n\t\t\t\t\t\t\t                  \n
        \                                             "
      Platform: Pod
      Recommendation: Deployment nginx - The attribute 'runAsNonRoot' indicates whether
        the Kubernetes node agent will validate that the container images run as non-root.
        Container level security context settings are applied to the specific container
        and override settings made at the pod level where there is overlap
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.5.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "6"
        CheckTitle: Immutable Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.6.1667744901853394230
      Message: "An immutable root filesystem can prevent malicious binaries being
        added or overwrite existing binaries  - container(s): 'nginx'\n\t\t\t\t\t\t\t
        \                 \n                                              "
      Platform: Pod
      Recommendation: Deployment nginx - An immutable root filesystem prevents applications
        from writing to their local storage. In an exploit or intrusion event the
        attacker will not be able to tamper with the local filesystem or write foreign
        executables to disk. Set 'readOnlyRootFilesystem' to 'true' in your container
        securityContext
      References:
      - https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.6.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "7"
        CheckTitle: Run Container As User
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.7.1667744901853394230
      Message: "Set the user id to run the container process. This is the user id
        of the first process in the container   - container(s): 'nginx'\n\t\t\t\t\t\t\t
        \                 \n                                              "
      Platform: Pod
      Recommendation: Deployment nginx - Set the user id > 10000 and run the container
        with user id that differ from any host user id.  This setting can be configured
        using Pod SecurityContext for all containers and initContainers
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.7.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "9"
        CheckTitle: Service Account Automount
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.9.1667744901853394230
      Message: '''Deployment.apps nginx'' - automountServiceAccountToken is not set
        to ''false'' in your Pod Spec. Consider reducing Kubernetes API Server access
        surface by disabling automount of service account. When you create a pod,
        if you do not specify a service account, it is automatically assigned the
        default service account in the same namespace'
      Platform: Pod
      Recommendation: Deployment nginx - Set automountServiceAccountToken is to 'false'
        in your Pod Spec. Following on the least privileges principle - if your Pod
        require no access to Kubernetes API Server, avoid the default behavior, by
        disabling the automatic provisioning of service access token.
      References:
      - https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
      - https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.9.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/,https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "10"
        CheckTitle: Container Capabilities
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.10.1667744901853394230
      Message: '''Deployment.apps nginx'' - ''In container(s) ''nginx'' capabilities
        that should be dropped ''audit_write,chown,dac_override,fowner,fsetid,kill,mknod,net_bind_service,net_raw,net_broadcast,setfcap,setgid,setuid,setpcap,sys_chroot,sys_module,sys_boot,sys_time,sys_resource,ipc_lock,ipc_owner,sys_ptrace,block_suspend''
        or ''ALL'' and capabilities that one should avoid adding '''' '''
      Platform: Pod
      Recommendation: Deployment nginx - Review your resource security configuration,
        and specifically the securityContext of the various containers defined in
        it. If this is the intended behavior you can add this resource to check exception
        list
      References:
      - https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
      - https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.10.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/,https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "11"
        CheckTitle: Do Not Run Pods on Master Nodes
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.11.1667744901853394230
      Message: '''Deployment.apps nginx'', The Kubernetes master nodes are the control
        nodes of the entire cluster.  Therefore, only certain items should be permitted
        to run on these nodes. To effectively limit what can run on these nodes, taints
        are placed on the nodes.If you encounter the toleration below on a Pod specification
        in one of your deployment resources, and your cluster is self-managed, it
        should be explicitly granted'
      Platform: Pod
      Recommendation: Deployment nginx - If you encounter the toleration 'node-role.kubernetes.io/master:NoSchedule'
        on a Pod specification in one of your deployment resources, and your cluster
        is self-managed, it should be explicitly granted by adding the resource to
        the exception list
      References:
      - https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.11.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "12"
        CheckTitle: Container ProcMount Configuration
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.12.1667744901853394230
      Message: '''Deployment.apps nginx'' - procMount is set to Unmasked. Consider
        changing this to DefaultProcMount which uses the container runtime defaults
        for readonly and masked paths for /proc.'
      Platform: Pod
      Recommendation: Deployment nginx - Remove the Unmasked procMount configuration
        in the PodSecurityContext or the SecurityContext of any of the containers.
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.12.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
  Secret Hunting:
    ResourceKind: Secret
    ResourceName: Secret Hunting
    ResourceNamespace: KubeAdvisor
    ResourceUID: scrt.1
    Results:
    - Action: Alert
      Category: Secret
      Check:
        CheckId: "1"
        CheckTitle: Scan PodSpec Environment Variable
        GroupId: "2"
        GroupTitle: Find Secrets in Pod Environment Variables
        ModuleId: scrt.1
        ModuleTitle: Secret Hunting
      CheckId: scrt.1.2.1.1667744901853394230
      Message: 'This check hunts for secrets, api keys and passwords that may have
        been misplaced in environment variables. Check for - '
      Platform: Secret
      Recommendation: Deployment nginx - If check fails, you should consider using
        Secret resource instead of storing secrets in environment variables
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: scrt.1.2.1.1667744901853394230@1667744901853394230
      Severity: Pass
  Workload Software Supply Chain:
    ResourceKind: Cluster
    ResourceName: Workload Software Supply Chain
    ResourceNamespace: KubeAdvisor
    ResourceUID: sply.1
    Results:
    - Action: Alert
      Category: Cluster
      Check:
        CheckId: "1"
        CheckTitle: Container Image Registry Supply Chain Hygiene
        GroupId: "1"
        GroupTitle: Image Registry Whitelist
        ModuleId: sply.1
        ModuleTitle: Workload Software Supply Chain
      CheckId: sply.1.1.1.1667744901853394230
      Message: Verify that the container image(s) used by 'Deployment.apps nginx'
        provisioned from whitelisted registries - 'nginx in container nginx'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Add the image registries to the scan profile
        or push the images to one of the whitelisted registry
      References:
      - https://kubernetes.io/docs/concepts/containers/images
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: sply.1.1.1.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/concepts/containers/images

Trying kubesec

Notes from trying kubesec. The name kubesec is perhaps better known for shyiko/kubesec, which encrypts Secret manifests, but this is controlplaneio/kubesec.

Installation

It can be installed in the following ways:

  • Container image
  • Binary
  • Admission controller
  • kubectl plugin

It is also available as a SaaS.
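
For instance, the hosted service can be called with curl; this sketch assumes the v2.kubesec.io/scan endpoint from the kubesec docs:

curl -sSX POST --data-binary @nginx.yaml https://v2.kubesec.io/scan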

Here I downloaded the binary and placed it in a directory on my PATH.

Running

Create a sample manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the scan. The checks performed are listed in the documentation.

$ kubesec scan nginx.yaml
[
  {
    "object": "Deployment/nginx.default",
    "valid": true,
    "fileName": "nginx.yaml",
    "message": "Passed with a score of 0 points",
    "score": 0,
    "scoring": {
      "advise": [
        {
          "id": "ApparmorAny",
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
          "points": 3
        },
        {
          "id": "ServiceAccountName",
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
          "points": 3
        },
        {
          "id": "SeccompAny",
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
          "points": 1
        },
        {
          "id": "LimitsCPU",
          "selector": "containers[] .resources .limits .cpu",
          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "RequestsMemory",
          "selector": "containers[] .resources .limits .memory",
          "reason": "Enforcing memory limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "RequestsCPU",
          "selector": "containers[] .resources .requests .cpu",
          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "RequestsMemory",
          "selector": "containers[] .resources .requests .memory",
          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "CapDropAny",
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface",
          "points": 1
        },
        {
          "id": "CapDropAll",
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
          "points": 1
        },
        {
          "id": "ReadOnlyRootFilesystem",
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
          "points": 1
        },
        {
          "id": "RunAsNonRoot",
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege",
          "points": 1
        },
        {
          "id": "RunAsUser",
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table",
          "points": 1
        }
      ]
    }
  }
]

There does not seem to be a way to choose which checks are run.
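
The score, however, is easy to gate on in CI. A minimal sketch using jq against the JSON output shown above; the threshold is arbitrary:

score=$(kubesec scan nginx.yaml | jq '.[0].score')
if [ "$score" -lt 5 ]; then
  echo "kubesec score too low: $score"
  exit 1
fi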

Trying kubeaudit

Notes from trying out kubeaudit.

Installation

brew install kubeaudit

How to run

There are three modes.

  1. Manifest mode
  2. Local mode
  3. Cluster mode

Manifest mode

Create a sample manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the audit.

$ kubeaudit all -f nginx.yaml

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/nginx' should be added.
   Metadata:
      Container: nginx
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/nginx

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
   Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' on either the ServiceAccount or on the PodSpec or a non-default service account should be used.

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: nginx

-- [warning] ImageTagMissing
   Message: Image tag is missing.
   Metadata:
      Container: nginx

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: nginx

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: nginx

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: nginx

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: nginx

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: nginx

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod

The example above runs all, but the checks being performed can be seen in the command help below, and they are also described in the documentation.

$ kubeaudit
Kubeaudit audits Kubernetes clusters for common security controls.

kubeaudit has three modes:
  1. Manifest mode: If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file. Kubeaudit also supports autofixing in manifest mode using the 'autofix' command. This will fix the manifest in-place. The fixed manifest can be written to a different file using the -o/--out flag.
  2. Cluster mode: If kubeaudit detects it is running in a cluster, it will audit the other resources in the cluster.
  3. Local mode: kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the -c/--kubeconfig flag

Usage:
  kubeaudit [command]

Available Commands:
  all          Run all audits
  apparmor     Audit containers running without AppArmor
  asat         Audit pods using an automatically mounted default service account
  autofix      Automagically make a manifest secure
  capabilities Audit containers not dropping ALL capabilities
  help         Help about any command
  hostns       Audit pods with hostNetwork, hostIPC or hostPID enabled
  image        Audit containers not using a specified image:tag
  limits       Audit containers exceeding a specified CPU or memory limit
  mountds      Audit containers that mount /var/run/docker.sock
  mounts       Audit containers that mount sensitive paths
  netpols      Audit namespaces that do not have a default deny network policy
  nonroot      Audit containers allowing for root user
  privesc      Audit containers that allow privilege escalation
  privileged   Audit containers running as privileged
  rootfs       Audit containers not using a read only root filesystems
  seccomp      Audit containers running without Seccomp
  version      Prints the current kubeaudit version

Flags:
  -e, --exitcode int         Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default 2)
  -p, --format string        The output format to use (one of "pretty", "logrus", "json") (default "pretty")
  -h, --help                 help for kubeaudit
  -c, --kubeconfig string    Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config)
  -f, --manifest string      Path to the yaml configuration to audit. Only used in manifest mode.
  -m, --minseverity string   Set the lowest severity level to report (one of "error", "warning", "info") (default "info")
  -n, --namespace string     Only audit resources in the specified namespace. Not currently supported in manifest mode.

Use "kubeaudit [command] --help" for more information about a command.

There is also an autofix feature that fixes issues automatically.

kubeaudit autofix -f nginx.yaml -o nginx-fixed.yaml

The fixed file looks like the following. Note that things like resource limits are still not set; see the note after the manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: runtime/default
        seccomp.security.alpha.kubernetes.io/pod: runtime/default
    spec:
      containers:
        - image: nginx
          name: nginx
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
      automountServiceAccountToken: false
status: {}
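
As noted, autofix leaves resources empty, so the LimitsNotSet warning remains. To clear it you would set limits yourself in the container spec; the numbers below are placeholders, not recommendations:

          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 50m
              memory: 64Mi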

Cluster mode

Cluster mode simply means running kubeaudit as a container inside the cluster; the command you run is the same as in local mode. A minimal sketch of scheduling it as a CronJob is shown below.
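
A sketch only, assuming the public shopify/kubeaudit image and a kubeaudit ServiceAccount bound to read-only RBAC (both names are assumptions):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: kubeaudit
  namespace: kubeaudit
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kubeaudit  # hypothetical SA with read access to audited resources
          restartPolicy: Never
          containers:
            - name: kubeaudit
              image: shopify/kubeaudit   # assumption: public image name
              args: ["all"]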

Local mode

To point kubeaudit at a specific kubeconfig, pass -c "/path/to/config"; see the example below.
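
For example, auditing a non-default cluster and reporting only errors (the kubeconfig path here is hypothetical; both flags appear in the help output above):

kubeaudit all -c "$HOME/.kube/config-staging" -m error

Running kubeaudit all against the current context looks like this.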

$ kubeaudit all

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Daemonset
  metadata:
    name: aws-node
    namespace: kube-system

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/aws-node' should be added.
   Metadata:
      Container: aws-node
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/aws-node

-- [error] CapabilityShouldDropAll
   Message: Capability Drop list should be set to ALL. Add the specific ones you need to the Add list and set an override label.
   Metadata:
      Container: aws-node

-- [error] CapabilityAdded
   Message: Capability "NET_ADMIN" added. It should be removed from the capability add list. If you need this capability, add an override label such as 'container.audit.kubernetes.io/aws-node.allow-capability-net-admin: SomeReason'.
   Metadata:
      Container: aws-node
      Metadata: NET_ADMIN

-- [error] NamespaceHostNetworkTrue
   Message: hostNetwork is set to 'true' in PodSpec. It should be set to 'false'.

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: aws-node

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: aws-node

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: aws-node

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: aws-node

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: aws-node

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Daemonset
  metadata:
    name: kube-proxy
    namespace: kube-system

--------------------------------------------

(省略)

For the all and autofix subcommands, you can also pass a configuration file that specifies which auditors to run.

enabledAuditors:
  # Auditors are enabled by default if they are not explicitly set to "false"
  apparmor: false
  asat: false
  capabilities: false
  hostns: false
  image: false
  limits: false
  mounts: false
  netpols: false
  nonroot: false
  privesc: false
  privileged: false
  rootfs: true
  seccomp: true

$ kubeaudit all -k kubeaudit-config.yml -f nginx.yaml

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx

--------------------------------------------

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: nginx

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod

Trying AWS Secrets & Configuration Provider

Notes from trying the steps in the following blog post.

Preparation

Create a cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ascp
  region: ap-northeast-1
  version: "1.18"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
      enableSsm: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

eksctl create cluster -f cluster.yaml

Prepare the prerequisites.

First, create a secret in Secrets Manager.

aws secretsmanager create-secret \
  --region ap-northeast-1 \
  --name mysecret/mypasswd \
  --secret-string '{"username":"admin","password":"abcdef"}'
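
To double-check the stored value, it can be read back with get-secret-value:

aws secretsmanager get-secret-value \
  --region ap-northeast-1 \
  --secret-id mysecret/mypasswd \
  --query SecretString --output text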

Create an IAM policy that grants access to this secret.

cat <<EOF > mysecret-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:ap-northeast-1:XXXXXXXXXXXX:secret:mysecret*"
        }
    ]
}
EOF
aws iam create-policy --policy-name mysecret-policy --policy-document file://mysecret-policy.json
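
The policy ARN is needed when creating the IAM role in Step 1. If you want it in a variable rather than copying it from the output, one option is the following (the JMESPath filter is just an example):

POLICY_ARN=$(aws iam list-policies --scope Local \
  --query "Policies[?PolicyName=='mysecret-policy'].Arn" --output text)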

Create a test Namespace and a test ServiceAccount.

$ k create ns test
namespace/test created
$ kubens test
Context "sotosugi@ascp.ap-northeast-1.eksctl.io" modified.
Active namespace is "test".
$ k create sa test
serviceaccount/test created

Walkthrough

Step 1: Restrict Pod access using IAM Roles for Service Accounts (IRSA)

The OIDC provider has already been created.

Create an IAM role for the ServiceAccount.

$ eksctl create iamserviceaccount --name test --namespace test --cluster ascp --attach-policy-arn arn:aws:iam::190189382900:policy/mysecret-policy --approve --override-existing-serviceaccounts
2021-04-23 15:42:56 [ℹ]  eksctl version 0.45.0
2021-04-23 15:42:56 [ℹ]  using region ap-northeast-1
2021-04-23 15:42:57 [ℹ]  1 existing iamserviceaccount(s) (kube-system/aws-node) will be excluded
2021-04-23 15:42:57 [ℹ]  1 iamserviceaccount (test/test) was included (based on the include/exclude rules)
2021-04-23 15:42:57 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2021-04-23 15:42:57 [ℹ]  1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "test/test", create serviceaccount "test/test" } }
2021-04-23 15:42:57 [ℹ]  building iamserviceaccount stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-04-23 15:42:57 [ℹ]  deploying stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-04-23 15:42:57 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-04-23 15:43:14 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-04-23 15:43:31 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-04-23 15:43:32 [ℹ]  serviceaccount "test/test" already exists
2021-04-23 15:43:32 [ℹ]  updated serviceaccount "test/test"
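
To confirm that the role was attached, describe the ServiceAccount; its Annotations should include eks.amazonaws.com/role-arn pointing at the newly created role.

kubectl describe sa test -n test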

Step 2: Install the Kubernetes Secrets Store CSI driver

Add the chart repository.

$ helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
"secrets-store-csi-driver" has been added to your repositories

Check the chart.

$ helm search repo secrets-store-csi-driver
NAME                                                    CHART VERSION   APP VERSION     DESCRIPTION                                       
secrets-store-csi-driver/secrets-store-csi-driver       0.0.21          0.0.21          A Helm chart to install the SecretsStore CSI Dr...

Check the chart's parameters.

$ helm inspect values secrets-store-csi-driver/secrets-store-csi-driver
linux:
  enabled: true
  image:
    repository: k8s.gcr.io/csi-secrets-store/driver
    tag: v0.0.21
    pullPolicy: Always

  driver:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 50m
        memory: 100Mi

  registrarImage:
    repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
    tag: v2.1.0
    pullPolicy: Always

  registrar:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi
    logVerbosity: 5

  livenessProbeImage:
    repository: k8s.gcr.io/sig-storage/livenessprobe
    tag: v2.2.0
    pullPolicy: Always

  livenessProbe:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

  kubeletRootDir: /var/lib/kubelet
  providersDir: /etc/kubernetes/secrets-store-csi-providers
  nodeSelector: {}
  tolerations: []
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}

windows:
  enabled: false
  image:
    repository: k8s.gcr.io/csi-secrets-store/driver
    tag: v0.0.21
    pullPolicy: IfNotPresent

  driver:
    resources:
      limits:
        cpu: 400m
        memory: 400Mi
      requests:
        cpu: 50m
        memory: 100Mi

  registrarImage:
    repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
    tag: v2.1.0
    pullPolicy: IfNotPresent

  registrar:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 10m
        memory: 20Mi
    logVerbosity: 5

  livenessProbeImage:
    repository: k8s.gcr.io/sig-storage/livenessprobe
    tag: v2.2.0
    pullPolicy: IfNotPresent

  livenessProbe:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 10m
        memory: 20Mi

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

  kubeletRootDir: C:\var\lib\kubelet
  providersDir: C:\k\secrets-store-csi-providers
  nodeSelector: {}
  tolerations: []
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}

# log level. Uses V logs (klog)
logVerbosity: 0

# logging format JSON
logFormatJSON: false

livenessProbe:
  port: 9808
  logLevel: 2

## Install Default RBAC roles and bindings
rbac:
  install: true

## Install RBAC roles and bindings required for K8S Secrets syncing. Change this
## to false after v0.0.14
syncSecret:
  enabled: true

## [DEPRECATED] Minimum Provider Versions (optional)
## A comma delimited list of key-value pairs of minimum provider versions
## e.g. provider1=0.0.2,provider2=0.0.3
minimumProviderVersions:

## Enable secret rotation feature [alpha]
enableSecretRotation: false

## Secret rotation poll interval duration
rotationPollInterval:

## Filtered watch nodePublishSecretRef secrets
filteredWatchSecret: false

Deploy with the default settings.

$ helm -n kube-system install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
NAME: csi-secrets-store
LAST DEPLOYED: Fri Apr 23 15:49:08 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Secrets Store CSI Driver is getting deployed to your cluster.

To verify that Secrets Store CSI Driver has started, run:

  kubectl --namespace=kube-system get pods -l "app=secrets-store-csi-driver"

Now you can follow these steps https://secrets-store-csi-driver.sigs.k8s.io/getting-started/usage.html
to create a SecretProviderClass resource, and a deployment using the SecretProviderClass.

Verify the installation. The DaemonSet Pods are running.

$ kubectl get po --namespace=kube-system
NAME                                               READY   STATUS    RESTARTS   AGE
aws-node-gr7ks                                     1/1     Running   0          24m
aws-node-jmvkr                                     1/1     Running   0          24m
coredns-59847d77c8-tqxbn                           1/1     Running   0          32m
coredns-59847d77c8-vcmts                           1/1     Running   0          32m
csi-secrets-store-secrets-store-csi-driver-6ql5t   3/3     Running   0          60s
csi-secrets-store-secrets-store-csi-driver-lw9b6   3/3     Running   0          60s
kube-proxy-2kcdp                                   1/1     Running   0          24m
kube-proxy-7xbtl                                   1/1     Running   0          24m

Check the CRDs.

$ kubectl get crd
NAME                                                        CREATED AT
eniconfigs.crd.k8s.amazonaws.com                            2021-04-23T06:17:44Z
secretproviderclasses.secrets-store.csi.x-k8s.io            2021-04-23T06:49:09Z
secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io   2021-04-23T06:49:09Z
securitygrouppolicies.vpcresources.k8s.aws                  2021-04-23T06:17:47Z
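
A side note on the chart values inspected above: enableSecretRotation defaults to false. To try the alpha rotation feature, you could presumably enable it at upgrade time like this (the interval value is an arbitrary example):

helm -n kube-system upgrade csi-secrets-store \
  secrets-store-csi-driver/secrets-store-csi-driver \
  --set enableSecretRotation=true \
  --set rotationPollInterval=2m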

Step 3: Install the AWS Secrets & Configuration Provider

Install the AWS Secrets & Configuration Provider.

$ curl -s https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml | kubectl apply -f -
serviceaccount/csi-secrets-store-provider-aws created
clusterrole.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-rolebinding created
daemonset.apps/csi-secrets-store-provider-aws created

Check the Pods.

$ k get pod -n kube-system
NAME                                               READY   STATUS    RESTARTS   AGE
aws-node-gr7ks                                     1/1     Running   0          55m
aws-node-jmvkr                                     1/1     Running   0          55m
coredns-59847d77c8-tqxbn                           1/1     Running   0          62m
coredns-59847d77c8-vcmts                           1/1     Running   0          62m
csi-secrets-store-provider-aws-gnz97               1/1     Running   0          22m
csi-secrets-store-provider-aws-lmr5r               1/1     Running   0          22m
csi-secrets-store-secrets-store-csi-driver-6ql5t   3/3     Running   0          31m
csi-secrets-store-secrets-store-csi-driver-lw9b6   3/3     Running   0          31m
kube-proxy-2kcdp                                   1/1     Running   0          55m
kube-proxy-7xbtl                                   1/1     Running   0          55m

Step 4: Create and deploy a SecretProviderClass custom resource

Create the SecretProviderClass custom resource.

cat <<EOF > aws-secrets.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:                    # provider-specific parameters
    objects:  |
      - objectName: "mysecret/mypasswd"
        objectType: "secretsmanager"
EOF
$ k apply -f aws-secrets.yaml
secretproviderclass.secrets-store.csi.x-k8s.io/aws-secrets created
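
As the next step will show, the mounted file name defaults to the objectName with / replaced by _. If you prefer a different file name, the provider also supports an objectAlias parameter; a sketch of the relevant fragment:

  parameters:
    objects: |
      - objectName: "mysecret/mypasswd"
        objectType: "secretsmanager"
        objectAlias: "mypasswd"  # mounted as mypasswd instead of mysecret_mypasswd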

Step 5: Configure and deploy a Pod that mounts a volume based on the configured secret

cat <<EOF > nginx-secrets-store-inline.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: test
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: mysecret2
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
    - name: mysecret2
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "aws-secrets"
EOF
$ k apply -f nginx-secrets-store-inline.yaml
pod/nginx-secrets-store-inline created

Verify that the secret has been mounted.

$ kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
mysecret_mypasswd
$ kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/mysecret_mypasswd
{"username":"admin","password":"abcdef"}

Miscellaneous

It seems the mounted objects can also be synced to a Kubernetes Secret; a sketch follows.
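
A minimal sketch of the sync feature using the secretObjects field of SecretProviderClass (the Secret name and key are assumptions; syncSecret.enabled was already true in the chart values above). Note that the Kubernetes Secret is only created while a Pod actually mounts the volume:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets-synced
spec:
  provider: aws
  secretObjects:
    - secretName: mypasswd-k8s-secret  # hypothetical Secret to create
      type: Opaque
      data:
        - objectName: mypasswd         # matches the objectAlias below
          key: credentials
  parameters:
    objects: |
      - objectName: "mysecret/mypasswd"
        objectType: "secretsmanager"
        objectAlias: "mypasswd"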