Trying Polaris

Notes from trying out Polaris.

Polaris can run in three modes: CLI, dashboard, and admission controller.

CLI

First, try the command-line tool.

brew tap FairwindsOps/tap
brew install FairwindsOps/tap/polaris

Create a quick test manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the audit. By default, the audit path is given as a directory.

$ polaris audit --format=pretty --audit-path ./


Polaris audited Path ./ at 0001-01-01T00:00:00Z
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 46

Deployment nginx in namespace 
    hostIPCSet                           🎉 Success
        Security - Host IPC is not configured
    hostNetworkSet                       🎉 Success
        Security - Host network is not configured
    hostPIDSet                           🎉 Success
        Security - Host PID is not configured
  Container nginx
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    runAsPrivileged                      🎉 Success
        Security - Not running as privileged
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    dangerousCapabilities                🎉 Success
        Security - Container does not have any dangerous capabilities
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    tagNotSpecified                      ❌ Danger
        Reliability - Image tag should be specified
    hostPortSet                          🎉 Success
        Security - Host port is not configured
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities

Dashboard

Install it with manifests or with Helm.

helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace

Access the dashboard via port forwarding.

kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80

You get a cluster overview like this.

f:id:sotoiwa:20210511185846p:plain

Results are also broken down by category.

f:id:sotoiwa:20210511185910p:plain

You can also see the results for each workload in a Namespace.

f:id:sotoiwa:20210511185933p:plain

You don't have to run the dashboard inside the cluster: you can also start a container locally that has access to the cluster and open the dashboard from there. With EKS, though, some extra work may be needed because authentication goes through IAM.
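For example, with the binary installed earlier, something like this should serve the dashboard locally against the current kubeconfig context (a sketch; the dashboard subcommand and --port flag are per the Polaris docs):

polaris dashboard --port 8080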

Admission controller

It is deployed with the Helm chart or with manifests. cert-manager is required in the cluster.
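If cert-manager is not installed yet, something like the following would install it (the version here is only an example):

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml

With cert-manager in place, deploy Polaris with the webhook enabled: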

helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace \
  --set webhook.enable=true --set dashboard.enable=false

A webhook has been created.

$ k get validatingwebhookconfiguration
NAME                                          WEBHOOKS   AGE
aws-load-balancer-webhook                     1          114d
cert-manager-webhook                          1          114d
gatekeeper-validating-webhook-configuration   2          4d23h
polaris-webhook                               1          91s
vpc-resource-validating-webhook               1          114d

Try it with the nginx manifest from earlier. It gets blocked.

$ k apply -f nginx.yaml 
Error from server (
Polaris prevented this deployment due to configuration problems:
- Container nginx: Privilege escalation should not be allowed
- Container nginx: Image tag should be specified
): error when creating "nginx.yaml": admission webhook "polaris.fairwinds.com" denied the request: 
Polaris prevented this deployment due to configuration problems:
- Container nginx: Privilege escalation should not be allowed
- Container nginx: Image tag should be specified
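
To get this manifest past the webhook, at minimum the image tag needs to be pinned and allowPrivilegeEscalation set to false. A sketch of the relevant part of the pod template (the tag is just an example):

    spec:
      containers:
        - name: nginx
          image: nginx:1.20
          securityContext:
            allowPrivilegeEscalation: false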

The configuration can be customized by passing a config.yaml.
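A minimal sketch of such a config.yaml, using check names from the audit output above (severities are danger, warning, or ignore, and exemptions can also be declared):

checks:
  runAsRootAllowed: danger     # promote from warning
  cpuRequestsMissing: ignore   # stop reporting this one
exemptions:
  - controllerNames:
      - nginx
    rules:
      - livenessProbeMissing

It is passed with the --config flag:

polaris audit --format=pretty --audit-path ./ --config config.yaml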

Trying sKan

Notes from trying out Alcide sKan.

Installation

Download the binary and place it in a directory on your PATH.
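Roughly what I did (the release archive layout is an assumption; check the actual assets on the releases page):

# download the archive for your platform from https://github.com/alcideio/skan/releases
tar -xzf skan_*.tar.gz
sudo mv skan /usr/local/bin/
skan --help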

Running it

Create a quick test manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the scan.

$ skan manifest --report-passed -f nginx.yaml
[skan-this] Analyzing resources from '1' files/directories.
[skan-this] Loaded '1' objects
[skan-this] Ops Conformance | Workload Readiness & Liveness
[skan-this] Ops Conformance | Workload Capacity Planning
[skan-this] Workload Software Supply Chain | Image Registry Whitelist
[skan-this] Ingress Controllers & Services | Ingress Security & Hardening Configuration
[skan-this] Ingress Controllers & Services | Ingress Controller (nginx) 
[skan-this] Ingress Controllers & Services | Service Resource Checks
[skan-this] Pod Security | Workload Hardening
[skan-this] Secret Hunting | Find Secrets in ConfigMaps
[skan-this] Secret Hunting | Find Secrets in Pod Environment Variables
[skan-this] Admission Controllers | Validating Admission Controllers
[skan-this] Admission Controllers | Mutating Admission Controllers
[skan-this] Generating report (html) and saving as 'skan-result.html'
[skan-this] Summary:
[skan-this] Critical .... 0
[skan-this] High ........ 4
[skan-this] Medium ...... 6
[skan-this] Low ......... 0
[skan-this] Pass ........ 6

An HTML report file is also generated, so the results can be viewed in a browser.

open skan-result.html

f:id:sotoiwa:20210511172500p:plain

The results can also be output as YAML or JSON.

skan manifest --report-passed -f nginx.yaml -o json --outputfile skan-result.json
skan manifest --report-passed -f nginx.yaml -o yaml --outputfile skan-result.yaml

AdvisorReportHeader:
  CreationTimeStamp: "2021-05-12T13:46:44+09:00"
  Info: nginx.yaml
  MSTimeStamp: 1620794804433
  ReportUID: 47155860-7de8-449f-b9d2-699fc0e2c754
  ScannerVersion: .
Reports:
  Ops Conformance:
    ResourceKind: Ops Conformance
    ResourceName: Ops Conformance
    ResourceNamespace: KubeAdvisor
    ResourceUID: dops.1
    Results:
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "1"
        CheckTitle: Liveness Probe Configured
        GroupId: "1"
        GroupTitle: Workload Readiness & Liveness
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.1.1.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing at least one Liveness Probe
        - '
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure liveness probe for your pod containers
        to ensure Pod liveness is managed and monitored by Kubernetes
      References:
      - https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.1.1.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "2"
        CheckTitle: Readiness Probe Configured
        GroupId: "1"
        GroupTitle: Workload Readiness & Liveness
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.1.2.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing at least one Readiness Probe
        - '
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure readiness probe for your pod containers
        to ensure Pod enter a ready state at the right time and stage
      References:
      - https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.1.2.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "1"
        CheckTitle: CPU Limit & Request
        GroupId: "2"
        GroupTitle: Workload Capacity Planning
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.2.1.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing a CPU request or limits definitions'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure CPU limit or CPU request to help
        Kubernetes scheduler have better resource centric scheduling decisions
      References:
      - https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.2.1.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    - Action: Alert
      Category: Ops Conformance
      Check:
        CheckId: "2"
        CheckTitle: Memory Limit & Request
        GroupId: "2"
        GroupTitle: Workload Capacity Planning
        ModuleId: dops.1
        ModuleTitle: Ops Conformance
      CheckId: dops.1.2.2.1667744901853394230
      Message: '''Deployment.apps nginx'', is missing Memory request or limits definitions'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Configure memory limit or memory request
        to help Kubernetes scheduler have better resource centric scheduling decisions
      References:
      - https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: dops.1.2.2.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  Pod Security:
    ResourceKind: Workload Hardening
    ResourceName: Pod Security
    ResourceNamespace: KubeAdvisor
    ResourceUID: psec.1
    Results:
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "1"
        CheckTitle: Host Namespace Isolation
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.1.1667744901853394230
      Message: '''Deployment.apps nginx'', Modifying the default Pod namespace isolation
        allows the processes in a pod to run as if they were running natively on the
        host.'
      Platform: Pod
      Recommendation: Deployment nginx - Set the following Pod attributes 'hostNetwork',
        'hostIPC', 'hostPID' to false.
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.1.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "3"
        CheckTitle: Privileged Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.3.1667744901853394230
      Message: "The container(s) ''\n\t\t\t\t\t\t\t                  \n                                              has
        'privileged' set to true in the SecurityContext."
      Platform: Pod
      Recommendation: Deployment nginx - Set the 'Privileged' attribute in the Pod's
        container configuration to 'false'
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.3.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "4"
        CheckTitle: High risk host file system mounts
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.4.1667744901853394230
      Message: '''Deployment.apps nginx'', mounts host directories that may impose
        higher risk level to the worker node - '''''
      Platform: Pod
      Recommendation: Deployment nginx - Adjust host volume mounts to comply with
        the blacklist, add an exception for this resource or use PodSecurityPolicy
        to deny admission for such workloads
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.4.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "5"
        CheckTitle: Non-Root Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.5.1667744901853394230
      Message: "Force Kubernetes to run containers as a non-root user to ensure least
        privilege - see container(s): 'nginx'\n\t\t\t\t\t\t\t                  \n
        \                                             "
      Platform: Pod
      Recommendation: Deployment nginx - The attribute 'runAsNonRoot' indicates whether
        the Kubernetes node agent will validate that the container images run as non-root.
        Container level security context settings are applied to the specific container
        and override settings made at the pod level where there is overlap
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.5.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "6"
        CheckTitle: Immutable Containers
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.6.1667744901853394230
      Message: "An immutable root filesystem can prevent malicious binaries being
        added or overwrite existing binaries  - container(s): 'nginx'\n\t\t\t\t\t\t\t
        \                 \n                                              "
      Platform: Pod
      Recommendation: Deployment nginx - An immutable root filesystem prevents applications
        from writing to their local storage. In an exploit or intrusion event the
        attacker will not be able to tamper with the local filesystem or write foreign
        executables to disk. Set 'readOnlyRootFilesystem' to 'true' in your container
        securityContext
      References:
      - https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.6.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "7"
        CheckTitle: Run Container As User
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.7.1667744901853394230
      Message: "Set the user id to run the container process. This is the user id
        of the first process in the container   - container(s): 'nginx'\n\t\t\t\t\t\t\t
        \                 \n                                              "
      Platform: Pod
      Recommendation: Deployment nginx - Set the user id > 10000 and run the container
        with user id that differ from any host user id.  This setting can be configured
        using Pod SecurityContext for all containers and initContainers
      References:
      - https://kubernetes.io/docs/concepts/policy/security-context/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.7.1667744901853394230@1667744901853394230
      Severity: Medium
      Url: https://kubernetes.io/docs/concepts/policy/security-context/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "9"
        CheckTitle: Service Account Automount
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.9.1667744901853394230
      Message: '''Deployment.apps nginx'' - automountServiceAccountToken is not set
        to ''false'' in your Pod Spec. Consider reducing Kubernetes API Server access
        surface by disabling automount of service account. When you create a pod,
        if you do not specify a service account, it is automatically assigned the
        default service account in the same namespace'
      Platform: Pod
      Recommendation: Deployment nginx - Set automountServiceAccountToken is to 'false'
        in your Pod Spec. Following on the least privileges principle - if your Pod
        require no access to Kubernetes API Server, avoid the default behavior, by
        disabling the automatic provisioning of service access token.
      References:
      - https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
      - https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.9.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/,https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "10"
        CheckTitle: Container Capabilities
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.10.1667744901853394230
      Message: '''Deployment.apps nginx'' - ''In container(s) ''nginx'' capabilities
        that should be dropped ''audit_write,chown,dac_override,fowner,fsetid,kill,mknod,net_bind_service,net_raw,net_broadcast,setfcap,setgid,setuid,setpcap,sys_chroot,sys_module,sys_boot,sys_time,sys_resource,ipc_lock,ipc_owner,sys_ptrace,block_suspend''
        or ''ALL'' and capabilities that one should avoid adding '''' '''
      Platform: Pod
      Recommendation: Deployment nginx - Review your resource security configuration,
        and specifically the securityContext of the various containers defined in
        it. If this is the intended behavior you can add this resource to check exception
        list
      References:
      - https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
      - https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities
      - https://github.com/alcideio/advisor/tree/master/examples
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.10.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/,https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities,https://github.com/alcideio/advisor/tree/master/examples
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "11"
        CheckTitle: Do Not Run Pods on Master Nodes
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.11.1667744901853394230
      Message: '''Deployment.apps nginx'', The Kubernetes master nodes are the control
        nodes of the entire cluster.  Therefore, only certain items should be permitted
        to run on these nodes. To effectively limit what can run on these nodes, taints
        are placed on the nodes.If you encounter the toleration below on a Pod specification
        in one of your deployment resources, and your cluster is self-managed, it
        should be explicitly granted'
      Platform: Pod
      Recommendation: Deployment nginx - If you encounter the toleration 'node-role.kubernetes.io/master:NoSchedule'
        on a Pod specification in one of your deployment resources, and your cluster
        is self-managed, it should be explicitly granted by adding the resource to
        the exception list
      References:
      - https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.11.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    - Action: Alert
      Category: Workload Hardening
      Check:
        CheckId: "12"
        CheckTitle: Container ProcMount Configuration
        GroupId: "1"
        GroupTitle: Workload Hardening
        ModuleId: psec.1
        ModuleTitle: Pod Security
      CheckId: psec.1.1.12.1667744901853394230
      Message: '''Deployment.apps nginx'' - procMount is set to Unmasked. Consider
        changing this to DefaultProcMount which uses the container runtime defaults
        for readonly and masked paths for /proc.'
      Platform: Pod
      Recommendation: Deployment nginx - Remove the Unmasked procMount configuration
        in the PodSecurityContext or the SecurityContext of any of the containers.
      References:
      - https://kubernetes.io/docs/concepts/policy/pod-security-policy/
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: psec.1.1.12.1667744901853394230@1667744901853394230
      Severity: Pass
      Url: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
  Secret Hunting:
    ResourceKind: Secret
    ResourceName: Secret Hunting
    ResourceNamespace: KubeAdvisor
    ResourceUID: scrt.1
    Results:
    - Action: Alert
      Category: Secret
      Check:
        CheckId: "1"
        CheckTitle: Scan PodSpec Environment Variable
        GroupId: "2"
        GroupTitle: Find Secrets in Pod Environment Variables
        ModuleId: scrt.1
        ModuleTitle: Secret Hunting
      CheckId: scrt.1.2.1.1667744901853394230
      Message: 'This check hunts for secrets, api keys and passwords that may have
        been misplaced in environment variables. Check for - '
      Platform: Secret
      Recommendation: Deployment nginx - If check fails, you should consider using
        Secret resource instead of storing secrets in environment variables
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: scrt.1.2.1.1667744901853394230@1667744901853394230
      Severity: Pass
  Workload Software Supply Chain:
    ResourceKind: Cluster
    ResourceName: Workload Software Supply Chain
    ResourceNamespace: KubeAdvisor
    ResourceUID: sply.1
    Results:
    - Action: Alert
      Category: Cluster
      Check:
        CheckId: "1"
        CheckTitle: Container Image Registry Supply Chain Hygiene
        GroupId: "1"
        GroupTitle: Image Registry Whitelist
        ModuleId: sply.1
        ModuleTitle: Workload Software Supply Chain
      CheckId: sply.1.1.1.1667744901853394230
      Message: Verify that the container image(s) used by 'Deployment.apps nginx'
        provisioned from whitelisted registries - 'nginx in container nginx'
      Platform: Kubernetes
      Recommendation: Deployment nginx - Add the image registries to the scan profile
        or push the images to one of the whitelisted registry
      References:
      - https://kubernetes.io/docs/concepts/containers/images
      Resource:
        Group: apps
        Kind: Deployment
        Labels:
          app: nginx
        Name: nginx
        Version: v1
      ResultUID: sply.1.1.1.1667744901853394230@1667744901853394230
      Severity: High
      Url: https://kubernetes.io/docs/concepts/containers/images

Trying kubesec

Notes from trying out kubesec. The name kubesec may bring to mind shyiko/kubesec, which encrypts Secret manifests, but this is about controlplaneio/kubesec.

Installation

It can be installed in the following ways:

  • Container image
  • Binary
  • Admission controller
  • kubectl plugin

It is also available as a SaaS.
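For reference, sketches of the container image and SaaS routes (the image tag follows the project README; treat it as an assumption):

docker run -i kubesec/kubesec:v2 scan /dev/stdin < nginx.yaml

curl -sSX POST --data-binary @nginx.yaml https://v2.kubesec.io/scan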

Here, I downloaded the binary and placed it in a directory on my PATH.

Running it

Create a quick test manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the scan. The checks it performs are listed in the documentation.

$ kubesec scan nginx.yaml
[
  {
    "object": "Deployment/nginx.default",
    "valid": true,
    "fileName": "nginx.yaml",
    "message": "Passed with a score of 0 points",
    "score": 0,
    "scoring": {
      "advise": [
        {
          "id": "ApparmorAny",
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
          "points": 3
        },
        {
          "id": "ServiceAccountName",
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
          "points": 3
        },
        {
          "id": "SeccompAny",
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
          "points": 1
        },
        {
          "id": "LimitsCPU",
          "selector": "containers[] .resources .limits .cpu",
          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "RequestsMemory",
          "selector": "containers[] .resources .limits .memory",
          "reason": "Enforcing memory limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "RequestsCPU",
          "selector": "containers[] .resources .requests .cpu",
          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "RequestsMemory",
          "selector": "containers[] .resources .requests .memory",
          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "CapDropAny",
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface",
          "points": 1
        },
        {
          "id": "CapDropAll",
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
          "points": 1
        },
        {
          "id": "ReadOnlyRootFilesystem",
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
          "points": 1
        },
        {
          "id": "RunAsNonRoot",
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege",
          "points": 1
        },
        {
          "id": "RunAsUser",
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table",
          "points": 1
        }
      ]
    }
  }
]

There doesn't seem to be a way to choose which checks to run.
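For reference, a sketch of container-level settings that would pick up most of the advised points above (values are examples; serviceAccountName and the seccomp/AppArmor annotations would cover the rest):

      containers:
        - name: nginx
          image: nginx
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
          securityContext:
            runAsNonRoot: true
            runAsUser: 10001          # high UID, per the RunAsUser advice
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL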

Trying kubeaudit

Notes from trying out kubeaudit.

Installation

brew install kubeaudit

How to run it

There are three modes:

  1. Manifest mode
  2. Local mode
  3. Cluster mode

Manifest mode

Create a quick test manifest.

k create deploy nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

Run the audit.

$ kubeaudit all -f nginx.yaml

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/nginx' should be added.
   Metadata:
      Container: nginx
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/nginx

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
   Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' on either the ServiceAccount or on the PodSpec or a non-default service account should be used.

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: nginx

-- [warning] ImageTagMissing
   Message: Image tag is missing.
   Metadata:
      Container: nginx

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: nginx

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: nginx

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: nginx

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: nginx

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: nginx

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod

The above runs "all"; which checks are performed can be seen in the command help below and is also described in the documentation.

$ kubeaudit
Kubeaudit audits Kubernetes clusters for common security controls.

kubeaudit has three modes:
  1. Manifest mode: If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file. Kubeaudit also supports autofixing in manifest mode using the 'autofix' command. This will fix the manifest in-place. The fixed manifest can be written to a different file using the -o/--out flag.
  2. Cluster mode: If kubeaudit detects it is running in a cluster, it will audit the other resources in the cluster.
  3. Local mode: kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the -c/--kubeconfig flag

Usage:
  kubeaudit [command]

Available Commands:
  all          Run all audits
  apparmor     Audit containers running without AppArmor
  asat         Audit pods using an automatically mounted default service account
  autofix      Automagically make a manifest secure
  capabilities Audit containers not dropping ALL capabilities
  help         Help about any command
  hostns       Audit pods with hostNetwork, hostIPC or hostPID enabled
  image        Audit containers not using a specified image:tag
  limits       Audit containers exceeding a specified CPU or memory limit
  mountds      Audit containers that mount /var/run/docker.sock
  mounts       Audit containers that mount sensitive paths
  netpols      Audit namespaces that do not have a default deny network policy
  nonroot      Audit containers allowing for root user
  privesc      Audit containers that allow privilege escalation
  privileged   Audit containers running as privileged
  rootfs       Audit containers not using a read only root filesystems
  seccomp      Audit containers running without Seccomp
  version      Prints the current kubeaudit version

Flags:
  -e, --exitcode int         Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default 2)
  -p, --format string        The output format to use (one of "pretty", "logrus", "json") (default "pretty")
  -h, --help                 help for kubeaudit
  -c, --kubeconfig string    Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config)
  -f, --manifest string      Path to the yaml configuration to audit. Only used in manifest mode.
  -m, --minseverity string   Set the lowest severity level to report (one of "error", "warning", "info") (default "info")
  -n, --namespace string     Only audit resources in the specified namespace. Not currently supported in manifest mode.

Use "kubeaudit [command] --help" for more information about a command.

There is also a feature to fix manifests automatically.

kubeaudit autofix -f nginx.yaml -o nginx-fixed.yaml

The fixed file looks like the following. Note that resource limits and the like are still not set.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: runtime/default
        seccomp.security.alpha.kubernetes.io/pod: runtime/default
    spec:
      containers:
        - image: nginx
          name: nginx
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
      automountServiceAccountToken: false
status: {}
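
Since autofix leaves resources: {} untouched, limits and requests still have to be added by hand. A sketch of what that part could look like (values are examples):

          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi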

Cluster mode

What is called cluster mode just means running kubeaudit as a container inside the cluster; the command to run is the same as in local mode.
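A sketch of running it that way as a CronJob, assuming the shopify/kubeaudit image and a ServiceAccount that already has read access to the cluster's workloads (both are assumptions to verify against the kubeaudit docs):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: kubeaudit
  namespace: kubeaudit
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kubeaudit   # assumed to already have read RBAC
          restartPolicy: Never
          containers:
            - name: kubeaudit
              image: shopify/kubeaudit:latest   # tag is an assumption
              args: ["all"]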

Local mode

To point at a specific kubeconfig, pass it with -c "/path/to/config".

$ kubeaudit all

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Daemonset
  metadata:
    name: aws-node
    namespace: kube-system

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/aws-node' should be added.
   Metadata:
      Container: aws-node
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/aws-node

-- [error] CapabilityShouldDropAll
   Message: Capability Drop list should be set to ALL. Add the specific ones you need to the Add list and set an override label.
   Metadata:
      Container: aws-node

-- [error] CapabilityAdded
   Message: Capability "NET_ADMIN" added. It should be removed from the capability add list. If you need this capability, add an override label such as 'container.audit.kubernetes.io/aws-node.allow-capability-net-admin: SomeReason'.
   Metadata:
      Container: aws-node
      Metadata: NET_ADMIN

-- [error] NamespaceHostNetworkTrue
   Message: hostNetwork is set to 'true' in PodSpec. It should be set to 'false'.

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: aws-node

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: aws-node

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: aws-node

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: aws-node

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: aws-node

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Daemonset
  metadata:
    name: kube-proxy
    namespace: kube-system

--------------------------------------------

(snipped)

With the all and autofix subcommands, you can also pass a config file that sets which auditors to run.

enabledAuditors:
  # Auditors are enabled by default if they are not explicitly set to "false"
  apparmor: false
  asat: false
  capabilities: false
  hostns: false
  image: false
  limits: false
  mounts: false
  netpols: false
  nonroot: false
  privesc: false
  privileged: false
  rootfs: true
  seccomp: true

$ kubeaudit all -k kubeaudit-config.yml -f nginx.yaml

DEPRECATION NOTICE: The 'mountds' command is deprecated and will stop working in a future minor release. Please use the 'mounts' command instead. If you use 'all' no change is required.


---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx

--------------------------------------------

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: nginx

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod

Trying the AWS Secrets & Configuration Provider

Notes from trying out the following blog post.

(Updated 6/28: redone up through the sync-to-Secret part.)

Preparation

Create a cluster.

cat <<EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ascp
  region: ap-northeast-1
  version: "1.19"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
      enableSsm: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true
EOF
eksctl create cluster -f cluster.yaml

Prepare the prerequisites.

First, create a secret in Secrets Manager.

aws secretsmanager create-secret \
  --region ap-northeast-1 \
  --name mysecret/mypasswd \
  --secret-string '{"username":"admin","password":"abcdef"}'

Create an IAM policy that grants access to this secret.

cat <<EOF > mysecret-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:ap-northeast-1:XXXXXXXXXXXX:secret:mysecret*"
        }
    ]
}
EOF
aws iam create-policy --policy-name mysecret-policy --policy-document file://mysecret-policy.json

Create a test Namespace and a test ServiceAccount.

$ k create ns test
namespace/test created
$ kubens test
Context "sotosugi@ascp.ap-northeast-1.eksctl.io" modified.
Active namespace is "test".
$ k create sa test
serviceaccount/test created

Walkthrough

Step 1: Restrict Pod access using IAM Roles for Service Accounts (IRSA)

The OIDC provider has already been created.

Create the role for the ServiceAccount.

$ eksctl create iamserviceaccount --name test --namespace test --cluster ascp --attach-policy-arn arn:aws:iam::XXXXXXXXXXXX:policy/mysecret-policy --approve --override-existing-serviceaccounts
2021-06-27 23:24:27 [ℹ]  eksctl version 0.54.0
2021-06-27 23:24:27 [ℹ]  using region ap-northeast-1
2021-06-27 23:24:30 [ℹ]  1 existing iamserviceaccount(s) (kube-system/aws-node) will be excluded
2021-06-27 23:24:30 [ℹ]  1 iamserviceaccount (test/test) was included (based on the include/exclude rules)
2021-06-27 23:24:30 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2021-06-27 23:24:30 [ℹ]  1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "test/test", create serviceaccount "test/test" } }
2021-06-27 23:24:30 [ℹ]  building iamserviceaccount stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-06-27 23:24:30 [ℹ]  deploying stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-06-27 23:24:30 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-06-27 23:24:47 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-06-27 23:25:05 [ℹ]  waiting for CloudFormation stack "eksctl-ascp-addon-iamserviceaccount-test-test"
2021-06-27 23:25:08 [ℹ]  serviceaccount "test/test" already exists
2021-06-27 23:25:08 [ℹ]  updated serviceaccount "test/test"

Step 2: Install the Kubernetes Secrets Store CSI Driver

Add the chart repository.

$ helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
"secrets-store-csi-driver" has been added to your repositories

Check the chart.

$ helm search repo secrets-store-csi-driver
NAME                                                    CHART VERSION   APP VERSION     DESCRIPTION                                       
secrets-store-csi-driver/secrets-store-csi-driver       0.0.23          0.0.23          A Helm chart to install the SecretsStore CSI Dr...

Check the chart's parameters.

$ helm inspect values secrets-store-csi-driver/secrets-store-csi-driver
linux:
  enabled: true
  image:
    repository: k8s.gcr.io/csi-secrets-store/driver
    tag: v0.0.23
    pullPolicy: IfNotPresent

  ## Prevent the CSI driver from being scheduled on virtual-kublet nodes
  affinity: 
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: type
            operator: NotIn
            values:
            - virtual-kubelet

  driver:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 50m
        memory: 100Mi

  registrarImage:
    repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
    tag: v2.2.0
    pullPolicy: IfNotPresent

  registrar:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi
    logVerbosity: 5

  livenessProbeImage:
    repository: k8s.gcr.io/sig-storage/livenessprobe
    tag: v2.3.0
    pullPolicy: IfNotPresent

  livenessProbe:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi


  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

  kubeletRootDir: /var/lib/kubelet
  providersDir: /etc/kubernetes/secrets-store-csi-providers
  nodeSelector: {}
  tolerations: []
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}

  # volumes is a list of volumes made available to secrets store csi driver.
  volumes: null
  #   - name: foo
  #     emptyDir: {}

  # volumeMounts is a list of volumeMounts for secrets store csi driver.
  volumeMounts: null
  #   - name: foo
  #     mountPath: /bar
  #     readOnly: true

windows:
  enabled: false
  image:
    repository: k8s.gcr.io/csi-secrets-store/driver
    tag: v0.0.23
    pullPolicy: IfNotPresent

  ## Prevent the CSI driver from being scheduled on virtual-kublet nodes
  affinity: 
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: type
            operator: NotIn
            values:
            - virtual-kubelet

  driver:
    resources:
      limits:
        cpu: 400m
        memory: 400Mi
      requests:
        cpu: 50m
        memory: 100Mi

  registrarImage:
    repository: k8s.gcr.io/sig-storage/csi-node-driver-registrar
    tag: v2.2.0
    pullPolicy: IfNotPresent

  registrar:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 10m
        memory: 20Mi
    logVerbosity: 5

  livenessProbeImage:
    repository: k8s.gcr.io/sig-storage/livenessprobe
    tag: v2.3.0
    pullPolicy: IfNotPresent

  livenessProbe:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 10m
        memory: 20Mi

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

  kubeletRootDir: C:\var\lib\kubelet
  providersDir: C:\k\secrets-store-csi-providers
  nodeSelector: {}
  tolerations: []
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}

  # volumes is a list of volumes made available to secrets store csi driver.
  volumes: null
  #   - name: foo
  #     emptyDir: {}

  # volumeMounts is a list of volumeMounts for secrets store csi driver.
  volumeMounts: null
  #   - name: foo
  #     mountPath: /bar
  #     readOnly: true

# log level. Uses V logs (klog)
logVerbosity: 0

# logging format JSON
logFormatJSON: false

livenessProbe:
  port: 9808
  logLevel: 2

## Maximum size in bytes of gRPC response from plugins
maxCallRecvMsgSize: 4194304

## Install Default RBAC roles and bindings
rbac:
  install: true
  pspEnabled: false

## Install RBAC roles and bindings required for K8S Secrets syncing if true
syncSecret:
  enabled: false

## Enable secret rotation feature [alpha]
enableSecretRotation: false

## Secret rotation poll interval duration
rotationPollInterval:

## Filtered watch nodePublishSecretRef secrets
filteredWatchSecret: false

## Provider HealthCheck
providerHealthCheck: false

## Provider HealthCheck interval
providerHealthCheckInterval: 2m

imagePullSecrets: []

Deploy with the default settings.

$ helm -n kube-system install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
NAME: csi-secrets-store
LAST DEPLOYED: Sun Jun 27 23:26:47 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Secrets Store CSI Driver is getting deployed to your cluster.

To verify that Secrets Store CSI Driver has started, run:

  kubectl --namespace=kube-system get pods -l "app=secrets-store-csi-driver"

Now you can follow these steps https://secrets-store-csi-driver.sigs.k8s.io/getting-started/usage.html
to create a SecretProviderClass resource, and a deployment using the SecretProviderClass.

Verify the installation. The DaemonSet is running.

$ kubectl get po --namespace=kube-system
NAME                                               READY   STATUS    RESTARTS   AGE
aws-node-7p247                                     1/1     Running   0          37m
aws-node-w6sq5                                     1/1     Running   0          37m
coredns-59847d77c8-fjwsx                           1/1     Running   0          51m
coredns-59847d77c8-qhlj5                           1/1     Running   0          51m
csi-secrets-store-secrets-store-csi-driver-9867x   3/3     Running   0          17s
csi-secrets-store-secrets-store-csi-driver-fjhl9   3/3     Running   0          17s
kube-proxy-ppq4b                                   1/1     Running   0          37m
kube-proxy-wkv2h                                   1/1     Running   0          37m

Check the CRDs.

$ kubectl get crd
NAME                                                        CREATED AT
eniconfigs.crd.k8s.amazonaws.com                            2021-06-27T13:35:51Z
secretproviderclasses.secrets-store.csi.x-k8s.io            2021-06-27T14:26:48Z
secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io   2021-06-27T14:26:48Z
securitygrouppolicies.vpcresources.k8s.aws                  2021-06-27T13:35:54Z

Step 3: Install the AWS Secrets & Configuration Provider

Install the AWS Secrets & Configuration Provider.

$ kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
serviceaccount/csi-secrets-store-provider-aws created
clusterrole.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-rolebinding created
daemonset.apps/csi-secrets-store-provider-aws created

Verify.

$ k get pod -n kube-system
NAME                                               READY   STATUS    RESTARTS   AGE
aws-node-7p247                                     1/1     Running   0          38m
aws-node-w6sq5                                     1/1     Running   0          38m
coredns-59847d77c8-fjwsx                           1/1     Running   0          52m
coredns-59847d77c8-qhlj5                           1/1     Running   0          52m
csi-secrets-store-provider-aws-6n65t               1/1     Running   0          13s
csi-secrets-store-provider-aws-nrn2g               1/1     Running   0          13s
csi-secrets-store-secrets-store-csi-driver-9867x   3/3     Running   0          79s
csi-secrets-store-secrets-store-csi-driver-fjhl9   3/3     Running   0          79s
kube-proxy-ppq4b                                   1/1     Running   0          38m
kube-proxy-wkv2h                                   1/1     Running   0          38m

Step 4: Create and deploy a SecretProviderClass custom resource

Create a SecretProviderClass custom resource.

cat <<EOF > aws-secrets.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:                    # provider-specific parameters
    objects:  |
      - objectName: "mysecret/mypasswd"
        objectType: "secretsmanager"
EOF
$ k apply -f aws-secrets.yaml
secretproviderclass.secrets-store.csi.x-k8s.io/aws-secrets created

Step 5: Configure and deploy a Pod that mounts a volume based on the configured secrets

cat <<EOF > nginx-secrets-store-inline.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: test
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: mysecret
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: mysecret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "aws-secrets"
EOF
$ k apply -f nginx-secrets-store-inline.yaml
pod/nginx-secrets-store-inline created

Confirm that the secret has been mounted.

$ kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
mysecret_mypasswd
$ kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/mysecret_mypasswd
{"username":"admin","password":"abcdef"}

Other notes

Syncing to Kubernetes Secrets also appears to be possible, so let's check. It needs to be enabled at install time, so to be safe I reinstalled everything.

helm -n kube-system delete csi-secrets-store
kubectl delete -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
helm -n kube-system upgrade --install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --set syncSecret.enabled=true
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml

Unlike External Secrets, where creating an ExternalSecret resource results in a Secret resource being created, here the data is synced to a Secret resource only when a Pod mounts the volume.

cat <<EOF > aws-secrets-sync.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets-sync
spec:
  provider: aws
  secretObjects:
  - secretName: mysecret
    type: Opaque
    data:
    - objectName: myalias
      key: mykey
  parameters:
    objects: |
      - objectName: "mysecret/mypasswd"
        objectAlias: myalias
        objectType: "secretsmanager"
EOF
$ k apply -f aws-secrets-sync.yaml
secretproviderclass.secrets-store.csi.x-k8s.io/aws-secrets-sync created

cat <<EOF > nginx-secrets-store-sync.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-sync
spec:
  serviceAccountName: test
  containers:
  - image: nginx
    name: nginx
    env:
    - name: MYSECRET
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: mykey
    volumeMounts:
    - name: mysecret
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: mysecret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "aws-secrets-sync"
EOF
$ k apply -f nginx-secrets-store-sync.yaml
pod/nginx-secrets-store-sync created

Verify.

$ k get po
NAME                         READY   STATUS    RESTARTS   AGE
nginx-secrets-store-inline   1/1     Running   0          5m43s
nginx-secrets-store-sync     1/1     Running   0          5s
$ k exec -it nginx-secrets-store-sync -- env | grep MYSECRET
MYSECRET={"username":"admin","password":"abcdef"}
$ k get secret
NAME                  TYPE                                  DATA   AGE
default-token-8dd6x   kubernetes.io/service-account-token   3      119m
mysecret              Opaque                                1      78s
test-token-45fqm      kubernetes.io/service-account-token   3      119m
$ k get secret mysecret -o yaml
apiVersion: v1
data:
  mykey: eyJ1c2VybmFtZSI6ImFkbWluIiwicGFzc3dvcmQiOiJhYmNkZWYifQ==
kind: Secret
metadata:
  creationTimestamp: "2021-06-27T16:21:39Z"
  labels:
    secrets-store.csi.k8s.io/managed: "true"
  name: mysecret
  namespace: test
  ownerReferences:
  - apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
    kind: SecretProviderClassPodStatus
    name: nginx-secrets-store-sync-test-aws-secrets-sync
    uid: bec8e05c-1dab-46af-9784-cd7884264986
  resourceVersion: "28885"
  selfLink: /api/v1/namespaces/test/secrets/mysecret
  uid: a978b70a-7a5e-43bb-bb24-bb138ea18659
type: Opaque
$ k get secret mysecret -o json | jq -r '.data.mykey' | base64 --decode
{"username":"admin","password":"abcdef"}

Deleting the Pods also deletes the Secret.

$ k delete po --all
pod "nginx-secrets-store-inline" deleted
pod "nginx-secrets-store-sync" deleted
$ k get secret
NAME                  TYPE                                  DATA   AGE
default-token-8dd6x   kubernetes.io/service-account-token   3      134m
test-token-45fqm      kubernetes.io/service-account-token   3      133m

Using the runtime default seccomp profile on EKS

Check whether the runtime default seccomp profile can be used on EKS.

The profile that Docker applies by default is described in the Docker documentation.
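
The gist of that profile: every syscall is denied by default (SCMP_ACT_ERRNO) and several hundred syscalls are explicitly allowlisted. A heavily abbreviated sketch of its shape follows (the real file lists far more syscalls plus per-architecture and per-capability rules):

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["accept", "accept4", "access", "bind", "read", "write"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}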

1.19

Start a 1.19 EKS cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster19
  region: ap-northeast-1
  version: "1.19"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
      enableSsm: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

eksctl create cluster -f cluster19.yaml

Verification

Start an Nginx Pod with no special settings.

$ k run pod1 --image=nginx
pod/pod1 created

Log in to the container and look at the process status. It shows Seccomp: 0, meaning seccomp is disabled for this process.

$ k exec -it pod1 -- bash
root@pod1:/# cat /proc/1/status 
Name:   nginx
Umask:  0022
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups:  
NStgid: 1
NSpid:  1
NSpgid: 1
NSsid:  1
VmPeak:    10676 kB
VmSize:    10648 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      5972 kB
VmRSS:      5972 kB
RssAnon:             804 kB
RssFile:            5168 kB
RssShmem:              0 kB
VmData:      988 kB
VmStk:       132 kB
VmExe:       988 kB
VmLib:      3792 kB
VmPTE:        56 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
CoreDumping:    0
THP_enabled:    1
Threads:        1
SigQ:   0/30446
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000040001000
SigCgt: 0000000198016a07
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        0
Speculation_Store_Bypass:       vulnerable
Cpus_allowed:   3
Cpus_allowed_list:      0-1
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        23
nonvoluntary_ctxt_switches:     62
root@pod1:/#
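
Incidentally, rather than dumping the whole status file, just the relevant line can be checked with a one-liner (grep is included in the nginx image):

$ k exec pod1 -- grep Seccomp /proc/1/status
Seccomp:        0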

Start a Pod with a seccomp profile specified.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    name: pod2
    securityContext:
      seccompProfile:
        type: RuntimeDefault

$ k apply -f pod2.yaml
pod/pod2 created

Log in to the container and check the process status. It now shows Seccomp: 2 (filter mode), so the profile is applied.

$ k exec -it pod2 -- bash
root@pod2:/# cat /proc/1/status
Name:   nginx
Umask:  0022
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups:  
NStgid: 1
NSpid:  1
NSpgid: 1
NSsid:  1
VmPeak:    10676 kB
VmSize:    10648 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      6076 kB
VmRSS:      6076 kB
RssAnon:             812 kB
RssFile:            5264 kB
RssShmem:              0 kB
VmData:      988 kB
VmStk:       132 kB
VmExe:       988 kB
VmLib:      3792 kB
VmPTE:        56 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
CoreDumping:    0
THP_enabled:    1
Threads:        1
SigQ:   0/30446
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000040001000
SigCgt: 0000000198016a07
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        2
Speculation_Store_Bypass:       vulnerable
Cpus_allowed:   3
Cpus_allowed_list:      0-1
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        65
nonvoluntary_ctxt_switches:     16
root@pod2:/# 

1.18

On 1.18 and earlier, the profile is supposed to be specified with an annotation. Check whether using the runtime default profile works without customizing the kubelet start-up arguments.

Start a 1.18 EKS cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster18
  region: ap-northeast-1
  version: "1.18"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
      enableSsm: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

eksctl create cluster -f cluster18.yaml

Verification

Start an Nginx Pod with no special settings.

$ k run pod1 --image=nginx
pod/pod1 created

Log in to the container and look at the process status. It shows Seccomp: 0, so seccomp is disabled here as well.

$ k exec -it pod1 -- bash
root@pod1:/# cat /proc/1/status
Name:   nginx
Umask:  0022
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups:  
NStgid: 1
NSpid:  1
NSpgid: 1
NSsid:  1
VmPeak:    10680 kB
VmSize:    10652 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      6008 kB
VmRSS:      6008 kB
RssAnon:             812 kB
RssFile:            5196 kB
RssShmem:              0 kB
VmData:      988 kB
VmStk:       132 kB
VmExe:       988 kB
VmLib:      3796 kB
VmPTE:        40 kB
VmPMD:        12 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
Threads:        1
SigQ:   0/30446
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000040001000
SigCgt: 0000000198016a07
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        0
Speculation_Store_Bypass:       vulnerable
Cpus_allowed:   3
Cpus_allowed_list:      0-1
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        55
nonvoluntary_ctxt_switches:     7
root@pod1:/# 

Start a Pod with the seccomp profile specified via the annotation.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod3
  name: pod3
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
spec:
  containers:
  - image: nginx
    name: pod3

$ k apply -f pod3.yaml
pod/pod3 created

Log in to the container and check the process status. It now shows Seccomp: 2, so the profile is applied even without any kubelet customization.

$ k exec -it pod3 -- bash
root@pod3:/# cat /proc/1/status
Name:   nginx
Umask:  0022
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups:  
NStgid: 1
NSpid:  1
NSpgid: 1
NSsid:  1
VmPeak:    10680 kB
VmSize:    10652 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      5980 kB
VmRSS:      5980 kB
RssAnon:             808 kB
RssFile:            5172 kB
RssShmem:              0 kB
VmData:      988 kB
VmStk:       132 kB
VmExe:       988 kB
VmLib:      3796 kB
VmPTE:        40 kB
VmPMD:        12 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
Threads:        1
SigQ:   0/30446
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000040001000
SigCgt: 0000000198016a07
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        2
Speculation_Store_Bypass:       vulnerable
Cpus_allowed:   3
Cpus_allowed_list:      0-1
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        72
nonvoluntary_ctxt_switches:     37
root@pod3:/# 

Reference links

Notes from trying the SPIFFE tutorial

I tried to translate the blog post below, but I couldn't make sense of SPIFFE at all, so this is a memo from working through the SPIFFE tutorial first.

Reference links

Steps

Creating the cluster

This time, try it on Minikube. As noted in the tutorial's caveats, it has to be started with a few extra flags.

$ minikube start \
>     --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
>     --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
>     --extra-config=apiserver.service-account-issuer=api \
>     --extra-config=apiserver.service-account-api-audiences=api,spire-server \
>     --extra-config=apiserver.authorization-mode=Node,RBAC
😄  minikube v1.18.1 on Darwin 10.14.6
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB  100.00% 25.98 Mi
🔥  Creating docker container (CPUs=2, Memory=3890MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
    ▪ apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key
    ▪ apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub
    ▪ apiserver.service-account-issuer=api
    ▪ apiserver.service-account-api-audiences=api,spire-server
    ▪ apiserver.authorization-mode=Node,RBAC
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Cloning the repository

Clone the repository.

git clone https://github.com/spiffe/spire-tutorials
cd spire-tutorials/k8s/quickstart

Configuring the server

Create the Namespace.

$ kubectl apply -f spire-namespace.yaml
namespace/spire created
$ k get ns
NAME              STATUS   AGE
default           Active   9m14s
kube-node-lease   Active   9m15s
kube-public       Active   9m16s
kube-system       Active   9m16s
spire             Active   8s

Create the ServiceAccount, ConfigMap, and ClusterRole/ClusterRoleBinding for the SPIRE server.

$ kubectl apply \
>     -f server-account.yaml \
>     -f spire-bundle-configmap.yaml \
>     -f server-cluster-role.yaml
serviceaccount/spire-server created
configmap/spire-bundle created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created

Create the ConfigMap, StatefulSet, and Service for the SPIRE server.

$ kubectl apply \
>     -f server-configmap.yaml \
>     -f server-statefulset.yaml \
>     -f server-service.yaml
configmap/spire-server created
statefulset.apps/spire-server created
service/spire-server created

Check.

$ kubectl get statefulset --namespace spire
NAME           READY   AGE
spire-server   1/1     20s
$ kubectl get pods --namespace spire
NAME             READY   STATUS    RESTARTS   AGE
spire-server-0   1/1     Running   0          30s
$ kubectl get services --namespace spire
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spire-server   NodePort   10.106.188.29   <none>        8081:31210/TCP   37s
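
spire-server also has a healthcheck subcommand, which is handy for a quick check ("Server is healthy." is what it prints on success):

$ kubectl exec -n spire spire-server-0 -- \
>     /opt/spire/bin/spire-server healthcheck
Server is healthy.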

Configuring the agent

Create the ServiceAccount and ClusterRole/ClusterRoleBinding for the SPIRE agent.

$ kubectl apply \
>     -f agent-account.yaml \
>     -f agent-cluster-role.yaml
serviceaccount/spire-agent created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created

Create the ConfigMap and DaemonSet for the SPIRE agent.

$ kubectl apply \
>     -f agent-configmap.yaml \
>     -f agent-daemonset.yaml
configmap/spire-agent created
daemonset.apps/spire-agent created

Check.

$ kubectl get daemonset --namespace spire
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
spire-agent   1         1         1       1            1           <none>          114s
$ kubectl get pods --namespace spire
NAME                READY   STATUS    RESTARTS   AGE
spire-agent-fx88v   1/1     Running   0          2m
spire-server-0      1/1     Running   0          7m39s
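
The agent likewise has a healthcheck subcommand. Because the tutorial's agent exposes its socket at /run/spire/sockets/agent.sock rather than the default path, the path has to be passed explicitly (the Pod name is the one from the listing above):

$ kubectl exec -n spire spire-agent-fx88v -- \
>     /opt/spire/bin/spire-agent healthcheck -socketPath /run/spire/sockets/agent.sock
Agent is healthy.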

Registering workloads

Workloads must first be registered with the server.

Create a new registration entry for the node, specifying the SPIFFE ID to assign to it.

$ kubectl exec -n spire spire-server-0 -- \
>     /opt/spire/bin/spire-server entry create \
>     -spiffeID spiffe://example.org/ns/spire/sa/spire-agent \
>     -selector k8s_sat:cluster:demo-cluster \
>     -selector k8s_sat:agent_ns:spire \
>     -selector k8s_sat:agent_sa:spire-agent \
>     -node
Entry ID         : dbd74f07-f3ce-4dcf-b6c1-1a78f062dfe1
SPIFFE ID        : spiffe://example.org/ns/spire/sa/spire-agent
Parent ID        : spiffe://example.org/spire/server
Revision         : 0
TTL              : default
Selector         : k8s_sat:agent_ns:spire
Selector         : k8s_sat:agent_sa:spire-agent
Selector         : k8s_sat:cluster:demo-cluster

Create a new registration entry for the workload, specifying the SPIFFE ID to assign to it.

$ kubectl exec -n spire spire-server-0 -- \
>     /opt/spire/bin/spire-server entry create \
>     -spiffeID spiffe://example.org/ns/default/sa/default \
>     -parentID spiffe://example.org/ns/spire/sa/spire-agent \
>     -selector k8s:ns:default \
>     -selector k8s:sa:default
Entry ID         : f5ef86a6-9b8f-4a0a-b503-2c9a3e2dd476
SPIFFE ID        : spiffe://example.org/ns/default/sa/default
Parent ID        : spiffe://example.org/ns/spire/sa/spire-agent
Revision         : 0
TTL              : default
Selector         : k8s:ns:default
Selector         : k8s:sa:default
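
The entries registered so far can be listed with the entry show subcommand:

kubectl exec -n spire spire-server-0 -- \
    /opt/spire/bin/spire-server entry show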

Configuring the workload container

Configure the workload container so that it can reach SPIRE.

Create the client Deployment.

$ kubectl apply -f client-deployment.yaml
deployment.apps/client created

Looking at the manifest, it mounts the host's /run/spire/sockets, and hostPID: true and hostNetwork: true are set.
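
For reference, the essential piece is a hostPath mount of the agent's socket directory. A minimal sketch follows (the Pod name and image are illustrative placeholders, not the tutorial's actual client-deployment.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: myclient   # illustrative name
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: myclient
    image: nginx   # placeholder; any image bundling spire-agent or a SPIFFE library works
    volumeMounts:
    - name: spire-agent-socket
      mountPath: /run/spire/sockets
      readOnly: true
  volumes:
  - name: spire-agent-socket
    hostPath:
      path: /run/spire/sockets
      type: Directory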

Connect to the Pod.

kubectl exec -it $(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' \
   -l app=client)  -- /bin/sh

Access the socket and confirm that an SVID can be fetched.

/opt/spire # /opt/spire/bin/spire-agent api fetch -socketPath /run/spire/sockets/agent.sock
Received 1 svid after 10.535ms

SPIFFE ID:              spiffe://example.org/ns/default/sa/default
SVID Valid After:       2021-03-24 13:42:52 +0000 UTC
SVID Valid Until:       2021-03-24 14:43:02 +0000 UTC
CA #1 Valid After:      2021-03-24 13:30:48 +0000 UTC
CA #1 Valid Until:      2021-03-25 13:30:58 +0000 UTC

/opt/spire # 

That completes the tutorial.

The certificate and key can be written out to files with -write.

/opt/spire # /opt/spire/bin/spire-agent api fetch x509 -socketPath /run/spire/sockets/agent.sock -write /tmp/
Received 1 svid after 16.7911ms

SPIFFE ID:              spiffe://example.org/ns/default/sa/default
SVID Valid After:       2021-03-24 13:42:52 +0000 UTC
SVID Valid Until:       2021-03-24 14:43:02 +0000 UTC
CA #1 Valid After:      2021-03-24 13:30:48 +0000 UTC
CA #1 Valid Until:      2021-03-25 13:30:58 +0000 UTC

Writing SVID #0 to file /tmp/svid.0.pem.
Writing key #0 to file /tmp/svid.0.key.
Writing bundle #0 to file /tmp/bundle.0.pem.
/opt/spire #
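
The saved SVID is an ordinary X.509 certificate, so the SPIFFE ID appears in the URI SAN when inspecting it with openssl (assuming openssl is available where the file is read; output abridged):

/opt/spire # openssl x509 -in /tmp/svid.0.pem -noout -text | grep -A1 'Subject Alternative Name'
            X509v3 Subject Alternative Name:
                URI:spiffe://example.org/ns/default/sa/default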