Adding a field to JSON with jq

A note on how I added a field to the following JSON.

{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni"
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      },
      "snat": true
    },
    {
      "name": "cilium",
      "type": "cilium-cni",
      "enable-debug": false
    }
  ]
}

Run the following.

cat test.json | jq '.plugins[0] |= .+ { "mtu": "9001", "pluginLogFile": "/var/log/aws-routed-eni/plugin.log", "pluginLogLevel": "Debug" }'

The output is as follows.

{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni",
      "mtu": "9001",
      "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
      "pluginLogLevel": "Debug"
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      },
      "snat": true
    },
    {
      "name": "cilium",
      "type": "cilium-cni",
      "enable-debug": false
    }
  ]
}
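
Hard-coding the index works here, but selecting the plugin by name should also work, for example (an untested variation; jq accepts select() on the left-hand side of update operators):

jq '(.plugins[] | select(.name == "aws-cni")) += {
  "mtu": "9001",
  "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
  "pluginLogLevel": "Debug"
}' test.json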

Reference links

Trying Cilium on EKS

Trying out Cilium as a network policy engine on EKS. There are two approaches: using Cilium as the CNI itself, or using it only as the network policy engine; this post covers the latter.

Component            Version   Notes
eksctl               0.36.2
Kubernetes version   1.18
Platform version     eks.3
VPC CNI Plugin       1.7.5
Cilium               1.9.3

Creating the cluster

Create the cluster using the following config file (cilium.yaml).

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cilium
  region: ap-northeast-1
vpc:
  cidr: "10.1.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
    privateNetworking: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

eksctl create cluster -f cilium.yaml

Installing Cilium

Add the Helm repository.

helm repo add cilium https://helm.cilium.io/

Install Cilium with Helm.

helm install cilium cilium/cilium --version 1.9.3 \
  --namespace kube-system \
  --set cni.chainingMode=aws-cni \
  --set masquerade=false \
  --set tunnel=disabled \
  --set nodeinit.enabled=true

Confirm that Cilium has been installed.

$ kubectl get po -A
NAMESPACE     NAME                              READY   STATUS              RESTARTS   AGE
kube-system   aws-node-p6k9m                    1/1     Running             0          41h
kube-system   aws-node-qttwr                    1/1     Running             0          41h
kube-system   cilium-dffdz                      1/1     Running             0          4m46s
kube-system   cilium-node-init-f6snw            1/1     Running             0          4m46s
kube-system   cilium-node-init-f72cr            1/1     Running             0          4m46s
kube-system   cilium-operator-db487bc5b-2m86x   1/1     Running             0          4m46s
kube-system   cilium-operator-db487bc5b-8s4sm   1/1     Running             0          4m46s
kube-system   cilium-smm9f                      1/1     Running             0          4m46s
kube-system   coredns-86f7d88d77-dl769          0/1     ContainerCreating   0          4m14s
kube-system   coredns-86f7d88d77-rt84l          0/1     ContainerCreating   0          3m44s
kube-system   kube-proxy-59krv                  1/1     Running             0          41h
kube-system   kube-proxy-mh4dv                  1/1     Running             0          41h

Cilium started, but CoreDNS no longer comes up. Describing the Pod shows a Warning like the following.

  Warning  FailedCreatePodSandBox  3m2s (x4 over 3m5s)    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0" network for pod "coredns-86f7d88d77-dl769": networkPlugin cni failed to set up pod "coredns-86f7d88d77-dl769_kube-system" network: netplugin failed but error parsing its diagnostic message "{\"level\":\"debug\",\"ts\":\"2021-01-29T02:41:14.743Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:123\",\"msg\":\"MTU not set, defaulting to 9001\"}\n{\"level\":\"info\",\"ts\":\"2021-01-29T02:41:14.743Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"Received CNI add request: ContainerID(8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0) Netns(/proc/32025/ns/net) IfName(eth0) Args(IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system;K8S_POD_NAME=coredns-86f7d88d77-dl769;K8S_POD_INFRA_CONTAINER_ID=8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0) Path(/opt/cni/bin) argsStdinData({\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"name\\\":\\\"aws-cni\\\",\\\"type\\\":\\\"aws-cni\\\",\\\"vethPrefix\\\":\\\"eni\\\"})\"}\n{\"level\":\"debug\",\"ts\":\"2021-01-29T02:41:14.744Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"MTU value set is 9001:\"}\n{\"level\":\"error\",\"ts\":\"2021-01-29T02:41:14.745Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"Failed to assign an IP address to container 8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0\"}\n{\n    \"code\": 100,\n    \"msg\": \"add cmd: failed to assign an IP address to container\"\n}": invalid character '{' after top-level value

There is an Issue that may be related.

Check the versions of the VPC CNI Plugin and Cilium.

$ kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
amazon-k8s-cni-init:v1.7.5-eksbuild.1
amazon-k8s-cni:v1.7.5-eksbuild.1
$ helm ls -A
NAME    NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
cilium  kube-system 1           2021-01-29 11:40:28.449879 +0900 JST    deployed    cilium-1.9.3    1.9.3

As described in the Issue, try editing the file by hand.

Before editing, the files look like this.

[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/10-aws.conflist
{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni",
      "mtu": "9001",
      "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
      "pluginLogLevel": "DEBUG"
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/05-cilium.conflist
{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni"
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    },
    {
       "name": "cilium",
       "type": "cilium-cni",
       "enable-debug": false
    }
  ]
}
[root@ip-10-1-90-101 ~]#

Edit the second file.

[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/05-cilium.conflist
{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni",
      "mtu": "9001",
      "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
      "pluginLogLevel": "Debug"
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    },
    {
       "name": "cilium",
       "type": "cilium-cni",
       "enable-debug": false
    }
  ]
}
[root@ip-10-1-90-101 ~]#
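
Instead of hand-editing, the jq approach from the first section could presumably be run on each node, roughly like this (a sketch; assumes jq is available on the node, and keeps a backup of the original file):

cd /etc/cni/net.d
cp 05-cilium.conflist 05-cilium.conflist.bak
jq '.plugins[0] += {"mtu": "9001", "pluginLogFile": "/var/log/aws-routed-eni/plugin.log", "pluginLogLevel": "Debug"}' \
  05-cilium.conflist.bak > 05-cilium.conflist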

Confirm that CoreDNS has started.

$ k get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-p6k9m                    1/1     Running   0          41h
kube-system   aws-node-qttwr                    1/1     Running   0          41h
kube-system   cilium-dffdz                      1/1     Running   0          35m
kube-system   cilium-node-init-f6snw            1/1     Running   0          35m
kube-system   cilium-node-init-f72cr            1/1     Running   0          35m
kube-system   cilium-operator-db487bc5b-2m86x   1/1     Running   0          35m
kube-system   cilium-operator-db487bc5b-8s4sm   1/1     Running   0          35m
kube-system   cilium-smm9f                      1/1     Running   0          35m
kube-system   coredns-86f7d88d77-dl769          1/1     Running   0          35m
kube-system   coredns-86f7d88d77-rt84l          1/1     Running   0          34m
kube-system   kube-proxy-59krv                  1/1     Running   0          41h
kube-system   kube-proxy-mh4dv                  1/1     Running   0          41h

After installing Cilium, Pods must be restarted for Cilium to apply policies to them; CoreDNS appears to have been restarted automatically.

Test

Deploy the connectivity test.

$ kubectl create ns cilium-test
namespace/cilium-test created
$ kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created

Pods managed by Cilium show up as CiliumEndpoint (cep) resources.

$ kubectl get cep -A
NAMESPACE     NAME                                                     ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4           IPV6
cilium-test   echo-a-57cbbd9b8b-mxrfs                                  3296          5791                                                                         ready            10.1.67.236
cilium-test   echo-b-6db5fc8ff8-rkm9w                                  2571          1204                                                                         ready            10.1.96.204
cilium-test   pod-to-a-648fd74787-4gxhr                                3667          57208                                                                        ready            10.1.85.107
cilium-test   pod-to-a-allowed-cnp-7776c879f-qltmn                     3192          34348                                                                        ready            10.1.112.226
cilium-test   pod-to-a-denied-cnp-b5ff897c7-kwz2m                      919           45310                                                                        ready            10.1.76.211
cilium-test   pod-to-b-intra-node-nodeport-6546644d59-6qfc7            1334          44803                                                                        ready            10.1.112.247
cilium-test   pod-to-b-multi-node-clusterip-7d54c74c5f-7g72l           272           58159                                                                        ready            10.1.91.91
cilium-test   pod-to-b-multi-node-headless-76db68d547-kkg7f            3158          55761                                                                        ready            10.1.65.159
cilium-test   pod-to-b-multi-node-nodeport-7496df84d7-qtxl2            2704          20459                                                                        ready            10.1.94.155
cilium-test   pod-to-external-1111-6d4f9d9645-pnlgv                    1517          23485                                                                        ready            10.1.104.150
cilium-test   pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr   157           43510                                                                        ready            10.1.65.164
kube-system   coredns-86f7d88d77-dl769                                 195           3617                                                                         ready            10.1.98.80
kube-system   coredns-86f7d88d77-rt84l                                 395           3617                                                                         ready            10.1.87.50

Check Cilium's CRDs.

$ k api-resources | grep -e NAME -e cilium
NAME                               SHORTNAMES     APIVERSION                        NAMESPACED   KIND
ciliumclusterwidenetworkpolicies   ccnp           cilium.io/v2                      false        CiliumClusterwideNetworkPolicy
ciliumendpoints                    cep,ciliumep   cilium.io/v2                      true         CiliumEndpoint
ciliumexternalworkloads            cew            cilium.io/v2                      false        CiliumExternalWorkload
ciliumidentities                   ciliumid       cilium.io/v2                      false        CiliumIdentity
ciliumlocalredirectpolicies        clrp           cilium.io/v2                      true         CiliumLocalRedirectPolicy
ciliumnetworkpolicies              cnp,ciliumnp   cilium.io/v2                      true         CiliumNetworkPolicy
ciliumnodes                        cn,ciliumn     cilium.io/v2                      false        CiliumNode

Check the policies created by the test.

$ k -n cilium-test get cnp
NAME                                    AGE
pod-to-a-allowed-cnp                    7m43s
pod-to-a-denied-cnp                     7m43s
pod-to-external-fqdn-allow-google-cnp   7m43s

Look at the first policy.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-a-allowed-cnp
    quarantine: "false"
    topology: any
    traffic: internal
    type: autocheck
  name: pod-to-a-allowed-cnp
  namespace: cilium-test
spec:
  egress:
  - toEndpoints:
    - matchLabels:
        name: echo-a
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
  endpointSelector:
    matchLabels:
      name: pod-to-a-allowed-cnp

  • The targets are Pods with the label name=pod-to-a-allowed-cnp
  • Allows outbound connections to TCP 8080 on endpoints labeled name=echo-a
  • Allows DNS queries

Verify. Name resolution works.

$ k exec -it pod-to-a-allowed-cnp-7776c879f-qltmn -- sh
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name:      kubernetes.default
Address 1: 172.20.0.1 kubernetes.default.svc.cluster.local

Port 8080 on the Pod labeled name=echo-a is reachable.

/ # curl 10.1.67.236:8080
<html>
  <head>

(snip)

Other Pods that are also listening on 8080 are not reachable.

$ k exec -it pod-to-a-allowed-cnp-7776c879f-qltmn -- sh
/ # curl 10.1.96.204:8080

Look at the second policy.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-a-denied-cnp
    quarantine: "false"
    topology: any
    traffic: internal
    type: autocheck
  name: pod-to-a-denied-cnp
  namespace: cilium-test
spec:
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
  endpointSelector:
    matchLabels:
      name: pod-to-a-denied-cnp

  • The targets are Pods with the label name=pod-to-a-denied-cnp
  • Allows DNS queries

Verify. Name resolution works.

$ k exec -it pod-to-a-denied-cnp-b5ff897c7-kwz2m -- sh
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name:      kubernetes.default
Address 1: 172.20.0.1 kubernetes.default.svc.cluster.local

Port 8080 on the Pod labeled name=echo-a is not reachable.

/ # curl 10.1.67.236:8080

Look at the third policy.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-external-fqdn-allow-google-cnp
    quarantine: "false"
    topology: any
    traffic: external
    type: autocheck
  name: pod-to-external-fqdn-allow-google-cnp
  namespace: cilium-test
spec:
  egress:
  - toFQDNs:
    - matchPattern: '*.google.com'
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: '*'
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
      rules:
        dns:
        - matchPattern: '*'
  endpointSelector:
    matchLabels:
      name: pod-to-external-fqdn-allow-google-cnp

  • The targets are Pods with the label pod-to-external-fqdn-allow-google-cnp
  • Allows outbound connections to *.google.com
  • Allows DNS queries

Verify. google.com is reachable.

$ k exec -it pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr -- sh
/ # curl http://www.google.com/
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="ja">

(snip)

Verify. google.co.jp is not reachable.

/ # curl http://www.google.co.jp/

Installing Hubble

Install Hubble, the observability/visualization tool. See also the pages linked below.

export CILIUM_NAMESPACE=kube-system
helm upgrade cilium cilium/cilium --version 1.9.3 \
   --namespace $CILIUM_NAMESPACE \
   --reuse-values \
   --set hubble.listenAddress=":4244" \
   --set hubble.relay.enabled=true \
   --set hubble.ui.enabled=true

Confirm that the Hubble Pods have started.

$ k get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
aws-node-p6k9m                    1/1     Running   0          4d20h
aws-node-qttwr                    1/1     Running   0          4d20h
cilium-dffdz                      1/1     Running   0          3d3h
cilium-node-init-f6snw            1/1     Running   0          3d3h
cilium-node-init-f72cr            1/1     Running   0          3d3h
cilium-operator-db487bc5b-2m86x   1/1     Running   0          3d3h
cilium-operator-db487bc5b-8s4sm   1/1     Running   0          3d3h
cilium-smm9f                      1/1     Running   0          3d3h
coredns-86f7d88d77-dl769          1/1     Running   0          3d3h
coredns-86f7d88d77-rt84l          1/1     Running   0          3d3h
hubble-relay-f489fcbbb-8xg6b      1/1     Running   0          3m44s
hubble-ui-769fb95577-g87xw        3/3     Running   0          3m44s
kube-proxy-59krv                  1/1     Running   0          4d20h
kube-proxy-mh4dv                  1/1     Running   0          4d20h

Access the Hubble UI via port forwarding.

kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80

Traffic is visualized. This is cool!

(screenshot: Hubble UI)

There is also a Hubble CLI.

Port-forward to Hubble Relay.

kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80

Confirm that we can connect to the endpoint.

$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8192/8192 (100.00%)
Flows/s: 57.08
Connected Nodes: 2/2

Observe traffic across the whole cluster.

$ hubble --server localhost:4245 observe
TIMESTAMP             SOURCE                                                                     DESTINATION                                                                TYPE          VERDICT     SUMMARY
Feb  1 06:22:34.295   kube-system/coredns-86f7d88d77-dl769:35746                                 10.1.0.2:53                                                                to-stack      FORWARDED   UDP
Feb  1 06:22:34.295   10.1.0.2:53                                                                kube-system/coredns-86f7d88d77-dl769:35746                                 to-endpoint   FORWARDED   UDP
Feb  1 06:22:34.295   kube-system/coredns-86f7d88d77-dl769:53                                    10.1.90.101:53224                                                          to-stack      FORWARDED   UDP
Feb  1 06:22:34.299   cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414   www.google.com:80                                                          to-stack      FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.378   www.google.com:80                                                          cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414   to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.378   cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414   www.google.com:80                                                          to-stack      FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.380   www.google.com:80                                                          cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414   to-endpoint   FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.380   cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414   www.google.com:80                                                          to-stack      FORWARDED   TCP Flags: ACK
Feb  1 06:22:34.691   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     kube-system/coredns-86f7d88d77-dl769:53                                    L3-L4         FORWARDED   UDP
Feb  1 06:22:34.691   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     kube-system/coredns-86f7d88d77-dl769:53                                    to-stack      FORWARDED   UDP
Feb  1 06:22:34.691   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     kube-system/coredns-86f7d88d77-dl769:53                                    to-endpoint   FORWARDED   UDP
Feb  1 06:22:34.691   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     kube-system/coredns-86f7d88d77-dl769:53                                    to-stack      FORWARDED   UDP
Feb  1 06:22:34.691   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     kube-system/coredns-86f7d88d77-dl769:53                                    to-endpoint   FORWARDED   UDP
Feb  1 06:22:34.692   kube-system/coredns-86f7d88d77-dl769:53                                    cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     to-stack      FORWARDED   UDP
Feb  1 06:22:34.692   kube-system/coredns-86f7d88d77-dl769:53                                    cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     to-endpoint   FORWARDED   UDP
Feb  1 06:22:34.692   kube-system/coredns-86f7d88d77-dl769:53                                    cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     to-stack      FORWARDED   UDP
Feb  1 06:22:34.692   kube-system/coredns-86f7d88d77-dl769:53                                    cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934                     to-endpoint   FORWARDED   UDP
Feb  1 06:22:34.692   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   L3-L4         FORWARDED   TCP Flags: SYN
Feb  1 06:22:34.692   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-stack      FORWARDED   TCP Flags: SYN
Feb  1 06:22:34.693   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-endpoint   FORWARDED   TCP Flags: SYN
Feb  1 06:22:34.693   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-stack      FORWARDED   TCP Flags: SYN, ACK
Feb  1 06:22:34.694   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-endpoint   FORWARDED   TCP Flags: SYN, ACK
Feb  1 06:22:34.694   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-stack      FORWARDED   TCP Flags: ACK
Feb  1 06:22:34.694   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-stack      FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.695   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-endpoint   FORWARDED   TCP Flags: ACK
Feb  1 06:22:34.695   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.698   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-stack      FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.699   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:34.699   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-stack      FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.700   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-endpoint   FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.700   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-stack      FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.701   cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     to-endpoint   FORWARDED   TCP Flags: ACK, FIN
Feb  1 06:22:34.702   cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020                     cilium-test/echo-a-57cbbd9b8b-mxrfs:8080                                   to-endpoint   FORWARDED   TCP Flags: ACK
Feb  1 06:22:34.989   10.1.84.176:443                                                            kube-system/coredns-86f7d88d77-rt84l:41758                                 to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:35.456   10.1.90.101:47900                                                          kube-system/coredns-86f7d88d77-rt84l:8080                                  to-endpoint   FORWARDED   TCP Flags: SYN
Feb  1 06:22:35.456   kube-system/coredns-86f7d88d77-rt84l:8080                                  10.1.90.101:47900                                                          to-stack      FORWARDED   TCP Flags: SYN, ACK
Feb  1 06:22:35.456   10.1.90.101:47900                                                          kube-system/coredns-86f7d88d77-rt84l:8080                                  to-endpoint   FORWARDED   TCP Flags: ACK
Feb  1 06:22:35.456   10.1.90.101:47900                                                          kube-system/coredns-86f7d88d77-rt84l:8080                                  to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:35.456   kube-system/coredns-86f7d88d77-rt84l:8080                                  10.1.90.101:47900                                                          to-stack      FORWARDED   TCP Flags: ACK, PSH
Feb  1 06:22:35.457   kube-system/coredns-86f7d88d77-rt84l:8080                                  10.1.90.101:47900                                                          to-stack      FORWARDED   TCP Flags: ACK, FIN

Investigating open questions

First, delete the test application.

kubectl delete -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
kubectl delete ns cilium-test

Can plain NetworkPolicies be used?

The Cilium docs give CiliumNetworkPolicy examples, but first check whether plain Kubernetes NetworkPolicies work at all.

First, start two Pods.

$ k run pod1 --image=nginx
pod/pod1 created
$ k run pod2 --image=nginx
pod/pod2 created
$ k label pod pod1 app=pod1
pod/pod1 labeled
$ k label pod pod2 app=pod2
pod/pod2 labeled

Confirm that pod1 can reach pod2.

$ k get po -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                                             NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          9m9s    10.1.92.166   ip-10-1-66-141.ap-northeast-1.compute.internal   <none>           <none>
pod2   1/1     Running   0          8m58s   10.1.85.107   ip-10-1-66-141.ap-northeast-1.compute.internal   <none>           <none>
$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Create a NetworkPolicy that blocks everything.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

$ k apply -f default-deny-all-np.yaml
networkpolicy.networking.k8s.io/default-deny-all created

Confirm that traffic is blocked.

$ k exec -it pod1 -- curl 10.1.85.107

So plain NetworkPolicies can be used too.

What happens if both are defined?

With Calico you can write deny rules and define priorities. Cilium also has deny policies, but as a beta feature (see the sketch below).
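
For reference, a rough sketch of what such a deny policy might look like, using the ingressDeny field that I believe was added as a beta feature in 1.9 (untested here; the policy name is made up):

cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod2-deny-from-pod1
spec:
  endpointSelector:
    matchLabels:
      app: pod2
  ingressDeny:
  - fromEndpoints:
    - matchLabels:
        app: pod1
EOF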

The deny-all Kubernetes NetworkPolicy from earlier is still in place; now define CiliumNetworkPolicies that allow traffic from pod1 to pod2.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod1-egress
spec:
  endpointSelector:
    matchLabels:
      app: pod1
  egress:
  - toEndpoints:
    - matchLabels:
        app: pod2

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod2-ingress
spec:
  endpointSelector:
    matchLabels:
      app: pod2
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: pod1

$ k apply -f pod1-egress-cnp.yaml
ciliumnetworkpolicy.cilium.io/pod1-egress created
$ k apply -f pod2-ingress-cnp.yaml
ciliumnetworkpolicy.cilium.io/pod2-ingress created

Verify the behavior.

$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

So the two can also be combined.

Just to be sure, try the opposite direction. Delete the NetworkPolicy and the CiliumNetworkPolicies.

k delete netpol default-deny-all
k delete cnp pod1-egress
k delete cnp pod2-ingress

Create a CiliumNetworkPolicy that denies everything.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-deny-all
spec:
  endpointSelector: {}
  ingress:
  - {}
  egress:
  - {}

$ k apply -f default-deny-all-cnp.yaml
ciliumnetworkpolicy.cilium.io/default-deny-all created

Confirm that traffic is blocked.

$ k exec -it pod1 -- curl 10.1.85.107

Define Kubernetes NetworkPolicies that allow traffic from pod1 to pod2.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod1-egress
spec:
  podSelector:
    matchLabels:
      app: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: pod2

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod2-ingress
spec:
  podSelector:
    matchLabels:
      app: pod2
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: pod1

$ k apply -f pod1-egress-np.yaml
networkpolicy.networking.k8s.io/pod1-egress created
$ k apply -f pod2-ingress-np.yaml
networkpolicy.networking.k8s.io/pod2-ingress created

Confirm connectivity.

$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

It looks like the effective result is the union of both kinds of policies.

Do Istio and Calico work on EKS?

Using Calico or similar as the CNI on EKS reportedly breaks admission webhooks, which means Istio's automatic sidecar injection cannot be used; let's verify this. Also check whether VPC CNI Plugin + Calico is fine.

Creating the cluster

Create the cluster using the following config file (calico.yaml).

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: calico
  region: ap-northeast-1
vpc:
  cidr: "10.1.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
    privateNetworking: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

eksctl create cluster -f calico.yaml

Installing Istio

Set up the Istio sample environment following the steps below.

Download Istio.

curl -L https://istio.io/downloadIstio | sh -
cd istio-1.8.2

Install Istio with the demo profile.

$ istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete

Confirm that the Pods are running.

$ k get po -A
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-7fc985bd9f-qbl9f    1/1     Running   0          3m53s
istio-system   istio-ingressgateway-58f9d7d858-btgf6   1/1     Running   0          3m53s
istio-system   istiod-7d8f784f96-cgxkh                 1/1     Running   0          4m7s
kube-system    aws-node-49l6h                          1/1     Running   0          11m
kube-system    aws-node-lmkvt                          1/1     Running   0          11m
kube-system    coredns-86f7d88d77-gs96r                1/1     Running   0          17m
kube-system    coredns-86f7d88d77-t2qc8                1/1     Running   0          17m
kube-system    kube-proxy-jvtfn                        1/1     Running   0          11m
kube-system    kube-proxy-k998d                        1/1     Running   0          11m

Enable automatic sidecar injection.

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled

Deploy the sample application.

$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

Check the sample app Pods.

$ k get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-cdlxx       2/2     Running   0          53s
productpage-v1-6987489c74-xv8xd   2/2     Running   0          53s
ratings-v1-7dc98c7588-gpd96       2/2     Running   0          53s
reviews-v1-7f99cc4496-v598l       2/2     Running   0          53s
reviews-v2-7d79d5bd5d-jx5pc       2/2     Running   0          53s
reviews-v3-7dbcdcbc56-smrqw       2/2     Running   0          53s

Deploy the gateway.

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Determine the application URL.

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "http://$GATEWAY_URL/productpage"

Confirm that Bookinfo is accessible.

(screenshot: Bookinfo product page)
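
It can also be checked from the terminal, for example:

curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"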

VPC CNI Plugin + Calico

Install Calico as the network policy engine only.

$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml
daemonset.apps/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
serviceaccount/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
clusterrolebinding.rbac.authorization.k8s.io/typha-cpha created
clusterrole.rbac.authorization.k8s.io/typha-cpha created
configmap/calico-typha-horizontal-autoscaler created
deployment.apps/calico-typha-horizontal-autoscaler created
role.rbac.authorization.k8s.io/typha-cpha created
serviceaccount/typha-cpha created
rolebinding.rbac.authorization.k8s.io/typha-cpha created
service/calico-typha created

Prepare a NetworkPolicy that allows everything.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - {}

Apply it to every Namespace (a loop version is sketched after the commands).

k apply -f allow-all.yaml -n default
k apply -f allow-all.yaml -n istio-system
k apply -f allow-all.yaml -n kube-node-lease
k apply -f allow-all.yaml -n kube-public
k apply -f allow-all.yaml -n kube-system
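
Rather than listing the Namespaces by hand, a loop like the following should do the same thing:

for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -f allow-all.yaml -n "$ns"
done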

Now redeploy the sample app and check that nothing breaks.

kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

No problems, apparently.

$ k get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-vq4p5       2/2     Running   0          43s
productpage-v1-6987489c74-wngdb   2/2     Running   0          42s
ratings-v1-7dc98c7588-nfrcr       2/2     Running   0          43s
reviews-v1-7f99cc4496-v2kj5       2/2     Running   0          43s
reviews-v2-7d79d5bd5d-kmh8l       2/2     Running   0          42s
reviews-v3-7dbcdcbc56-fm27c       2/2     Running   0          42s

Bookinfo was also accessible (screenshot omitted).

Delete Bookinfo.

kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml

Delete the NetworkPolicies.

k delete -f allow-all.yaml -n default
k delete -f allow-all.yaml -n istio-system
k delete -f allow-all.yaml -n kube-node-lease
k delete -f allow-all.yaml -n kube-public
k delete -f allow-all.yaml -n kube-system

Uninstall Calico.

kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml

To avoid odd behavior from leftover iptables rules and the like, reboot the nodes just in case.

Calico

Install Calico as a replacement for the CNI.

First, delete the VPC CNI Plugin.

$ kubectl delete daemonset -n kube-system aws-node
daemonset.apps "aws-node" deleted

Install Calico.

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico-vxlan.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Confirm that the Pods are running.

$ k get po -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-7fc985bd9f-g4bzz       1/1     Running   0          2m43s
istio-system   istio-ingressgateway-58f9d7d858-4fjdl      1/1     Running   0          2m43s
istio-system   istiod-7d8f784f96-sw8j6                    1/1     Running   0          2m43s
kube-system    calico-kube-controllers-7dbc97f587-v2hwv   1/1     Running   0          2m43s
kube-system    calico-node-qhhhc                          1/1     Running   0          55s
kube-system    calico-node-whgxw                          1/1     Running   0          2m58s
kube-system    coredns-86f7d88d77-5rhdf                   1/1     Running   0          2m42s
kube-system    coredns-86f7d88d77-9np8m                   1/1     Running   0          2m42s
kube-system    kube-proxy-crr72                           1/1     Running   0          2m58s
kube-system    kube-proxy-p26pv                           1/1     Running   0          55s

Deploy Bookinfo.

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

The Pods never come up.

$ k get po -n default
No resources found in default namespace.

As suspected, it doesn't work because the webhook cannot be reached.

$ k describe rs details-v1-558b8b4b76
Name:           details-v1-558b8b4b76
Namespace:      default
Selector:       app=details,pod-template-hash=558b8b4b76,version=v1
Labels:         app=details
                pod-template-hash=558b8b4b76
                version=v1
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/details-v1
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=details
                    pod-template-hash=558b8b4b76
                    version=v1
  Service Account:  bookinfo-details
  Containers:
   details:
    Image:        docker.io/istio/examples-bookinfo-details-v1:1.16.2
    Port:         9080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  84s (x2 over 114s)  replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: dial tcp 192.168.35.1:15017: i/o timeout
  Warning  FailedCreate  54s                 replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  FailedCreate  24s                 replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: context deadline exceeded

The Calico docs also carry the following note.

Note: Calico networking cannot currently be installed on the EKS control plane nodes. As a result the control plane nodes will not be able to initiate network connections to Calico pods. (This is a general limitation of EKS’s custom networking support, not specific to Calico.) As a workaround, trusted pods that require control plane nodes to connect to them, such as those implementing admission controller webhooks, can include hostNetwork:true in their pod spec. See the Kuberentes API pod spec definition for more information on this setting.
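
Following that note, a workaround might look roughly like this (hypothetical and untested here; whether istiod behaves correctly on the host network would need checking):

kubectl -n istio-system patch deployment istiod --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'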

Checking the execution user when running Lambda from a container image

Notes from trying Lambda's container image support and checking which user the function runs as.

Reference links

Creating the image

Create the image. Base images with the RIC (Runtime Interface Client) preinstalled are listed at the link below; we'll use the Python image.

Create the application (app.py).

import os
import sys


def handler(event, context):
    # Return the Python version and the uid the function is running as
    msg = 'Hello from AWS Lambda using Python!'
    msg += ', sys.version: '
    msg += sys.version
    msg += ', os.getuid: '
    msg += str(os.getuid())
    return msg

Create the Dockerfile.

FROM public.ecr.aws/lambda/python:3.8

COPY app.py   ./
CMD ["app.handler"]

Build the image.

docker build -t hello-lambda .

To test with the RIE (Runtime Interface Emulator), run the image locally.

docker run --rm -it -p 9000:8080 hello-lambda

Send a request from another terminal.

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
"Hello from AWS Lambda using Python!, sys.version: 3.8.6 (default, Dec 16 2020, 01:05:15) \n[GCC 7.3.1 20180712 (Red Hat 7.3.1-11)], os.getuid: 0"

Log in to ECR.

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
AWS_REGION=$(aws configure get region)
aws ecr get-login-password | docker login --username AWS --password-stdin https://${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com

Push the image to ECR.

aws ecr create-repository --repository-name hello-lambda
docker tag  hello-lambda:latest ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/hello-lambda:latest
docker push ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/hello-lambda:latest

Deploying the function

Create the function in the management console.

(screenshot: creating the function from the container image)

Test the function. The uid is 993.

(screenshot: test result showing uid 993)
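
The same thing can presumably be done from the CLI, roughly like this (a sketch; the execution role name is a placeholder for an existing role with basic Lambda permissions):

aws lambda create-function \
  --function-name hello-lambda \
  --package-type Image \
  --code ImageUri=${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/hello-lambda:latest \
  --role arn:aws:iam::${ACCOUNT_ID}:role/lambda-basic-execution
aws lambda invoke --function-name hello-lambda response.json && cat response.json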

Introduction to Seccomp on Kubernetes

Notes from getting started with Seccomp.

Seccomp is enabled by default in Docker, which ships a restricted profile of its own. In Kubernetes it is not enabled by default and has to be specified explicitly, although it is possible to reference the runtime's default profile.
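
As a reference point, a minimal sketch of a Pod that opts into the runtime's default profile (the pod name and image are just examples):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: runtime-default-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx
    image: nginx
EOF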

Reference links

Docker

Copy the content of Docker's default profile (from the reference link) into a file named default.json.

Start a container without specifying a profile.

root@cks-worker:~# docker run --rm nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

Start it with the profile specified. Since this is the default profile, nothing changes.

root@cks-worker:~# docker run --rm --security-opt seccomp=default.json nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

Modify the profile: find the write syscall in default.json and remove it.
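
One way to do this with jq (a sketch; assumes the standard Docker profile layout where each entry in .syscalls has a "names" array):

jq '(.syscalls[].names) |= map(select(. != "write"))' default.json > default.json.new \
  && mv default.json.new default.json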

Starting it again, we can confirm that it now fails with an error.

root@cks-worker:~# docker run --rm --security-opt seccomp=default.json nginx
docker: Error response from daemon: OCI runtime start failed: cannot start an already running container: unknown.
ERRO[0000] error waiting for container: context canceled

Kubernetes

For the kubelet to be able to read seccomp profiles, either point it at a profile directory via its startup arguments or place the profiles in the default directory, /var/lib/kubelet/seccomp.

root@cks-worker:~# mkdir /var/lib/kubelet/seccomp
root@cks-worker:~# mv default.json /var/lib/kubelet/seccomp/

First, try specifying a profile that doesn't exist.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secure
  name: secure
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

It fails.

root@cks-master:~# k apply -f secure.yaml
pod/secure created
root@cks-master:~# k get pod
NAME     READY   STATUS                 RESTARTS   AGE
secure   0/1     CreateContainerError   0          9s
root@cks-master:~# k describe pod secure
Name:         secure
Namespace:    default
Priority:     0
Node:         cks-worker/10.146.0.7
Start Time:   Mon, 04 Jan 2021 15:20:33 +0000
Labels:       run=secure
Annotations:  seccomp.security.alpha.kubernetes.io/pod: localhost/profiles/audit.json
Status:       Pending
IP:           10.44.0.1
IPs:
  IP:  10.44.0.1
Containers:
  secure:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r2nz7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-r2nz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r2nz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  18s                default-scheduler  Successfully assigned default/secure to cks-worker
  Normal   Pulled     14s                kubelet            Successfully pulled image "nginx" in 3.215115053s
  Normal   Pulling    13s (x2 over 17s)  kubelet            Pulling image "nginx"
  Warning  Failed     10s (x2 over 14s)  kubelet            Error: failed to generate security options for container "secure": failed to generate seccomp security options for container: cannot load seccomp profile "/var/lib/kubelet/seccomp/profiles/audit.json": open /var/lib/kubelet/seccomp/profiles/audit.json: no such file or directory
  Normal   Pulled     10s                kubelet            Successfully pulled image "nginx" in 3.27904469s
root@cks-master:~# k delete pod secure
pod "secure" deleted

Fix the manifest to reference a profile that does exist.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secure
  name: secure
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: default.json
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Trying again gives a different error this time, the same one we saw with Docker.

root@cks-master:~# k apply -f secure.yaml
pod/secure created
root@cks-master:~# k get pod
NAME     READY   STATUS              RESTARTS   AGE
secure   0/1     RunContainerError   0          13s
root@cks-master:~# k describe pod secure
Name:         secure
Namespace:    default
Priority:     0
Node:         cks-worker/10.146.0.7
Start Time:   Mon, 04 Jan 2021 15:24:36 +0000
Labels:       run=secure
Annotations:  seccomp.security.alpha.kubernetes.io/pod: localhost/default.json
Status:       Running
IP:           10.44.0.1
IPs:
  IP:  10.44.0.1
Containers:
  secure:
    Container ID:   docker://e83c1adb518c82a829376551cdccdc35fc4b95f6406026e70e4681c9b8c55498
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:4cf620a5c81390ee209398ecc18e5fb9dd0f5155cd82adcbae532fec94006fb9
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime start failed: cannot start an already running container: unknown
      Exit Code:    128
      Started:      Mon, 04 Jan 2021 15:24:40 +0000
      Finished:     Mon, 04 Jan 2021 15:24:40 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r2nz7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-r2nz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r2nz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  19s                default-scheduler  Successfully assigned default/secure to cks-worker
  Normal   Pulled     15s                kubelet            Successfully pulled image "nginx" in 2.879879032s
  Normal   Pulling    13s (x2 over 18s)  kubelet            Pulling image "nginx"
  Normal   Created    10s (x2 over 15s)  kubelet            Created container secure
  Warning  Failed     10s (x2 over 14s)  kubelet            Error: failed to start container "secure": Error response from daemon: OCI runtime start failed: cannot start an already running container: unknown
  Normal   Pulled     10s                kubelet            Successfully pulled image "nginx" in 2.801219695s
root@cks-master:~# k delete pod secure
pod "secure" deleted

On the worker node, restore the write syscall that we removed earlier.

Trying again, the Pod now starts.

root@cks-master:~# k apply -f secure.yaml
pod/secure created
root@cks-master:~# k get pod
NAME     READY   STATUS    RESTARTS   AGE
secure   1/1     Running   0          9s
root@cks-master:~#

Introduction to AppArmor

Notes from getting started with AppArmor.

On Linux distributions where AppArmor is available, Docker enables it by default and ships a restricted profile of its own. In Kubernetes it is not enabled by default and has to be specified explicitly, although it is possible to reference the runtime's default.
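
For reference, a minimal sketch of how a profile is referenced from a Pod via the beta annotation (the pod name and the k8s-apparmor-example profile name are made up; the profile must already be loaded on the node):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/k8s-apparmor-example
spec:
  containers:
  - name: nginx
    image: nginx
EOF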

Reference links

curl

Confirm that curl works.

root@cks-worker:~# curl -v killer.sh
* Rebuilt URL to: killer.sh/
*   Trying 157.245.26.192...
* TCP_NODELAY set
* Connected to killer.sh (157.245.26.192) port 80 (#0)
> GET / HTTP/1.1
> Host: killer.sh
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< location: https://killer.sh/
< date: Mon, 04 Jan 2021 13:59:04 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host killer.sh left intact

Install the required tools.

apt-get install -y apparmor-utils

Create a profile for curl. The logs have not been analyzed yet, so nothing is allowed at this point.

root@cks-worker:~# aa-genprof curl
Writing updated profile for /usr/bin/curl.
Setting /usr/bin/curl to complain mode.

Before you begin, you may wish to check if a
profile already exists for the application you
wish to confine. See the following wiki page for
more information:
http://wiki.apparmor.net/index.php/Profiles

Profiling: /usr/bin/curl

Please start the application to be profiled in
another window and exercise its functionality now.

Once completed, select the "Scan" option below in
order to scan the system logs for AppArmor events.

For each AppArmor event, you will be given the
opportunity to choose whether the access should be
allowed or denied.

[(S)can system log for AppArmor events] / (F)inish
Setting /usr/bin/curl to enforce mode.

Reloaded AppArmor profiles in enforce mode.

Please consider contributing your new profile!
See the following wiki page for more information:
http://wiki.apparmor.net/index.php/Profiles

Finished generating profile for /usr/bin/curl.

Confirm with aa-status that the profile was created.

root@cks-worker:~# aa-status
apparmor module is loaded.
26 profiles are loaded.
21 profiles are in enforce mode.
   /sbin/dhclient
   /snap/snapd/10492/usr/lib/snapd/snap-confine
   /snap/snapd/10492/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/curl
   /usr/bin/lxc-start
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/chronyd
   /usr/sbin/tcpdump
   docker-default
   lxc-container-default
   lxc-container-default-cgns
   lxc-container-default-with-mounting
   lxc-container-default-with-nesting
   man_filter
   man_groff
   snap-update-ns.google-cloud-sdk
5 profiles are in complain mode.
   snap.google-cloud-sdk.anthoscli
   snap.google-cloud-sdk.bq
   snap.google-cloud-sdk.docker-credential-gcloud
   snap.google-cloud-sdk.gcloud
   snap.google-cloud-sdk.gsutil
6 processes have profiles defined.
6 processes are in enforce mode.
   /usr/sbin/chronyd (1395)
   docker-default (3566)
   docker-default (3581)
   docker-default (4844)
   docker-default (4989)
   docker-default (5048)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

The profile is created under the /etc/apparmor.d directory and looks like this.

root@cks-worker:~# cd /etc/apparmor.d/
root@cks-worker:/etc/apparmor.d# cat usr.bin.curl
# Last Modified: Mon Jan  4 14:02:20 2021
#include <tunables/global>

/usr/bin/curl {
  #include <abstractions/base>

  /lib/x86_64-linux-gnu/ld-*.so mr,
  /usr/bin/curl mr,

}

Confirm that running curl now fails. This produces log entries to analyze.

root@cks-worker:~# curl -v killer.sh
* Rebuilt URL to: killer.sh/
* Could not resolve host: killer.sh
* Closing connection 0
curl: (6) Could not resolve host: killer.sh
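
The denied operations can also be inspected directly before updating the profile; AppArmor denial entries in the kernel log contain apparmor="DENIED", so something like the following works (output omitted here):

grep 'apparmor="DENIED"' /var/log/syslog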

Analyze the logs and update the profile.

root@cks-worker:/etc/apparmor.d# aa-logprof
Reading log entries from /var/log/syslog.
Updating AppArmor profiles in /etc/apparmor.d.
Enforce-mode changes:

Profile:  /usr/bin/curl
Path:     /etc/ssl/openssl.cnf
New Mode: owner r
Severity: 2

 [1 - #include <abstractions/lxc/container-base>]
  2 - #include <abstractions/lxc/start-container>
  3 - #include <abstractions/openssl>
  4 - #include <abstractions/ssl_keys>
  5 - owner /etc/ssl/openssl.cnf r,
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / (O)wner permissions off / Abo(r)t / (F)inish
Adding #include <abstractions/lxc/container-base> to profile.
Deleted 2 previous matching profile entries.

= Changed Local Profiles =

The following local profiles were changed. Would you like to save them?

 [1 - /usr/bin/curl]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t
Writing updated profile for /usr/bin/curl.

Confirm that the profile contents have changed.

root@cks-worker:/etc/apparmor.d# cat usr.bin.curl
# Last Modified: Mon Jan  4 14:05:21 2021
#include <tunables/global>

/usr/bin/curl {
  #include <abstractions/base>
  #include <abstractions/lxc/container-base>

}

curl can now be executed successfully.

root@cks-worker:/etc/apparmor.d# curl -v killer.sh
* Rebuilt URL to: killer.sh/
*   Trying 157.245.26.192...
* TCP_NODELAY set
* Connected to killer.sh (157.245.26.192) port 80 (#0)
> GET / HTTP/1.1
> Host: killer.sh
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< location: https://killer.sh/
< date: Mon, 04 Jan 2021 14:06:22 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host killer.sh left intact

Docker

Create the profile. Place the following file under /etc/apparmor.d.

#include <tunables/global>


profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,

  deny network packet,

  file,
  umount,

  deny /bin/** wl,
  deny /boot/** wl,
  deny /dev/** wl,
  deny /etc/** wl,
  deny /home/** wl,
  deny /lib/** wl,
  deny /lib64/** wl,
  deny /media/** wl,
  deny /mnt/** wl,
  deny /opt/** wl,
  deny /proc/** wl,
  deny /root/** wl,
  deny /sbin/** wl,
  deny /srv/** wl,
  deny /tmp/** wl,
  deny /sys/** wl,
  deny /usr/** wl,

  audit /** w,

  /var/run/nginx.pid w,

  /usr/sbin/nginx ix,

  deny /bin/dash mrwklx,
  deny /bin/sh mrwklx,
  deny /usr/bin/top mrwklx,


  capability chown,
  capability dac_override,
  capability setuid,
  capability setgid,
  capability net_bind_service,

  deny @{PROC}/* w,   # deny write for all files directly in /proc (not in a subdir)
  # deny write to files not in /proc/<number>/** or /proc/sys/**
  deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
  deny @{PROC}/sys/[^k]** w,  # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
  deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w,  # deny everything except shm* in /proc/sys/kernel/
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/mem rwklx,
  deny @{PROC}/kmem rwklx,
  deny @{PROC}/kcore rwklx,

  deny mount,

  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/** rwklx,
  deny /sys/kernel/security/** rwklx,
}

Load the profile by specifying the file.

apparmor_parser /etc/apparmor.d/docker-nginx
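
If the profile file is edited again later, it can be reloaded in place with the -r (replace) option:

apparmor_parser -r /etc/apparmor.d/docker-nginx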

Confirm that it has been loaded.

root@cks-worker:/etc/apparmor.d# aa-status
apparmor module is loaded.
27 profiles are loaded.
22 profiles are in enforce mode.
   /sbin/dhclient
   /snap/snapd/10492/usr/lib/snapd/snap-confine
   /snap/snapd/10492/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/curl
   /usr/bin/lxc-start
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/chronyd
   /usr/sbin/tcpdump
   docker-default
   docker-nginx
   lxc-container-default
   lxc-container-default-cgns
   lxc-container-default-with-mounting
   lxc-container-default-with-nesting
   man_filter
   man_groff
   snap-update-ns.google-cloud-sdk
5 profiles are in complain mode.
   snap.google-cloud-sdk.anthoscli
   snap.google-cloud-sdk.bq
   snap.google-cloud-sdk.docker-credential-gcloud
   snap.google-cloud-sdk.gcloud
   snap.google-cloud-sdk.gsutil
6 processes have profiles defined.
6 processes are in enforce mode.
   /usr/sbin/chronyd (1395)
   docker-default (3566)
   docker-default (3581)
   docker-default (4844)
   docker-default (4989)
   docker-default (5048)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Run nginx with Docker normally.

root@cks-worker:/etc/apparmor.d# docker run --rm nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

Run nginx with Docker while explicitly specifying the default profile.

root@cks-worker:/etc/apparmor.d# docker run --rm --security-opt apparmor=docker-default nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

Run nginx with Docker, this time specifying the profile created earlier. It starts, but the entrypoint script is denied when it tries to write to /dev/null, because the profile denies writes under /dev/**.

root@cks-worker:/etc/apparmor.d# docker run --rm --security-opt apparmor=docker-nginx nginx
/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration
/docker-entrypoint.sh: 13: /docker-entrypoint.sh: cannot create /dev/null: Permission denied

Log in to the container from another terminal and confirm that access control is being enforced.

root@cks-worker:~# docker exec -it 6535dec3d7e5 sh
# touch /root/test
touch: cannot touch '/root/test': Permission denied
#

Kubernetes

The profile is specified with an annotation.
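
The actual secure.yaml used below is not shown, but it presumably has roughly this shape; localhost/hello is used for the first attempt and, for the second attempt, a loaded profile such as the docker-nginx one created above:

apiVersion: v1
kind: Pod
metadata:
  name: secure
  labels:
    run: secure
  annotations:
    container.apparmor.security.beta.kubernetes.io/secure: localhost/docker-nginx
spec:
  containers:
  - name: secure
    image: nginx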

Creating a Pod that specifies a profile that does not exist gets blocked.

root@cks-master:~# k apply -f secure.yaml
pod/secure created
root@cks-master:~# k get pod
NAME     READY   STATUS    RESTARTS   AGE
secure   0/1     Blocked   0          20s
root@cks-master:~# k describe pod secure
Name:         secure
Namespace:    default
Priority:     0
Node:         cks-worker/10.146.0.7
Start Time:   Mon, 04 Jan 2021 14:38:05 +0000
Labels:       run=secure
Annotations:  container.apparmor.security.beta.kubernetes.io/secure: localhost/hello
Status:       Pending
Reason:       AppArmor
Message:      Cannot enforce AppArmor: profile "hello" is not loaded
IP:
IPs:          <none>
Containers:
  secure:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       Blocked
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r2nz7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-r2nz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r2nz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned default/secure to cks-worker
root@cks-master:~# k delete pod secure
pod "secure" deleted

Try with a profile that does exist. Confirm that access is denied.

root@cks-master:~# k apply -f secure.yaml
pod/secure created
root@cks-master:~# k get pod
NAME     READY   STATUS              RESTARTS   AGE
secure   0/1     ContainerCreating   0          3s
root@cks-master:~# k get pod
NAME     READY   STATUS    RESTARTS   AGE
secure   1/1     Running   0          7s
root@cks-master:~# k exec -it secure -- sh
# touch /root/test
touch: cannot touch '/root/test': Permission denied
#

Enabling Kubernetes audit logs

A memo on enabling audit logs in a kubeadm cluster.

Reference links

Steps

Create the directory.

mkdir -p /etc/kubernetes/audit
cd /etc/kubernetes/audit

Create the policy file.

vi policy.yaml

An example that logs everything at the Metadata level.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

Modify the kube-apiserver manifest.

(snip)
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    - --audit-log-path=/var/log/audit.log
(snip)
    volumeMounts:
    - mountPath: /etc/kubernetes/audit
      name: k8s-audit
      readOnly: true
    - mountPath: /var/log/audit.log
      name: k8s-audit-log
      readOnly: false
(snip)
  volumes:
  - hostPath:
      path: /etc/kubernetes/audit
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /var/log/audit.log
      type: FileOrCreate
    name: k8s-audit-log
(snip)
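
Rotation of the audit log file can also be configured with standard kube-apiserver flags (not used in the steps above; the values are just examples):

    - --audit-log-maxage=7
    - --audit-log-maxbackup=2
    - --audit-log-maxsize=100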

Verification

Create a Secret and check the log. For create, is ResponseComplete the only stage recorded?

root@cks-master:/etc/kubernetes/manifests# k create secret generic very-secure --from-literal=user=admin
secret/very-secure created
root@cks-master:/etc/kubernetes/manifests# cat /var/log/audit.log | grep very-secure
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"5094c9ec-69bb-4d76-be4e-021287d2c370","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/secrets?fieldManager=kubectl-create","verb":"create","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":201},"requestReceivedTimestamp":"2021-01-03T21:15:42.512381Z","stageTimestamp":"2021-01-03T21:15:42.519040Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
root@cks-master:/etc/kubernetes/manifests#

Pretty-printing with jq shows the entry in the following format.
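
One way to produce this (the exact command used is not shown):

cat /var/log/audit.log | grep very-secure | jq .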

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "5094c9ec-69bb-4d76-be4e-021287d2c370",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/secrets?fieldManager=kubectl-create",
  "verb": "create",
  "user": {
    "username": "kubernetes-admin",
    "groups": [
      "system:masters",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "10.146.0.6"
  ],
  "userAgent": "kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a",
  "objectRef": {
    "resource": "secrets",
    "namespace": "default",
    "name": "very-secure",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "metadata": {},
    "code": 201
  },
  "requestReceivedTimestamp": "2021-01-03T21:15:42.512381Z",
  "stageTimestamp": "2021-01-03T21:15:42.519040Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": ""
  }
}

Edit the Secret. A RequestReceived and a ResponseComplete are recorded for the get, and a RequestReceived and a ResponseComplete for the patch.

root@cks-master:/etc/kubernetes/manifests# k edit secret very-secure
secret/very-secure edited
root@cks-master:/etc/kubernetes/manifests# cat /var/log/audit.log | grep very-secure
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"5094c9ec-69bb-4d76-be4e-021287d2c370","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/secrets?fieldManager=kubectl-create","verb":"create","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":201},"requestReceivedTimestamp":"2021-01-03T21:15:42.512381Z","stageTimestamp":"2021-01-03T21:15:42.519040Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cec16d21-a336-473f-9c61-aa56acfad7d0","stage":"RequestReceived","requestURI":"/api/v1/namespaces/default/secrets/very-secure","verb":"get","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"requestReceivedTimestamp":"2021-01-03T21:20:20.876437Z","stageTimestamp":"2021-01-03T21:20:20.876437Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cec16d21-a336-473f-9c61-aa56acfad7d0","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/secrets/very-secure","verb":"get","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-01-03T21:20:20.876437Z","stageTimestamp":"2021-01-03T21:20:20.878489Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"80f4192b-0533-43d9-90a3-e4f33aaee2c5","stage":"RequestReceived","requestURI":"/api/v1/namespaces/default/secrets/very-secure?fieldManager=kubectl-edit","verb":"patch","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"requestReceivedTimestamp":"2021-01-03T21:20:33.854711Z","stageTimestamp":"2021-01-03T21:20:33.854711Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"80f4192b-0533-43d9-90a3-e4f33aaee2c5","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/secrets/very-secure?fieldManager=kubectl-edit","verb":"patch","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.146.0.6"],"userAgent":"kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a","objectRef":{"resource":"secrets","namespace":"default","name":"very-secure","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-01-03T21:20:33.854711Z","stageTimestamp":"2021-01-03T21:20:33.860414Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
root@cks-master:/etc/kubernetes/manifests#

Customizing the policy

Create a policy with the following rules.

  • Do not record the RequestReceived stage
  • Do not record get, watch, and list requests
  • Record Secrets at the Metadata level
  • Record everything else at the RequestResponse level

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: None
    verbs: ["get", "watch", "list"]
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets"]
  - level: RequestResponse