Notes from working through the Cilium Getting Started guide on Kind, since I was having trouble understanding Cilium.
## Creating the cluster

Create `kind-config.yaml`:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
```
Create the cluster:
```
$ kind create cluster --config=kind-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.2) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
```
Check the nodes. They are NotReady because the default CNI is disabled and no CNI has been installed yet:
```
$ k get node
NAME                 STATUS     ROLES                  AGE     VERSION
kind-control-plane   NotReady   control-plane,master   5m4s    v1.20.2
kind-worker          NotReady   <none>                 4m24s   v1.20.2
kind-worker2         NotReady   <none>                 4m25s   v1.20.2
kind-worker3         NotReady   <none>                 4m24s   v1.20.2
```
## Installing Cilium

Download and extract the Cilium tarball:
```
curl -LO https://github.com/cilium/cilium/archive/master.tar.gz
tar xzf master.tar.gz
cd cilium-master/install/kubernetes
```
Pull the Cilium image and load it into the Kind cluster:
```
docker pull cilium/cilium:latest
kind load docker-image cilium/cilium:latest
```
Install with Helm from the local directory:
```
helm install cilium ./cilium \
  --namespace kube-system \
  --set nodeinit.enabled=true \
  --set kubeProxyReplacement=partial \
  --set hostServices.enabled=false \
  --set externalIPs.enabled=true \
  --set nodePort.enabled=true \
  --set hostPort.enabled=true \
  --set bpf.masquerade=false \
  --set image.pullPolicy=IfNotPresent \
  --set ipam.mode=kubernetes
```
Confirm that all Pods are running:
```
$ kubectl -n kube-system get pod
NAME                                         READY   STATUS    RESTARTS   AGE
cilium-25tq6                                 1/1     Running   0          3m12s
cilium-4csrs                                 1/1     Running   0          3m13s
cilium-8lsmq                                 1/1     Running   0          3m12s
cilium-node-init-7p5b7                       1/1     Running   1          3m13s
cilium-node-init-hzwf7                       1/1     Running   1          3m12s
cilium-node-init-rqtct                       1/1     Running   1          3m13s
cilium-node-init-zkx9b                       1/1     Running   1          3m13s
cilium-operator-748d8bf4f7-7gtpb             1/1     Running   0          3m12s
cilium-operator-748d8bf4f7-b6csm             1/1     Running   0          3m12s
cilium-xkqdz                                 1/1     Running   0          3m12s
coredns-74ff55c5b-7prvm                      1/1     Running   0          10m
coredns-74ff55c5b-grsh7                      1/1     Running   0          10m
etcd-kind-control-plane                      1/1     Running   0          10m
kube-apiserver-kind-control-plane            1/1     Running   0          10m
kube-controller-manager-kind-control-plane   1/1     Running   0          10m
kube-proxy-5t94q                             1/1     Running   0          9m54s
kube-proxy-mf2sm                             1/1     Running   0          10m
kube-proxy-t6vct                             1/1     Running   0          9m55s
kube-proxy-zttsg                             1/1     Running   0          9m54s
kube-scheduler-kind-control-plane            1/1     Running   0          10m
```
## Identity-Aware and HTTP-Aware Policy Enforcement
### Installing the demo app

Install the demo app:
```
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/minikube/http-sw-app.yaml
service/deathstar created
deployment.apps/deathstar created
pod/tiefighter created
pod/xwing created
```
Check the Pods and Services:
```
$ k get pod,svc -o wide --show-labels
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES   LABELS
pod/deathstar-c74d84667-cpd99   1/1     Running   0          7m41s   10.244.3.208   kind-worker    <none>           <none>            class=deathstar,org=empire,pod-template-hash=c74d84667
pod/deathstar-c74d84667-tfsn2   1/1     Running   0          7m41s   10.244.3.31    kind-worker    <none>           <none>            class=deathstar,org=empire,pod-template-hash=c74d84667
pod/tiefighter                  1/1     Running   0          7m41s   10.244.1.246   kind-worker2   <none>           <none>            class=tiefighter,org=empire
pod/xwing                       1/1     Running   0          7m41s   10.244.2.124   kind-worker3   <none>           <none>            class=xwing,org=alliance

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR                     LABELS
service/deathstar    ClusterIP   10.96.51.254   <none>        80/TCP    7m43s   class=deathstar,org=empire   <none>
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21m     <none>                       component=apiserver,provider=kubernetes
```
Check the Cilium Pods:
```
$ kubectl -n kube-system get pod -l k8s-app=cilium -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
cilium-25tq6   1/1     Running   0          12m   172.18.0.4   kind-worker2         <none>           <none>
cilium-4csrs   1/1     Running   0          12m   172.18.0.2   kind-control-plane   <none>           <none>
cilium-8lsmq   1/1     Running   0          12m   172.18.0.5   kind-worker3         <none>           <none>
cilium-xkqdz   1/1     Running   0          12m   172.18.0.3   kind-worker          <none>           <none>
```
Each Pod becomes a Cilium endpoint. The `cilium endpoint list` command shows the state of the endpoints:
```
$ kubectl -n kube-system exec cilium-25tq6 -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
402        Disabled           Disabled          4          reserved:health                                           10.244.1.160   ready
609        Disabled           Disabled          16260      k8s:class=tiefighter                                      10.244.1.246   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
1776       Disabled           Disabled          1          reserved:host                                                            ready

$ kubectl -n kube-system exec cilium-4csrs -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
2089       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                ready
                                                           k8s:node-role.kubernetes.io/master
                                                           reserved:host
3500       Disabled           Disabled          4          reserved:health                                           10.244.0.151   ready

$ kubectl -n kube-system exec cilium-8lsmq -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
94         Disabled           Disabled          6547       k8s:class=xwing                                           10.244.2.124   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=alliance
291        Disabled           Disabled          31776      k8s:io.cilium.k8s.policy.cluster=default                  10.244.2.193   ready
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns
1533       Disabled           Disabled          8324       k8s:app=local-path-provisioner                            10.244.2.176   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
                                                           k8s:io.kubernetes.pod.namespace=local-path-storage
1639       Disabled           Disabled          31776      k8s:io.cilium.k8s.policy.cluster=default                  10.244.2.70    ready
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns
1825       Disabled           Disabled          4          reserved:health                                           10.244.2.177   ready
3168       Disabled           Disabled          1          reserved:host                                                            ready

$ kubectl -n kube-system exec cilium-xkqdz -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
302        Disabled           Disabled          48677      k8s:class=deathstar                                       10.244.3.31    ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
587        Disabled           Disabled          1          reserved:host                                                            ready
1699       Disabled           Disabled          4          reserved:health                                           10.244.3.1     ready
2742       Disabled           Disabled          48677      k8s:class=deathstar                                       10.244.3.208   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
```
No Network Policy has been defined yet, so ENFORCEMENT is Disabled for every endpoint.
Check the current behavior: both `xwing` and `tiefighter` can reach the `deathstar` Service (and its Pods).
```
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
```
### Applying an L3/L4 policy
Allow access to the `deathstar` Service (and its Pods) only from Pods with the `org=empire` label:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```
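For comparison, roughly the same L3/L4 restriction could be expressed as a vanilla Kubernetes NetworkPolicy (a sketch; the resource name `rule1-l4` is made up here, and the CiliumNetworkPolicy form is still needed for the identity-aware L7 rules used later):

```yaml
# Sketch: vanilla NetworkPolicy equivalent of the L3/L4 part of rule1.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rule1-l4        # hypothetical name for this comparison
spec:
  podSelector:          # corresponds to endpointSelector
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - from:
    - podSelector:      # corresponds to fromEndpoints
        matchLabels:
          org: empire
    ports:
    - protocol: TCP
      port: 80
```

Cilium enforces plain NetworkPolicy resources too, but only CiliumNetworkPolicy can carry the HTTP-aware rules added in the next section.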
```
$ k apply -f rule1-l4.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
```
Verify. The request from `xwing` is now dropped, so curl hangs until interrupted with Ctrl-C:
```
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
^C
```
Confirm that ingress ENFORCEMENT is now Enabled for the `deathstar` Pods:
```
$ kubectl -n kube-system exec cilium-25tq6 -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
402        Disabled           Disabled          4          reserved:health                                           10.244.1.160   ready
609        Disabled           Disabled          16260      k8s:class=tiefighter                                      10.244.1.246   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
1776       Disabled           Disabled          1          reserved:host                                                            ready

$ kubectl -n kube-system exec cilium-4csrs -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
2089       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                ready
                                                           k8s:node-role.kubernetes.io/master
                                                           reserved:host
3500       Disabled           Disabled          4          reserved:health                                           10.244.0.151   ready

$ kubectl -n kube-system exec cilium-8lsmq -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
94         Disabled           Disabled          6547       k8s:class=xwing                                           10.244.2.124   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=alliance
291        Disabled           Disabled          31776      k8s:io.cilium.k8s.policy.cluster=default                  10.244.2.193   ready
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns
1533       Disabled           Disabled          8324       k8s:app=local-path-provisioner                            10.244.2.176   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
                                                           k8s:io.kubernetes.pod.namespace=local-path-storage
1639       Disabled           Disabled          31776      k8s:io.cilium.k8s.policy.cluster=default                  10.244.2.70    ready
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns
1825       Disabled           Disabled          4          reserved:health                                           10.244.2.177   ready
3168       Disabled           Disabled          1          reserved:host                                                            ready

$ kubectl -n kube-system exec cilium-xkqdz -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
302        Enabled            Disabled          48677      k8s:class=deathstar                                       10.244.3.31    ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
587        Disabled           Disabled          1          reserved:host                                                            ready
1699       Disabled           Disabled          4          reserved:health                                           10.244.3.1     ready
2742       Enabled            Disabled          48677      k8s:class=deathstar                                       10.244.3.208   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
```
### Applying an L7 HTTP policy
Suppose the `deathstar` Service (and its Pods) exposes a dangerous API. Call it to see what happens:
```
$ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded

goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/
        temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/
        temp/main.go:5 +0x85
```
Allow POST to `/v1/request-landing`, but deny PUT to `/v1/exhaust-port`:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
```
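The `rules.http` section above matches on method and path; Cilium's HTTP rules can also match other request properties such as headers. A sketch of an extended rule (the header name below is invented for illustration):

```yaml
# Sketch: the same landing rule, additionally requiring a request header.
# "X-Empire-Clearance" is a made-up header name, not part of the demo app.
rules:
  http:
  - method: "POST"
    path: "/v1/request-landing"
    headers:
    - "X-Empire-Clearance: true"
```

Requests that match the port but fail the header check would be rejected by the L7 proxy just like a disallowed path.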
```
$ k apply -f rule1-l7.yaml
ciliumnetworkpolicy.cilium.io/rule1 configured
```
Verify. Note that the L7 deny is answered by the proxy with an HTTP 403 ("Access denied") rather than a dropped packet:
```
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
```
The policy can be inspected with `cilium policy get`:
```
$ kubectl -n kube-system exec cilium-xkqdz -- cilium policy get
[
  {
    "endpointSelector": {
      "matchLabels": {
        "any:class": "deathstar",
        "any:org": "empire",
        "k8s:io.kubernetes.pod.namespace": "default"
      }
    },
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "any:org": "empire",
              "k8s:io.kubernetes.pod.namespace": "default"
            }
          }
        ],
        "toPorts": [
          {
            "ports": [
              {
                "port": "80",
                "protocol": "TCP"
              }
            ],
            "rules": {
              "http": [
                {
                  "path": "/v1/request-landing",
                  "method": "POST"
                }
              ]
            }
          }
        ]
      }
    ],
    "labels": [
      {
        "key": "io.cilium.k8s.policy.derived-from",
        "value": "CiliumNetworkPolicy",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.name",
        "value": "rule1",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.namespace",
        "value": "default",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.uid",
        "value": "ba8b809f-fc18-4296-8e1d-596b38921b04",
        "source": "k8s"
      }
    ],
    "description": "L7 policy to restrict access to specific HTTP call"
  }
]
Revision: 4
```
Traffic can be monitored with `cilium monitor`; note that it only observes the node its agent Pod runs on:
```
$ kubectl -n kube-system exec -it cilium-xkqdz -- cilium monitor -v --type l7
Listening for events on 4 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
<- Request http from 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) to 302 ([k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default]), identity 16260->48677, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port => 403
<- Request http from 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) to 302 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default]), identity 16260->48677, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port => 403
<- Request http from 0 ([k8s:org=empire k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default]) to 302 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default]), identity 16260->48677, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port => 403
^C
Received an interrupt, disconnecting from monitor...
```
### Cleanup
```
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/minikube/http-sw-app.yaml
service "deathstar" deleted
deployment.apps "deathstar" deleted
pod "tiefighter" deleted
pod "xwing" deleted
$ kubectl delete cnp rule1
ciliumnetworkpolicy.cilium.io "rule1" deleted
```
## Locking down external access with DNS-based policies
### Installing the demo app

Install the demo app; this one is different from the previous section:
```
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes-dns/dns-sw-app.yaml
pod/mediabot created
```
Check the Pod:
```
$ k get pod -o wide --show-labels
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES   LABELS
mediabot   1/1     Running   0          58s   10.244.1.211   kind-worker2   <none>           <none>            class=mediabot,org=empire
```
### Applying a DNS policy
Create a policy that allows egress to `api.twitter.com`. DNS lookups to kube-dns must also be allowed, or name resolution itself would be blocked:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: "api.twitter.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
```
```
$ k apply -f fqdn1.yaml
ciliumnetworkpolicy.cilium.io/fqdn created
```
Verify. `api.twitter.com` is reachable, while `help.twitter.com` is blocked:
```
$ kubectl exec -it mediabot -- curl -sL https://api.twitter.com
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
...
</html>
$ kubectl exec -it mediabot -- curl -sL https://help.twitter.com
^Ccommand terminated with exit code 130
```
### DNS policies with patterns
`matchName` is an exact match, so use `matchPattern` instead:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.twitter.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
```
```
$ k apply -f fqdn2.yaml
ciliumnetworkpolicy.cilium.io/fqdn configured
```
Verify. Note that `twitter.com` itself is not allowed:
```
$ kubectl exec -it mediabot -- curl -sL https://help.twitter.com
<!doctype html>
...
$ kubectl exec -it mediabot -- curl -sL https://about.twitter.com
<!DOCTYPE html>
...
$ kubectl exec -it mediabot -- curl -sL https://twitter.com
^Ccommand terminated with exit code 130
```
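Since `*.twitter.com` does not cover the apex domain, a `toFQDNs` section that should also allow `twitter.com` can combine both selectors. A sketch (not part of the tutorial's fqdn2.yaml):

```yaml
# Sketch: allow subdomains via a pattern plus the apex domain via an exact name.
egress:
- toFQDNs:
  - matchPattern: "*.twitter.com"   # subdomains such as help.twitter.com
  - matchName: "twitter.com"        # the apex domain, not matched by the pattern
```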
### Combining DNS, port, and L7 rules
Restrict access to port 443 only:
```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.twitter.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
```
```
$ k apply -f fqdn3.yaml
ciliumnetworkpolicy.cilium.io/fqdn configured
```
Verify. HTTPS (443) works, but plain HTTP (80) is blocked:
```
$ kubectl exec -it mediabot -- curl https://help.twitter.com
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
...
$ kubectl exec -it mediabot -- curl http://help.twitter.com
^Ccommand terminated with exit code 130
```
### Cleanup
```
$ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes-dns/dns-sw-app.yaml
pod "mediabot" deleted
$ kubectl delete cnp fqdn
ciliumnetworkpolicy.cilium.io "fqdn" deleted
```