Let's try using Cilium as the network policy engine on EKS. There are two approaches: using Cilium as the CNI itself, or using it only as the network policy engine; this post takes the latter approach.
| Component | Version | Notes |
| --- | --- | --- |
| eksctl | 0.36.2 | |
| Kubernetes version | 1.18 | |
| Platform version | eks.3 | |
| VPC CNI Plugin | 1.7.5 | |
| Cilium | 1.9.3 | |
Create a cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cilium
  region: ap-northeast-1
vpc:
  cidr: "10.1.0.0/16"
availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c
managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
    privateNetworking: true
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
eksctl create cluster -f cilium.yaml
Installing Cilium
Add the Helm repository.
helm repo add cilium https://helm.cilium.io/
Install Cilium with Helm.
helm install cilium cilium/cilium --version 1.9.3 \
--namespace kube-system \
--set cni.chainingMode=aws-cni \
--set masquerade=false \
--set tunnel=disabled \
--set nodeinit.enabled=true
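Instead of repeating the `--set` flags, the same chaining configuration can be kept in a values file. This is a mechanical translation of the flags above; the file name `values.yaml` is my own choice.

```yaml
# values.yaml — equivalent of the --set flags above
cni:
  chainingMode: aws-cni
masquerade: false
tunnel: disabled
nodeinit:
  enabled: true
```

It would then be applied with `helm install cilium cilium/cilium --version 1.9.3 --namespace kube-system -f values.yaml`.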
Confirm that Cilium is installed.
$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-p6k9m 1/1 Running 0 41h
kube-system aws-node-qttwr 1/1 Running 0 41h
kube-system cilium-dffdz 1/1 Running 0 4m46s
kube-system cilium-node-init-f6snw 1/1 Running 0 4m46s
kube-system cilium-node-init-f72cr 1/1 Running 0 4m46s
kube-system cilium-operator-db487bc5b-2m86x 1/1 Running 0 4m46s
kube-system cilium-operator-db487bc5b-8s4sm 1/1 Running 0 4m46s
kube-system cilium-smm9f 1/1 Running 0 4m46s
kube-system coredns-86f7d88d77-dl769 0/1 ContainerCreating 0 4m14s
kube-system coredns-86f7d88d77-rt84l 0/1 ContainerCreating 0 3m44s
kube-system kube-proxy-59krv 1/1 Running 0 41h
kube-system kube-proxy-mh4dv 1/1 Running 0 41h
Cilium started, but CoreDNS no longer comes up. Describing the Pod shows a Warning like the following.
Warning FailedCreatePodSandBox 3m2s (x4 over 3m5s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0" network for pod "coredns-86f7d88d77-dl769": networkPlugin cni failed to set up pod "coredns-86f7d88d77-dl769_kube-system" network: netplugin failed but error parsing its diagnostic message "{\"level\":\"debug\",\"ts\":\"2021-01-29T02:41:14.743Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:123\",\"msg\":\"MTU not set, defaulting to 9001\"}\n{\"level\":\"info\",\"ts\":\"2021-01-29T02:41:14.743Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"Received CNI add request: ContainerID(8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0) Netns(/proc/32025/ns/net) IfName(eth0) Args(IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system;K8S_POD_NAME=coredns-86f7d88d77-dl769;K8S_POD_INFRA_CONTAINER_ID=8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0) Path(/opt/cni/bin) argsStdinData({\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"name\\\":\\\"aws-cni\\\",\\\"type\\\":\\\"aws-cni\\\",\\\"vethPrefix\\\":\\\"eni\\\"})\"}\n{\"level\":\"debug\",\"ts\":\"2021-01-29T02:41:14.744Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"MTU value set is 9001:\"}\n{\"level\":\"error\",\"ts\":\"2021-01-29T02:41:14.745Z\",\"caller\":\"routed-eni-cni-plugin/cni.go:117\",\"msg\":\"Failed to assign an IP address to container 8bd40e1f66457a69ed99969214e248f4dbc97d499683ef612bbe30b7137a1ec0\"}\n{\n \"code\": 100,\n \"msg\": \"add cmd: failed to assign an IP address to container\"\n}": invalid character '{' after top-level value
There is an issue that may be related.
Check the versions of the VPC CNI Plugin and Cilium.
$ kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
amazon-k8s-cni-init:v1.7.5-eksbuild.1
amazon-k8s-cni:v1.7.5-eksbuild.1
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cilium kube-system 1 2021-01-29 11:40:28.449879 +0900 JST deployed cilium-1.9.3 1.9.3
Try editing by hand, as described in the issue.
Before the edit, the files look like this.
[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/10-aws.conflist
{
"cniVersion": "0.3.1",
"name": "aws-cni",
"plugins": [
{
"name": "aws-cni",
"type": "aws-cni",
"vethPrefix": "eni",
"mtu": "9001",
"pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
"pluginLogLevel": "DEBUG"
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"snat": true
}
]
}[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/05-cilium.conflist
{
"cniVersion": "0.3.1",
"name": "aws-cni",
"plugins": [
{
"name": "aws-cni",
"type": "aws-cni",
"vethPrefix": "eni"
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"snat": true
},
{
"name": "cilium",
"type": "cilium-cni",
"enable-debug": false
}
]
}
[root@ip-10-1-90-101 ~]#
Edit the second file (05-cilium.conflist).
[root@ip-10-1-90-101 ~]# cat /etc/cni/net.d/05-cilium.conflist
{
"cniVersion": "0.3.1",
"name": "aws-cni",
"plugins": [
{
"name": "aws-cni",
"type": "aws-cni",
"vethPrefix": "eni",
"mtu": "9001",
"pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
"pluginLogLevel": "Debug"
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"snat": true
},
{
"name": "cilium",
"type": "cilium-cni",
"enable-debug": false
}
]
}
[root@ip-10-1-90-101 ~]#
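The manual edit above can also be scripted. A minimal sketch, assuming the two conflist files shown above: it copies the MTU and plugin-log settings from the aws-cni entry of 10-aws.conflist into the aws-cni entry of the Cilium-generated 05-cilium.conflist (the helper names here are my own, not part of any tool).

```python
import json

def merge_aws_cni_settings(aws_conf: dict, cilium_conf: dict) -> dict:
    """Copy the aws-cni plugin settings (mtu, pluginLogFile, pluginLogLevel)
    from the 10-aws.conflist dict into the 05-cilium.conflist dict."""
    src = next(p for p in aws_conf["plugins"] if p["type"] == "aws-cni")
    dst = next(p for p in cilium_conf["plugins"] if p["type"] == "aws-cni")
    for key in ("mtu", "pluginLogFile", "pluginLogLevel"):
        if key in src:
            dst[key] = src[key]
    return cilium_conf

def merge_files(aws_path: str, cilium_path: str) -> None:
    """Read both conflists, merge, and rewrite the Cilium one in place."""
    with open(aws_path) as f:
        aws_conf = json.load(f)
    with open(cilium_path) as f:
        cilium_conf = json.load(f)
    merged = merge_aws_cni_settings(aws_conf, cilium_conf)
    with open(cilium_path, "w") as f:
        json.dump(merged, f, indent=4)
```

This would be run on each node, e.g. `merge_files("/etc/cni/net.d/10-aws.conflist", "/etc/cni/net.d/05-cilium.conflist")`.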
Confirm that CoreDNS is now running.
$ k get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-p6k9m 1/1 Running 0 41h
kube-system aws-node-qttwr 1/1 Running 0 41h
kube-system cilium-dffdz 1/1 Running 0 35m
kube-system cilium-node-init-f6snw 1/1 Running 0 35m
kube-system cilium-node-init-f72cr 1/1 Running 0 35m
kube-system cilium-operator-db487bc5b-2m86x 1/1 Running 0 35m
kube-system cilium-operator-db487bc5b-8s4sm 1/1 Running 0 35m
kube-system cilium-smm9f 1/1 Running 0 35m
kube-system coredns-86f7d88d77-dl769 1/1 Running 0 35m
kube-system coredns-86f7d88d77-rt84l 1/1 Running 0 34m
kube-system kube-proxy-59krv 1/1 Running 0 41h
kube-system kube-proxy-mh4dv 1/1 Running 0 41h
After installing Cilium, existing Pods must be restarted before Cilium can enforce policy on them; CoreDNS appears to have been restarted automatically.
Testing
Deploy the connectivity test.
$ kubectl create ns cilium-test
namespace/cilium-test created
$ kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
Pods managed by Cilium can be seen as CiliumEndpoint (cep) resources.
$ kubectl get cep -A
NAMESPACE NAME ENDPOINT ID IDENTITY ID INGRESS ENFORCEMENT EGRESS ENFORCEMENT VISIBILITY POLICY ENDPOINT STATE IPV4 IPV6
cilium-test echo-a-57cbbd9b8b-mxrfs 3296 5791 ready 10.1.67.236
cilium-test echo-b-6db5fc8ff8-rkm9w 2571 1204 ready 10.1.96.204
cilium-test pod-to-a-648fd74787-4gxhr 3667 57208 ready 10.1.85.107
cilium-test pod-to-a-allowed-cnp-7776c879f-qltmn 3192 34348 ready 10.1.112.226
cilium-test pod-to-a-denied-cnp-b5ff897c7-kwz2m 919 45310 ready 10.1.76.211
cilium-test pod-to-b-intra-node-nodeport-6546644d59-6qfc7 1334 44803 ready 10.1.112.247
cilium-test pod-to-b-multi-node-clusterip-7d54c74c5f-7g72l 272 58159 ready 10.1.91.91
cilium-test pod-to-b-multi-node-headless-76db68d547-kkg7f 3158 55761 ready 10.1.65.159
cilium-test pod-to-b-multi-node-nodeport-7496df84d7-qtxl2 2704 20459 ready 10.1.94.155
cilium-test pod-to-external-1111-6d4f9d9645-pnlgv 1517 23485 ready 10.1.104.150
cilium-test pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr 157 43510 ready 10.1.65.164
kube-system coredns-86f7d88d77-dl769 195 3617 ready 10.1.98.80
kube-system coredns-86f7d88d77-rt84l 395 3617 ready 10.1.87.50
Check the Cilium CRDs.
$ k api-resources | grep -e NAME -e cilium
NAME SHORTNAMES APIVERSION NAMESPACED KIND
ciliumclusterwidenetworkpolicies ccnp cilium.io/v2 false CiliumClusterwideNetworkPolicy
ciliumendpoints cep,ciliumep cilium.io/v2 true CiliumEndpoint
ciliumexternalworkloads cew cilium.io/v2 false CiliumExternalWorkload
ciliumidentities ciliumid cilium.io/v2 false CiliumIdentity
ciliumlocalredirectpolicies clrp cilium.io/v2 true CiliumLocalRedirectPolicy
ciliumnetworkpolicies cnp,ciliumnp cilium.io/v2 true CiliumNetworkPolicy
ciliumnodes cn,ciliumn cilium.io/v2 false CiliumNode
Check the policies created by the test.
$ k -n cilium-test get cnp
NAME AGE
pod-to-a-allowed-cnp 7m43s
pod-to-a-denied-cnp 7m43s
pod-to-external-fqdn-allow-google-cnp 7m43s
Examine the first policy.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-a-allowed-cnp
    quarantine: "false"
    topology: any
    traffic: internal
    type: autocheck
  name: pod-to-a-allowed-cnp
  namespace: cilium-test
spec:
  egress:
  - toEndpoints:
    - matchLabels:
        name: echo-a
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
  endpointSelector:
    matchLabels:
      name: pod-to-a-allowed-cnp
- Targets Pods with the label name=pod-to-a-allowed-cnp
- Allows outbound TCP connections to port 8080 on endpoints labeled name=echo-a
- Allows DNS queries
Verify. Name resolution works.
$ k exec -it pod-to-a-allowed-cnp-7776c879f-qltmn -- sh
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name: kubernetes.default
Address 1: 172.20.0.1 kubernetes.default.svc.cluster.local
It can reach port 8080 on the Pod labeled name=echo-a.
/ # curl 10.1.67.236:8080
<html>
<head>
(snipped)
It cannot connect to other Pods that are likewise listening on 8080.
$ k exec -it pod-to-a-allowed-cnp-7776c879f-qltmn -- sh
/ # curl 10.1.96.204:8080
Examine the second policy.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-a-denied-cnp
    quarantine: "false"
    topology: any
    traffic: internal
    type: autocheck
  name: pod-to-a-denied-cnp
  namespace: cilium-test
spec:
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
  endpointSelector:
    matchLabels:
      name: pod-to-a-denied-cnp
- Targets Pods with the label name=pod-to-a-denied-cnp
- Allows DNS queries (there is no rule allowing traffic to echo-a, so that traffic falls under the default deny)
Verify. Name resolution works.
$ k exec -it pod-to-a-denied-cnp-b5ff897c7-kwz2m -- sh
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name: kubernetes.default
Address 1: 172.20.0.1 kubernetes.default.svc.cluster.local
It cannot reach port 8080 on the Pod labeled name=echo-a.
/ # curl 10.1.67.236:8080
Examine the third policy.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  labels:
    component: policy-check
    name: pod-to-external-fqdn-allow-google-cnp
    quarantine: "false"
    topology: any
    traffic: external
    type: autocheck
  name: pod-to-external-fqdn-allow-google-cnp
  namespace: cilium-test
spec:
  egress:
  - toFQDNs:
    - matchPattern: '*.google.com'
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: '*'
  - toEndpoints:
    - matchLabels:
        k8s:dns.operator.openshift.io/daemonset-dns: default
        k8s:io.kubernetes.pod.namespace: openshift-dns
    toPorts:
    - ports:
      - port: "5353"
        protocol: UDP
      rules:
        dns:
        - matchPattern: '*'
  endpointSelector:
    matchLabels:
      name: pod-to-external-fqdn-allow-google-cnp
- Targets Pods with the label name=pod-to-external-fqdn-allow-google-cnp
- Allows outbound connections to *.google.com
- Allows DNS queries (the dns rules let Cilium inspect lookups so toFQDNs can learn the resolved IPs)
Verify. google.com is reachable.
$ k exec -it pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr -- sh
/ # curl http://www.google.com/
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="ja">
(snipped)
google.co.jp, on the other hand, is not reachable.
/ # curl http://www.google.co.jp/
Installing Hubble
Install Hubble, the observability tool. See also the following link.
export CILIUM_NAMESPACE=kube-system
helm upgrade cilium cilium/cilium --version 1.9.3 \
--namespace $CILIUM_NAMESPACE \
--reuse-values \
--set hubble.listenAddress=":4244" \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
Confirm that the Hubble Pods are running.
$ k get po -n kube-system
NAME READY STATUS RESTARTS AGE
aws-node-p6k9m 1/1 Running 0 4d20h
aws-node-qttwr 1/1 Running 0 4d20h
cilium-dffdz 1/1 Running 0 3d3h
cilium-node-init-f6snw 1/1 Running 0 3d3h
cilium-node-init-f72cr 1/1 Running 0 3d3h
cilium-operator-db487bc5b-2m86x 1/1 Running 0 3d3h
cilium-operator-db487bc5b-8s4sm 1/1 Running 0 3d3h
cilium-smm9f 1/1 Running 0 3d3h
coredns-86f7d88d77-dl769 1/1 Running 0 3d3h
coredns-86f7d88d77-rt84l 1/1 Running 0 3d3h
hubble-relay-f489fcbbb-8xg6b 1/1 Running 0 3m44s
hubble-ui-769fb95577-g87xw 3/3 Running 0 3m44s
kube-proxy-59krv 1/1 Running 0 4d20h
kube-proxy-mh4dv 1/1 Running 0 4d20h
Access the Hubble UI via port-forward.
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-ui --address 0.0.0.0 --address :: 12000:80
Traffic is visualized. This looks great!
There is also a Hubble CLI.
Port-forward to the Hubble Relay.
kubectl port-forward -n $CILIUM_NAMESPACE svc/hubble-relay --address 0.0.0.0 --address :: 4245:80
Confirm that the endpoint is reachable.
$ hubble --server localhost:4245 status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8192/8192 (100.00%)
Flows/s: 57.08
Connected Nodes: 2/2
Observe traffic across the whole cluster.
$ hubble --server localhost:4245 observe
TIMESTAMP SOURCE DESTINATION TYPE VERDICT SUMMARY
Feb 1 06:22:34.295 kube-system/coredns-86f7d88d77-dl769:35746 10.1.0.2:53 to-stack FORWARDED UDP
Feb 1 06:22:34.295 10.1.0.2:53 kube-system/coredns-86f7d88d77-dl769:35746 to-endpoint FORWARDED UDP
Feb 1 06:22:34.295 kube-system/coredns-86f7d88d77-dl769:53 10.1.90.101:53224 to-stack FORWARDED UDP
Feb 1 06:22:34.299 cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414 www.google.com:80 to-stack FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.378 www.google.com:80 cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414 to-endpoint FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.378 cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414 www.google.com:80 to-stack FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.380 www.google.com:80 cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414 to-endpoint FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.380 cilium-test/pod-to-external-fqdn-allow-google-cnp-5bc496897c-cspwr:57414 www.google.com:80 to-stack FORWARDED TCP Flags: ACK
Feb 1 06:22:34.691 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 kube-system/coredns-86f7d88d77-dl769:53 L3-L4 FORWARDED UDP
Feb 1 06:22:34.691 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 kube-system/coredns-86f7d88d77-dl769:53 to-stack FORWARDED UDP
Feb 1 06:22:34.691 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 kube-system/coredns-86f7d88d77-dl769:53 to-endpoint FORWARDED UDP
Feb 1 06:22:34.691 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 kube-system/coredns-86f7d88d77-dl769:53 to-stack FORWARDED UDP
Feb 1 06:22:34.691 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 kube-system/coredns-86f7d88d77-dl769:53 to-endpoint FORWARDED UDP
Feb 1 06:22:34.692 kube-system/coredns-86f7d88d77-dl769:53 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 to-stack FORWARDED UDP
Feb 1 06:22:34.692 kube-system/coredns-86f7d88d77-dl769:53 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 to-endpoint FORWARDED UDP
Feb 1 06:22:34.692 kube-system/coredns-86f7d88d77-dl769:53 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 to-stack FORWARDED UDP
Feb 1 06:22:34.692 kube-system/coredns-86f7d88d77-dl769:53 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:33934 to-endpoint FORWARDED UDP
Feb 1 06:22:34.692 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 L3-L4 FORWARDED TCP Flags: SYN
Feb 1 06:22:34.692 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-stack FORWARDED TCP Flags: SYN
Feb 1 06:22:34.693 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-endpoint FORWARDED TCP Flags: SYN
Feb 1 06:22:34.693 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-stack FORWARDED TCP Flags: SYN, ACK
Feb 1 06:22:34.694 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-endpoint FORWARDED TCP Flags: SYN, ACK
Feb 1 06:22:34.694 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-stack FORWARDED TCP Flags: ACK
Feb 1 06:22:34.694 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-stack FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.695 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-endpoint FORWARDED TCP Flags: ACK
Feb 1 06:22:34.695 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-endpoint FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.698 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-stack FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.699 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-endpoint FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:34.699 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-stack FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.700 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-endpoint FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.700 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-stack FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.701 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 to-endpoint FORWARDED TCP Flags: ACK, FIN
Feb 1 06:22:34.702 cilium-test/pod-to-a-allowed-cnp-7776c879f-qltmn:40020 cilium-test/echo-a-57cbbd9b8b-mxrfs:8080 to-endpoint FORWARDED TCP Flags: ACK
Feb 1 06:22:34.989 10.1.84.176:443 kube-system/coredns-86f7d88d77-rt84l:41758 to-endpoint FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:35.456 10.1.90.101:47900 kube-system/coredns-86f7d88d77-rt84l:8080 to-endpoint FORWARDED TCP Flags: SYN
Feb 1 06:22:35.456 kube-system/coredns-86f7d88d77-rt84l:8080 10.1.90.101:47900 to-stack FORWARDED TCP Flags: SYN, ACK
Feb 1 06:22:35.456 10.1.90.101:47900 kube-system/coredns-86f7d88d77-rt84l:8080 to-endpoint FORWARDED TCP Flags: ACK
Feb 1 06:22:35.456 10.1.90.101:47900 kube-system/coredns-86f7d88d77-rt84l:8080 to-endpoint FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:35.456 kube-system/coredns-86f7d88d77-rt84l:8080 10.1.90.101:47900 to-stack FORWARDED TCP Flags: ACK, PSH
Feb 1 06:22:35.457 kube-system/coredns-86f7d88d77-rt84l:8080 10.1.90.101:47900 to-stack FORWARDED TCP Flags: ACK, FIN
Digging into some questions
First, delete the test application.
kubectl delete -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
kubectl delete ns cilium-test
Does plain NetworkPolicy work?
The Cilium manual shows CiliumNetworkPolicy examples, but first check whether plain Kubernetes NetworkPolicy works at all.
Start two Pods.
$ k run pod1 --image=nginx
pod/pod1 created
$ k run pod2 --image=nginx
pod/pod2 created
$ k label pod pod1 app=pod1
pod/pod1 labeled
$ k label pod pod2 app=pod2
pod/pod2 labeled
Confirm that pod1 can reach pod2.
$ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1 1/1 Running 0 9m9s 10.1.92.166 ip-10-1-66-141.ap-northeast-1.compute.internal <none> <none>
pod2 1/1 Running 0 8m58s 10.1.85.107 ip-10-1-66-141.ap-northeast-1.compute.internal <none> <none>
$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Create a NetworkPolicy that blocks all traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
$ k apply -f default-deny-all-np.yaml
networkpolicy.networking.k8s.io/default-deny-all created
Confirm that the traffic is blocked.
$ k exec -it pod1 -- curl 10.1.85.107
So plain Kubernetes NetworkPolicy works as well.
What happens when both are defined?
With Calico, deny rules and policy ordering (priority) are available. Cilium also has deny policies, but they are a beta feature.
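For reference, a Cilium deny rule looks roughly like the following. This is only a sketch, not tested here: the ingressDeny field is the beta deny mechanism introduced in Cilium 1.9, and the policy name is my own.

```yaml
# Sketch only: deny policies are beta in Cilium 1.9
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deny-from-pod1
spec:
  endpointSelector:
    matchLabels:
      app: pod2
  ingressDeny:
  - fromEndpoints:
    - matchLabels:
        app: pod1
```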
With the Kubernetes NetworkPolicy deny-all from earlier still in place, allow pod1-to-pod2 traffic with CiliumNetworkPolicy.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod1-egress
spec:
  endpointSelector:
    matchLabels:
      app: pod1
  egress:
  - toEndpoints:
    - matchLabels:
        app: pod2

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod2-ingress
spec:
  endpointSelector:
    matchLabels:
      app: pod2
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: pod1
$ k apply -f pod1-egress-cnp.yaml
ciliumnetworkpolicy.cilium.io/pod1-egress created
$ k apply -f pod2-ingress-cnp.yaml
ciliumnetworkpolicy.cilium.io/pod2-ingress created
Check that it works.
$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Combining the two works as well.
To be sure, try it the other way around. Delete the NetworkPolicy and the CiliumNetworkPolicies.
k delete netpol default-deny-all
k delete cnp pod1-egress
k delete cnp pod2-ingress
Create a CiliumNetworkPolicy that denies everything.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-deny-all
spec:
  endpointSelector: {}
  ingress:
  - {}
  egress:
  - {}
$ k apply -f default-deny-all-cnp.yaml
ciliumnetworkpolicy.cilium.io/default-deny-all created
Confirm that the traffic is blocked.
$ k exec -it pod1 -- curl 10.1.85.107
Now allow pod1-to-pod2 traffic with Kubernetes NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod1-egress
spec:
  podSelector:
    matchLabels:
      app: pod1
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: pod2

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod2-ingress
spec:
  podSelector:
    matchLabels:
      app: pod2
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: pod1
$ k apply -f pod1-egress-np.yaml
networkpolicy.networking.k8s.io/pod1-egress created
$ k apply -f pod2-ingress-np.yaml
networkpolicy.networking.k8s.io/pod2-ingress created
Confirm connectivity.
$ k exec -it pod1 -- curl 10.1.85.107
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The result looks like the union of what both kinds of policy allow.
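The observed behavior can be sketched as a toy model, not Cilium's actual implementation: once any policy selects an endpoint, the endpoint falls into default deny, and each policy (from either engine) contributes allow rules; a flow is permitted if any policy allows it.

```python
def is_allowed(flow, policies):
    """flow: a (src, dst) pair; policies: allow-sets contributed by each
    policy object, from either engine. Union semantics: allowed if any
    policy allows the flow."""
    return any(flow in allowed for allowed in policies)

# Allow rules from the final experiment above:
k8s_np = {("pod1", "pod2")}   # pod1-egress / pod2-ingress NetworkPolicy
cilium_deny_all = set()       # default-deny-all CNP contributes no allows
```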