Does Istio Work with Calico on EKS?

When Calico or another alternative CNI is used on EKS, admission webhooks reportedly fail to work, which would break Istio's automatic sidecar injection. This post verifies that claim, and also checks whether the combination of the VPC CNI Plugin (for pod networking) + Calico (as the network policy engine only) works fine.

Creating the Cluster

Create a cluster with the following eksctl configuration.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: calico
  region: ap-northeast-1
vpc:
  cidr: "10.1.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default
    privateNetworking: true

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

Create the cluster from this configuration file.

eksctl create cluster -f calico.yaml

Installing Istio

Build an Istio sample environment by following the steps below.

Download Istio.

curl -L https://istio.io/downloadIstio | sh -
cd istio-1.8.2

Install Istio with the demo profile.

$ istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete

Confirm that the Pods are running.

$ k get po -A
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-7fc985bd9f-qbl9f    1/1     Running   0          3m53s
istio-system   istio-ingressgateway-58f9d7d858-btgf6   1/1     Running   0          3m53s
istio-system   istiod-7d8f784f96-cgxkh                 1/1     Running   0          4m7s
kube-system    aws-node-49l6h                          1/1     Running   0          11m
kube-system    aws-node-lmkvt                          1/1     Running   0          11m
kube-system    coredns-86f7d88d77-gs96r                1/1     Running   0          17m
kube-system    coredns-86f7d88d77-t2qc8                1/1     Running   0          17m
kube-system    kube-proxy-jvtfn                        1/1     Running   0          11m
kube-system    kube-proxy-k998d                        1/1     Running   0          11m

Enable automatic sidecar injection on the default namespace.

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled
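To double-check that the label took effect, it can be listed as a column per namespace (standard kubectl usage; only namespaces showing "enabled" will have sidecars injected into new Pods):

```shell
# Show the istio-injection label for every namespace; "enabled" marks
# the namespaces whose new Pods receive the Envoy sidecar
kubectl get namespace -L istio-injection
```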

Deploy the sample application.

$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

Check the sample application's Pods. READY shows 2/2, so the sidecars were injected.

$ k get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-cdlxx       2/2     Running   0          53s
productpage-v1-6987489c74-xv8xd   2/2     Running   0          53s
ratings-v1-7dc98c7588-gpd96       2/2     Running   0          53s
reviews-v1-7f99cc4496-v598l       2/2     Running   0          53s
reviews-v2-7d79d5bd5d-jx5pc       2/2     Running   0          53s
reviews-v3-7dbcdcbc56-smrqw       2/2     Running   0          53s

Deploy the gateway.

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Determine the application's URL.

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "http://$GATEWAY_URL/productpage"
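Before opening a browser, the endpoint can also be smoke-tested from the shell (this mirrors the check in the Istio getting-started guide; the expected title for Bookinfo is "Simple Bookstore App"):

```shell
# Fetch the product page through the ingress gateway and extract the HTML title
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
```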

Confirm that the Bookinfo page is accessible.

f:id:sotoiwa:20210125172917p:plain

VPC CNI Plugin + Calico

Install Calico as the network policy engine, keeping the VPC CNI Plugin for pod networking.

$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml
daemonset.apps/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
serviceaccount/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
clusterrolebinding.rbac.authorization.k8s.io/typha-cpha created
clusterrole.rbac.authorization.k8s.io/typha-cpha created
configmap/calico-typha-horizontal-autoscaler created
deployment.apps/calico-typha-horizontal-autoscaler created
role.rbac.authorization.k8s.io/typha-cpha created
serviceaccount/typha-cpha created
rolebinding.rbac.authorization.k8s.io/typha-cpha created
service/calico-typha created

Prepare a NetworkPolicy that allows all traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - {}

Apply it to all Namespaces.

k apply -f allow-all.yaml -n default
k apply -f allow-all.yaml -n istio-system
k apply -f allow-all.yaml -n kube-node-lease
k apply -f allow-all.yaml -n kube-public
k apply -f allow-all.yaml -n kube-system
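Applying the same manifest namespace by namespace gets tedious as namespaces are added; a loop over all current namespaces is a compact alternative (a sketch using standard kubectl jsonpath output):

```shell
# Apply allow-all.yaml to every Namespace that currently exists
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -f allow-all.yaml -n "$ns"
done
```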

Redeploy the sample app and confirm that nothing breaks.

kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

No problems so far.

$ k get po -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-vq4p5       2/2     Running   0          43s
productpage-v1-6987489c74-wngdb   2/2     Running   0          42s
ratings-v1-7dc98c7588-nfrcr       2/2     Running   0          43s
reviews-v1-7f99cc4496-v2kj5       2/2     Running   0          43s
reviews-v2-7d79d5bd5d-kmh8l       2/2     Running   0          42s
reviews-v3-7dbcdcbc56-fm27c       2/2     Running   0          42s

Bookinfo was accessible as well (screenshot omitted).

Delete Bookinfo.

kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml

Delete the network policies.

k delete -f allow-all.yaml -n default
k delete -f allow-all.yaml -n istio-system
k delete -f allow-all.yaml -n kube-node-lease
k delete -f allow-all.yaml -n kube-public
k delete -f allow-all.yaml -n kube-system

Uninstall Calico.

kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml

Leftover iptables rules and the like could cause strange behavior, so reboot the nodes just in case.

Calico

Install Calico as a replacement for the CNI.

First, delete the VPC CNI Plugin.

$ kubectl delete daemonset -n kube-system aws-node
daemonset.apps "aws-node" deleted

Install Calico.

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico-vxlan.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Confirm that the Pods are running.

$ k get po -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-7fc985bd9f-g4bzz       1/1     Running   0          2m43s
istio-system   istio-ingressgateway-58f9d7d858-4fjdl      1/1     Running   0          2m43s
istio-system   istiod-7d8f784f96-sw8j6                    1/1     Running   0          2m43s
kube-system    calico-kube-controllers-7dbc97f587-v2hwv   1/1     Running   0          2m43s
kube-system    calico-node-qhhhc                          1/1     Running   0          55s
kube-system    calico-node-whgxw                          1/1     Running   0          2m58s
kube-system    coredns-86f7d88d77-5rhdf                   1/1     Running   0          2m42s
kube-system    coredns-86f7d88d77-9np8m                   1/1     Running   0          2m42s
kube-system    kube-proxy-crr72                           1/1     Running   0          2m58s
kube-system    kube-proxy-p26pv                           1/1     Running   0          55s

Deploy Bookinfo.

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

The Pods never come up.

$ k get po -n default
No resources found in default namespace.

As suspected, the injection webhook is unreachable, so Pod creation fails.

$ k describe rs details-v1-558b8b4b76
Name:           details-v1-558b8b4b76
Namespace:      default
Selector:       app=details,pod-template-hash=558b8b4b76,version=v1
Labels:         app=details
                pod-template-hash=558b8b4b76
                version=v1
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/details-v1
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=details
                    pod-template-hash=558b8b4b76
                    version=v1
  Service Account:  bookinfo-details
  Containers:
   details:
    Image:        docker.io/istio/examples-bookinfo-details-v1:1.16.2
    Port:         9080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  84s (x2 over 114s)  replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: dial tcp 192.168.35.1:15017: i/o timeout
  Warning  FailedCreate  54s                 replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  FailedCreate  24s                 replicaset-controller  Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: context deadline exceeded
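The target the API server is failing to reach can be read straight from the webhook configuration (in Istio 1.8 the object is named istio-sidecar-injector; treating webhooks[0] as the inject entry is an illustrative assumption):

```shell
# Show which Service the sidecar-injection webhook points at
# (for the demo profile this resolves to istiod in istio-system)
kubectl get mutatingwebhookconfiguration istio-sidecar-injector \
  -o jsonpath='{.webhooks[0].clientConfig.service}'
```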

The Calico documentation also includes the following note.

Note: Calico networking cannot currently be installed on the EKS control plane nodes. As a result the control plane nodes will not be able to initiate network connections to Calico pods. (This is a general limitation of EKS’s custom networking support, not specific to Calico.) As a workaround, trusted pods that require control plane nodes to connect to them, such as those implementing admission controller webhooks, can include hostNetwork:true in their pod spec. See the Kubernetes API pod spec definition for more information on this setting.
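A sketch of the workaround the note describes, assuming istiod's Deployment is the webhook backend to move onto the host network (untested here; dnsPolicy is added because hostNetwork Pods otherwise bypass cluster DNS):

```shell
# Move istiod onto the node's network namespace so the EKS control plane
# can reach the webhook; ClusterFirstWithHostNet keeps in-cluster DNS working
kubectl -n istio-system patch deployment istiod --type merge -p '
{"spec":{"template":{"spec":{"hostNetwork":true,"dnsPolicy":"ClusterFirstWithHostNet"}}}}'
```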