Specifying the AWS profile used for CodeCommit connections

When accessing CodeCommit via GRC (git-remote-codecommit), putting the AWS profile name in front of the repository name, as shown below, is quite convenient: there is no need to select a profile with something like export AWS_PROFILE=hoge-profile before running git commands.

$ git remote -v
origin  codecommit::ap-northeast-1://hoge-profile@hoge-repository (fetch)
origin  codecommit::ap-northeast-1://hoge-profile@hoge-repository (push)
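The remote URL follows the pattern codecommit::&lt;region&gt;://&lt;profile&gt;@&lt;repository&gt;. A minimal sketch of assembling and applying it, reusing the placeholder names hoge-profile and hoge-repository from above:

```shell
# Build a git-remote-codecommit (GRC) URL with an embedded AWS profile.
# The profile and repository names are the placeholders used above.
region=ap-northeast-1
profile=hoge-profile
repo=hoge-repository
url="codecommit::${region}://${profile}@${repo}"
echo "${url}"
# To point an existing checkout at it:
#   git remote set-url origin "${url}"
# Or to clone fresh:
#   git clone "${url}"
```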

Investigating Istio Envoy Proxy Memory Usage, Part 2

A continuation of the earlier investigation into the memory usage of Istio's Envoy proxy.

In the previous post I could not push memory consumption up very far, so this time I reproduced the layout of the application that is actually causing problems, which did increase memory usage significantly. In that application every Deployment runs a single replica, each Deployment has one or two Services, and a single Namespace holds roughly 50 such Deployments.

I also measured the effect of performing namespace isolation with a Sidecar resource.

Setup

Provision a cluster of 100 m6i.large (2 vCPU) nodes.

$ k get nodes | head
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-101-103.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-101-58.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-102-166.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-102-32.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-145.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-163.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-37.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-43.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-106-38.ap-northeast-1.compute.internal    Ready    <none>   20h   v1.29.6-eks-1552ad0
$ k get nodes | wc -l
     101

Create two Namespaces for the measurement Pods. In one of them, create a Sidecar resource that allows traffic only to the Pod's own Namespace and to the istio-system Namespace.

k create ns measure1
k label namespace measure1 istio-injection=enabled
k -n measure1 create deployment test1 --image=public.ecr.aws/docker/library/nginx --replicas=1
k create ns measure2
k label namespace measure2 istio-injection=enabled
k -n measure2 create deployment test1 --image=public.ecr.aws/docker/library/nginx --replicas=1
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: measure2
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
EOF
$ istioctl proxy-status
NAME                                CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test1-6658f86b76-mppvq.measure1     Kubernetes     SYNCED (4s)      SYNCED (4s)      SYNCED (4s)      SYNCED (4s)      IGNORED     istiod-dd95d7bdc-ss9ws     1.23.0
test1-6658f86b76-vqkhm.measure2     Kubernetes     SYNCED (10s)     SYNCED (10s)     SYNCED (10s)     SYNCED (10s)     IGNORED     istiod-dd95d7bdc-jw984     1.23.0

Running the test

The test runs the script below: it creates ten instances of the simulated application (one Namespace per instance) and records how memory usage changes after each one is added.

for i in {1..10}
do
  date
  echo "create ns${i}"
  k create ns ns${i}
  k label namespace ns${i} istio-injection=enabled
  for j in {1..50}
  do
    k -n ns${i} create deployment test${j} --image=public.ecr.aws/docker/library/nginx --replicas=1
    k -n ns${i} expose deployment test${j} --port=80 --target-port=80
    k -n ns${i} expose deployment test${j} --port=80 --target-port=80 --cluster-ip=None --name=test${j}-headless
  done
  echo "sleep 30sec"
  sleep 30
  echo "####################"
  echo "number of node"
  k get no -A --no-headers | wc -l
  echo "number of pod"
  k get po -A --no-headers | wc -l
  echo "number of service"
  k get svc -A --no-headers | wc -l
  echo "k top pod"
  k top pod --containers -n measure1 | grep istio-proxy
  k top pod --containers -n measure2 | grep istio-proxy
  echo "number of cluster"
  istioctl proxy-config cluster test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config cluster test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of listener"
  istioctl proxy-config listener test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config listener test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of route"
  istioctl proxy-config route test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config route test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of endpoint"
  istioctl proxy-config endpoint test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config endpoint test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "####################"
done
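The object counts in the log below line up with simple arithmetic: each loop iteration adds 50 Deployments (50 Pods) and 100 Services (one ClusterIP Service plus one headless Service per Deployment). A sketch of the expected cluster-wide Service count after n iterations — the constant 4 is an assumption, standing in for the Services that already existed in this cluster before the test (kubernetes, istiod, and so on):

```shell
# Expected cluster-wide Service count after n namespaces are created:
# 100 new Services per namespace, plus (assumed) 4 pre-existing ones.
for n in 1 2 3; do
  echo $((100 * n + 4))
done
```

These values match the "number of service" lines in the log (104, 204, 304, ...).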

Execution log

2024年9月19日 木曜日 18時45分32秒 JST
create ns1
namespace/ns1 created
namespace/ns1 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     257
number of service
     104
k top pod
test1-6658f86b76-mppvq   istio-proxy   6m           42Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     116
      12
number of listener
     222
      17
number of route
     108
       7
number of endpoint
      70
      13
####################
2024年9月19日 木曜日 18時48分27秒 JST
create ns2
namespace/ns2 created
namespace/ns2 labeled
(test1–test50 Deployment/Service creation output omitted; identical to the ns1 block above)
sleep 30sec
####################
number of node
     100
number of pod
     310
number of service
     204
k top pod
test1-6658f86b76-mppvq   istio-proxy   2m           56Mi            
test1-6658f86b76-vqkhm   istio-proxy   2m           24Mi            
number of cluster
     216
      12
number of listener
     422
      17
number of route
     208
       7
number of endpoint
     132
      25
####################
2024年9月19日 木曜日 18時51分22秒 JST
create ns3
namespace/ns3 created
namespace/ns3 labeled
(test1–test50 Deployment/Service creation output omitted; identical to the ns1 block above)
sleep 30sec
####################
number of node
     100
number of pod
     360
number of service
     304
k top pod
test1-6658f86b76-mppvq   istio-proxy   29m          73Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     316
      12
number of listener
     622
      17
number of route
     308
       7
number of endpoint
     182
      25
####################
2024年9月19日 木曜日 18時54分19秒 JST
create ns4
namespace/ns4 created
namespace/ns4 labeled
(test1–test50 Deployment/Service creation output omitted; identical to the ns1 block above)
sleep 30sec
####################
number of node
     100
number of pod
     410
number of service
     404
k top pod
test1-6658f86b76-mppvq   istio-proxy   22m          89Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     416
      12
number of listener
     822
      17
number of route
     408
       7
number of endpoint
     232
      25
####################
2024年9月19日 木曜日 18時57分18秒 JST
create ns5
namespace/ns5 created
namespace/ns5 labeled
(test1–test50 Deployment/Service creation output omitted; identical to the ns1 block above)
sleep 30sec
####################
number of node
     100
number of pod
     460
number of service
     504
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           105Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     516
      12
number of listener
    1022
      17
number of route
     508
       7
number of endpoint
     282
      25
####################
2024年9月19日 木曜日 19時00分19秒 JST
create ns6
namespace/ns6 created
namespace/ns6 labeled
(test1–test50 Deployment/Service creation output omitted; identical to the ns1 block above)
sleep 30sec
####################
number of node
     100
number of pod
     510
number of service
     604
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           123Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     616
      12
number of listener
    1222
      17
number of route
     608
       7
number of endpoint
     332
      25
####################
2024年 9月19日 木曜日 19時03分17秒 JST
create ns7
namespace/ns7 created
namespace/ns7 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     560
number of service
     704
k top pod
test1-6658f86b76-mppvq   istio-proxy   9m           136Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     716
      12
number of listener
    1422
      17
number of route
     708
       7
number of endpoint
     382
      25
####################
2024年 9月19日 木曜日 19時06分17秒 JST
create ns8
namespace/ns8 created
namespace/ns8 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     610
number of service
     804
k top pod
test1-6658f86b76-mppvq   istio-proxy   19m          151Mi           
test1-6658f86b76-vqkhm   istio-proxy   2m           23Mi            
number of cluster
     816
      12
number of listener
    1622
      17
number of route
     808
       7
number of endpoint
     432
      25
####################
2024年 9月19日 木曜日 19時09分22秒 JST
create ns9
namespace/ns9 created
namespace/ns9 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     660
number of service
     904
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           169Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     916
      12
number of listener
    1822
      17
number of route
     908
       7
number of endpoint
     482
      25
####################
2024年 9月19日 木曜日 19時12分27秒 JST
create ns10
namespace/ns10 created
namespace/ns10 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     710
number of service
    1004
k top pod
test1-6658f86b76-mppvq   istio-proxy   12m          179Mi           
test1-6658f86b76-vqkhm   istio-proxy   2m           23Mi            
number of cluster
    1016
      12
number of listener
    2022
      17
number of route
    1008
       7
number of endpoint
     532
      25
####################
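The namespace blocks in the log above were generated by a loop along these lines (a sketch reconstructed from the output; the function name `make_ns` and the exact flags are assumptions, as the real script is not shown in this post):

```shell
# Sketch: create one namespace with 50 Deployments, each exposed via a
# ClusterIP Service plus a headless Service, matching the log output above.
make_ns() {
  ns="$1"
  echo "create $ns"
  kubectl create ns "$ns"
  kubectl label namespace "$ns" istio-injection=enabled
  for i in $(seq 1 50); do
    kubectl -n "$ns" create deployment "test$i" \
      --image=public.ecr.aws/docker/library/nginx --replicas=1
    kubectl -n "$ns" expose deployment "test$i" --port=80
    kubectl -n "$ns" expose deployment "test$i" \
      --name="test$i-headless" --port=80 --cluster-ip=None
  done
  echo "sleep 30sec"
  sleep 30
}
```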

Graph

Memory usage can be confirmed to grow linearly with the number of application instances. It can also be confirmed that with the Sidecar resource in place, the proxy is unaffected by other Namespaces, so its memory usage does not grow at all.
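The Sidecar resource used for Namespace isolation can be written along these lines (a sketch; the metadata name and which of the two measurement Namespaces it was applied to are assumptions — the key part is the egress hosts list restricting the proxy to its own Namespace and istio-system, as described in the setup):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: measure2
spec:
  egress:
    - hosts:
        - "./*"            # services in the same namespace only
        - "istio-system/*" # plus istiod in istio-system
```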

Investigating Istio Envoy Proxy Memory Usage

Investigate the memory usage of Istio's Envoy Proxy.

Creating the cluster

Create the cluster.

CLUSTER_NAME="istio"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml

Create a node group consisting of a single large instance (m6i.32xlarge, 128 cores).

cat << EOF > m2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m2
    instanceType: m6i.32xlarge
    minSize: 1
    maxSize: 20
    desiredCapacity: 1
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m2.yaml

Check the nodes.

$ k get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-80-97.ap-northeast-1.compute.internal   Ready    <none>   10m   v1.29.6-eks-1552ad0

Installing metrics-server

Install metrics-server to measure memory usage.

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
$ k -n kube-system get pods
NAME                              READY   STATUS    RESTARTS   AGE
aws-node-fwtlr                    2/2     Running   0          8m46s
coredns-676bf68468-f56zh          1/1     Running   0          41m
coredns-676bf68468-pmkwl          1/1     Running   0          15m
kube-proxy-99shl                  1/1     Running   0          8m46s
metrics-server-75bf97fcc9-9thcf   1/1     Running   0          33s

Installing Istio

For various reasons, install with Helm.

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

For various reasons, install only the base and istiod charts.

$ helm install istio-base -n istio-system istio/base --version 1.23.0 --create-namespace
NAME: istio-base
LAST DEPLOYED: Wed Sep 18 20:47:17 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!

To learn more about the release, try:
  $ helm status istio-base -n istio-system
  $ helm get all istio-base -n istio-system
$ helm install istiod -n istio-system istio/istiod --version 1.23.0
NAME: istiod
LAST DEPLOYED: Wed Sep 18 20:47:43 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istiod" successfully installed!

To learn more about the release, try:
  $ helm status istiod -n istio-system
  $ helm get all istiod -n istio-system

Next steps:
  * Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
  * Try out our tasks to get started on common configurations:
    * https://istio.io/latest/docs/tasks/traffic-management
    * https://istio.io/latest/docs/tasks/security/
    * https://istio.io/latest/docs/tasks/policy-enforcement/
  * Review the list of actively supported releases, CVE publications and our hardening guide:
    * https://istio.io/latest/docs/releases/supported-releases/
    * https://istio.io/latest/news/security/
    * https://istio.io/latest/docs/ops/best-practices/security/

For further documentation see https://istio.io website

Check the Pods.

$ k -n istio-system get po
NAME                     READY   STATUS    RESTARTS   AGE
istiod-dd95d7bdc-hxv47   1/1     Running   0          3m57s

1 Pod

Create a Namespace and add the label that enables automatic sidecar injection.

$ k create ns ns1
namespace/ns1 created
$ k label namespace ns1 istio-injection=enabled
namespace/ns1 labeled

Create an nginx Deployment.

$ k -n ns1 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns1 get po
NAME                    READY   STATUS    RESTARTS   AGE
test-7955cf7657-8zbn8   2/2     Running   0          8s

Check the memory usage in this state: about 25 MiB.

$ k -n ns1 top pod --containers
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-8zbn8   istio-proxy   27m          25Mi            
test-7955cf7657-8zbn8   nginx         33m          95Mi 
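To aggregate just the istio-proxy numbers from `kubectl top pod --containers` output, a small awk filter can be used (a sketch; `MEM_FILTER` is a hypothetical name, and the column layout assumes the namespaced form of the command as used above, i.e. without `-A`):

```shell
# Sum the MEMORY(bytes) column (in Mi) over all istio-proxy containers.
# Column layout without -A: POD  NAME  CPU(cores)  MEMORY(bytes)
MEM_FILTER='$2 == "istio-proxy" { sub(/Mi$/, "", $4); total += $4 } END { print total }'

# Usage (requires a cluster with metrics-server):
#   kubectl -n ns1 top pod --containers --no-headers | awk "$MEM_FILTER"
```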

Check the number of objects.

$ k get no -A --no-headers | wc -l
       1
$ k get po -A --no-headers | wc -l
       7
$ k get svc -A --no-headers | wc -l
       4

Check the istioctl version.

$ istioctl version
client version: 1.23.1
control plane version: 1.23.0
data plane version: 1.23.0 (1 proxies)

Check the mesh status.

$ istioctl proxy-status
NAME                          CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test-7955cf7657-8zbn8.ns1     Kubernetes     SYNCED (41s)     SYNCED (41s)     SYNCED (41s)     SYNCED (41s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0

Check the proxy configuration and count the entries.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
       8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC 
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES      PORT  MATCH                                                   DESTINATION
172.20.0.10    53    ALL                                                     Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.0.1     443   ALL                                                     Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                     Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                     Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                    Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                     PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                           Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                           Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2 InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                    InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                       InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                              InboundPassthroughCluster
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                    Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
172.20.34.15   15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                    Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*     
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc

100 Pods

Scale the Deployment to 100 Pods.

$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

Confirm that all of them are Running.

$ k -n ns1 get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7955cf7657-2dq7j   2/2     Running   0          109s
test-7955cf7657-2kl8f   2/2     Running   0          109s
test-7955cf7657-2pwf7   2/2     Running   0          106s
test-7955cf7657-2szkw   2/2     Running   0          108s

(省略)

test-7955cf7657-zhm5p   2/2     Running   0          108s
test-7955cf7657-zm4hp   2/2     Running   0          107s
test-7955cf7657-zs7n7   2/2     Running   0          108s
test-7955cf7657-zwswj   2/2     Running   0          108s

Memory usage is still around 22-24 MiB per proxy, i.e. it has not increased. Presumably this is because simply adding Pods does not add any configuration.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   4m           23Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   3m           22Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   4m           22Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   4m           24Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   4m           23Mi     
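Eyeballing `kubectl top` output gets tedious at 100 Pods; a small helper like the following (hypothetical, not part of the original measurement) averages the istio-proxy memory column instead:

```shell
# Average the MEMORY(bytes) column for istio-proxy containers from
# `kubectl top pod --containers --no-headers` output.
# awk coerces values like "23Mi" to 23, so the result is in MiB.
avg_proxy_mem() {
  awk '$2 == "istio-proxy" { sum += $4; n++ } END { printf "%.1fMi\n", sum / n }'
}
# Usage against the cluster (the post uses `k` as an alias for kubectl):
#   k -n ns1 top pod --containers --no-headers | avg_proxy_mem
```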

The config counts have not increased either.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
       8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
      16

Creating a Service

Now create a Service.

$ k -n ns1 expose deployment test --port=80 --target-port=80
service/test exposed

Memory usage is around 24-25 MiB, only a slight increase.

$ k -n ns1 top pod --containers | head                            
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   5m           24Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   5m           25Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   5m           24Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   5m           25Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   5m           24Mi  

The configuration has also grown slightly. Adding the Service added outbound config, and because these Pods themselves back that Service, inbound config grew as well. The endpoint count increased by exactly the number of Pods.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      18
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      29
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
     116
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                                 80        -          inbound       ORIGINAL_DST     
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC           
test.ns1.svc.cluster.local                       80        -          outbound      EDS              
xds-grpc                                         -         -          -             STATIC   
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES      PORT  MATCH                                                               DESTINATION
172.20.0.10    53    ALL                                                                 Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.160.18  80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
172.20.160.18  80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1     443   ALL                                                                 Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                                 Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                                 Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                                Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                                 Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                                 PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2             InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                                InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                            InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                                   InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                                          InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80                    Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: *:80                                Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; Addr: *:80                                       Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; Addr: *:80                                              Cluster: inbound|80||
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15010
0.0.0.0        15010 ALL                                                                 PassthroughCluster
172.20.34.15   15012 ALL                                                                 Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15014
0.0.0.0        15014 ALL                                                                 PassthroughCluster
0.0.0.0        15021 ALL                                                                 Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                                 Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
test.ns1.svc.cluster.local:80                   test.ns1.svc.cluster.local:80                   *                                     /*                     
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*     
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.64.108:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.140:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.147:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.35:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.97:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.151:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.50:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.60:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.99:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.110:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.125:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.137:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.21:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.70:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.67.199:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.67.58:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.119:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.180:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.84:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.189:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.19:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.243:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.247:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.9:80                                            HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.18:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.200:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.27:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.63:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.93:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.72.13:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.72.242:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.73.178:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.73.94:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.117:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.14:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.159:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.108:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.146:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.200:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.248:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.51:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.76.216:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.229:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.80:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.83:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.77.219:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.77.59:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.160:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.19:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.215:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.181:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.43:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.57:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.80.127:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.80.252:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.201:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.23:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.82.119:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.208:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.24:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.40:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.83.218:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.174:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.212:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.58:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.85.229:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.230:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.55:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.118:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.171:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.237:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.91:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.126:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.87.21:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.97:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.169:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.189:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.53:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.71:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.73:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.118:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.147:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.46:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.90.10:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.90.50:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.125:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.250:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.253:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.92.180:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.92.25:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.102:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.206:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.212:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.243:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.25:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.78:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.94.255:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.111:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.225:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.64:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc
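A quick arithmetic check on the counts above: each `wc -l` includes one header line, so the endpoint table grew from 15 entries to 115, i.e. exactly the 100 Pod IPs behind the new Service were added.

```shell
# Each `wc -l` above counts one header line; subtract 1 for real entries.
before=$((16 - 1))    # endpoint entries before creating the Service
after=$((116 - 1))    # endpoint entries after `kubectl expose`
pods=100
# The growth matches the number of Pods backing the new Service.
[ $((after - before)) -eq "$pods" ] && echo "endpoints grew by exactly ${pods}"
```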

Creating a Headless Service

Delete the Service and recreate it as a headless Service.

$ k -n ns1 delete svc test
service "test" deleted
$ k -n ns1 expose deployment test --port=80 --target-port=80 --cluster-ip=None
service/test exposed
$ k -n ns1 get svc
NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
test   ClusterIP   None         <none>        80/TCP    14s

Memory rose a little, to about 31 MiB.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   3m           31Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   4m           31Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   4m           31Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   3m           31Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   3m           31Mi  

Even if the configuration shrinks, memory usage probably does not drop right away, so restart the Pods with a rollout just to be sure.

$ k -n ns1 rollout restart deployment test
deployment.apps/test restarted

If anything, the rollout caused a further slight increase. Perhaps the configuration grows because extra objects exist while the rollout is in progress.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-22sjb   istio-proxy   10m          37Mi            
test-74c897698f-22sjb   nginx         0m           92Mi            
test-74c897698f-2h2x9   istio-proxy   8m           36Mi            
test-74c897698f-2h2x9   nginx         0m           90Mi            
test-74c897698f-2scl7   istio-proxy   10m          38Mi            
test-74c897698f-2scl7   nginx         0m           90Mi            
test-74c897698f-4258d   istio-proxy   11m          37Mi            
test-74c897698f-4258d   nginx         0m           91Mi            
test-74c897698f-45fnj   istio-proxy   9m           37Mi   
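The guess above can be quantified, assuming the Deployment uses the default RollingUpdate strategy (not verified against this cluster):

```shell
# The strategy can be inspected with (the post uses `k` as an alias for kubectl):
#   k -n ns1 get deployment test -o jsonpath='{.spec.strategy.rollingUpdate}'
# With the Kubernetes default maxSurge of 25%, a 100-replica Deployment may run
# extra Pods mid-rollout, temporarily enlarging the per-Pod config pushed to
# every sidecar.
replicas=100
max_surge_pct=25                                      # Kubernetes default
surge=$(( (replicas * max_surge_pct + 99) / 100 ))    # ceil(replicas * 25%)
echo "up to ${surge} extra Pods during rollout"
```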

Inspect the configuration.

$ istioctl proxy-status | head
NAME                          CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test-74c897698f-22sjb.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-2h2x9.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-2scl7.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-4258d.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-45fnj.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-4zxz7.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-56xsq.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-5rmvq.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-5z6kd.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0

With the headless Service, listeners increased while endpoints dropped back: the outbound cluster is now ORIGINAL_DST instead of EDS, so each Pod IP gets its own outbound listener entries and no endpoints for the Service are pushed via EDS.
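A rough accounting for the listener jump, based on the counts below (each `wc -l` includes one header line; the figures are approximate, not an exact derivation):

```shell
# Listener entries with the ClusterIP Service vs. the headless Service.
clusterip=$((29 - 1))     # 2 of these are on the Service VIP
headless=$((225 - 1))
pods=100
delta=$((headless - clusterip))
# With a headless Service, roughly 2 outbound listener entries per Pod IP
# replace the 2 VIP entries, so the delta is close to 2 * pods.
echo "listener entries added: ${delta} (~2 per Pod for ${pods} Pods)"
```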

$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1 | wc -l
      18
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1 | wc -l
     225
$ istioctl proxy-config route test-74c897698f-22sjb.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1 | wc -l
      16
$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                                 80        -          inbound       ORIGINAL_DST     
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC           
test.ns1.svc.cluster.local                       80        -          outbound      ORIGINAL_DST     
xds-grpc                                         -         -          -             STATIC          
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1
ADDRESSES      PORT  MATCH                                                               DESTINATION
172.20.0.10    53    ALL                                                                 Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.64.197    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.197    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.218    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.218    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.91     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.91     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.92     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.92     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.14     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.14     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.15     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.15     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.153    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.153    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.76     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.76     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.95     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.95     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.200    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.66.200    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.207    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.66.207    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.113    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.113    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.132    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.132    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.208    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.208    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.249    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.249    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.60     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.60     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.68.175    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.68.175    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.160    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.160    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.194    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.194    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.52     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.52     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.140    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.70.140    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.242    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.70.242    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.152    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.152    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.190    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.190    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.221    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.221    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.29     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.29     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.58     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.58     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.72.127    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.72.127    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.141    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.141    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.188    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.188    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.32     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.32     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.73     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.73     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.216    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.74.216    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.73     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.74.73     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.147    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.147    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.178    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.178    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.197    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.197    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.215    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.215    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.76.34     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.76.34     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.106    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.77.106    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.114    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.77.114    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.107    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.107    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.112    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.112    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.119    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.119    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.230    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.230    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.244    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.244    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.31     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.31     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.63     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.63     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.83     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.83     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.153    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.153    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.161    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.161    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.21     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.21     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.238    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.238    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.239    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.239    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.166    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.166    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.223    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.223    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.29     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.29     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.133    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.133    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.192    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.192    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.231    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.231    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.127    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.127    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.141    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.141    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.220    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.220    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.235    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.235    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.105    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.105    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.26     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.26     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.30     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.30     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.84.208    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.84.208    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.138    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.138    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.228    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.228    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.69     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.69     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.86.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.130    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.86.130    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.144    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.144    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.254    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.254    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.90     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.90     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.183    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.89.183    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.82     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.89.82     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.90.139    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.90.139    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.17     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.17     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.226    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.226    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.233    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.233    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.4      80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.4      80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.92.126    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.92.126    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.131    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.131    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.142    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.142    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.204    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.204    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.112    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.112    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.118    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.118    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.236    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.236    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.33     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.33     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.44     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.44     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.54     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.54     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.238    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.238    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.244    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.244    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.4      80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.4      80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.58     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.58     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1     443   ALL                                                                 Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                                 Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                                 Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                                Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                                 Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                                 PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2             InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                                InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                            InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                                   InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                                          InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80                    Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: *:80                                Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; Addr: *:80                                       Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; Addr: *:80                                              Cluster: inbound|80||
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15010
0.0.0.0        15010 ALL                                                                 PassthroughCluster
172.20.34.15   15012 ALL                                                                 Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15014
0.0.0.0        15014 ALL                                                                 PassthroughCluster
0.0.0.0        15021 ALL                                                                 Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                                 Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-74c897698f-22sjb.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
test.ns1.svc.cluster.local:80                   test.ns1.svc.cluster.local:80                   *                                     /*                     
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc

rollout ではなく、スケールインして少し待ってスケールアウトしてみる。

$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

これだとさっきより少し減った。

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-24x2k   istio-proxy   3m           33Mi            
test-74c897698f-24x2k   nginx         0m           91Mi            
test-74c897698f-2kcck   istio-proxy   4m           33Mi            
test-74c897698f-2kcck   nginx         0m           95Mi            
test-74c897698f-2kgdx   istio-proxy   3m           33Mi            
test-74c897698f-2kgdx   nginx         0m           94Mi            
test-74c897698f-462wj   istio-proxy   3m           33Mi            
test-74c897698f-462wj   nginx         0m           91Mi            
test-74c897698f-48rhq   istio-proxy   3m           34Mi     

ノード追加

大きな 1 ノードではなく、小さな 20 ノードに分散してみる。

小さなインスタンス (m6i.large, 2 core) のノードグループを作成する。

cat << EOF > m3.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m3
    instanceType: m6i.large
    minSize: 1
    maxSize: 20
    desiredCapacity: 20
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m3.yaml

大きなインスタンスは削除する。

eksctl delete nodegroup m2 --cluster ${CLUSTER_NAME}

ノードを確認する。

$ k get nodes
NAME                                              STATUS                        ROLES    AGE     VERSION
ip-10-0-106-38.ap-northeast-1.compute.internal    Ready                         <none>   4m21s   v1.29.6-eks-1552ad0
ip-10-0-107-27.ap-northeast-1.compute.internal    Ready                         <none>   4m19s   v1.29.6-eks-1552ad0
ip-10-0-107-55.ap-northeast-1.compute.internal    Ready                         <none>   4m23s   v1.29.6-eks-1552ad0
ip-10-0-108-95.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-109-108.ap-northeast-1.compute.internal   Ready                         <none>   4m22s   v1.29.6-eks-1552ad0
ip-10-0-114-75.ap-northeast-1.compute.internal    Ready                         <none>   4m10s   v1.29.6-eks-1552ad0
ip-10-0-117-226.ap-northeast-1.compute.internal   Ready                         <none>   4m22s   v1.29.6-eks-1552ad0
ip-10-0-121-37.ap-northeast-1.compute.internal    Ready                         <none>   4m11s   v1.29.6-eks-1552ad0
ip-10-0-122-44.ap-northeast-1.compute.internal    Ready                         <none>   4m21s   v1.29.6-eks-1552ad0
ip-10-0-64-210.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-65-152.ap-northeast-1.compute.internal    Ready                         <none>   4m19s   v1.29.6-eks-1552ad0
ip-10-0-71-158.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-71-188.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-73-100.ap-northeast-1.compute.internal    Ready                         <none>   4m15s   v1.29.6-eks-1552ad0
ip-10-0-73-13.ap-northeast-1.compute.internal     Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-80-97.ap-northeast-1.compute.internal     NotReady,SchedulingDisabled   <none>   66m     v1.29.6-eks-1552ad0
ip-10-0-81-103.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-88-105.ap-northeast-1.compute.internal    Ready                         <none>   4m16s   v1.29.6-eks-1552ad0
ip-10-0-94-113.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-95-162.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-97-3.ap-northeast-1.compute.internal      Ready                         <none>   4m21s   v1.29.6-eks-1552ad0

念のためスケールインしてスケールアウトする。

$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

メモリ使用量はほとんど変わっていない。

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           33Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           33Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
test-74c897698f-2vgdl   istio-proxy   2m           33Mi            
test-74c897698f-2vgdl   nginx         0m           2Mi             
test-74c897698f-4fb2g   istio-proxy   1m           33Mi            
test-74c897698f-4fb2g   nginx         0m           2Mi             
test-74c897698f-4lh9r   istio-proxy   2m           33Mi   
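
ちなみに istio-proxy の合計メモリ使用量は、top の出力を awk で集計すると確認しやすい。以下は上の出力例の一部をヒアドキュメントでサンプル入力にした sketch で、実際には k -n ns1 top pod --containers --no-headers | awk ... のようにパイプすればよい。

```shell
# kubectl top の出力 (POD NAME CPU MEMORY) から istio-proxy の合計メモリ (Mi) を集計する
# ここでは出力例の一部をサンプルとして使っている
total=$(awk '$2 == "istio-proxy" { sum += $4 + 0 } END { print sum }' << 'EOF'
test-74c897698f-2ctvb   istio-proxy   2m           33Mi
test-74c897698f-2ctvb   nginx         0m           2Mi
test-74c897698f-2fvbs   istio-proxy   1m           33Mi
EOF
)
echo "${total}Mi"   # このサンプルでは 66Mi
```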

設定も増えておらず、ノードが増えてもそれだけだと変わらないことがわかった。

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      18
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     225
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
      16

ネームスペース追加

ns2 にも同じような構成を作る。こちらは Headless ではない Service で作る。

$ k create ns ns2
namespace/ns2 created
$ k label namespace ns2 istio-injection=enabled
namespace/ns2 labeled
$ k -n ns2 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns2 scale deployment test --replicas=100
deployment.apps/test scaled
$ k -n ns2 expose deployment test --port=80 --target-port=80
service/test exposed

全ての Pod が Running なことを確認する。

$ k get po -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
istio-system   istiod-dd95d7bdc-jw984            1/1     Running   0          18m
kube-system    aws-node-2nhft                    2/2     Running   0          20m
kube-system    aws-node-2wzwq                    2/2     Running   0          20m
kube-system    aws-node-4jqdn                    2/2     Running   0          20m
kube-system    aws-node-5h9gd                    2/2     Running   0          20m
kube-system    aws-node-6q9kv                    2/2     Running   0          20m
kube-system    aws-node-d4z89                    2/2     Running   0          20m
kube-system    aws-node-dmpzs                    2/2     Running   0          20m
kube-system    aws-node-jbrt8                    2/2     Running   0          20m
kube-system    aws-node-k5v7d                    2/2     Running   0          20m
kube-system    aws-node-lphnm                    2/2     Running   0          20m
kube-system    aws-node-lz5xq                    2/2     Running   0          20m
kube-system    aws-node-p46mp                    2/2     Running   0          20m
kube-system    aws-node-p4llc                    2/2     Running   0          20m
kube-system    aws-node-q2n84                    2/2     Running   0          20m
kube-system    aws-node-rg87t                    2/2     Running   0          20m
kube-system    aws-node-tkwdd                    2/2     Running   0          20m
kube-system    aws-node-vt67z                    2/2     Running   0          20m
kube-system    aws-node-wbd9v                    2/2     Running   0          20m
kube-system    aws-node-wtq4m                    2/2     Running   0          20m
kube-system    aws-node-z6mft                    2/2     Running   0          20m
kube-system    coredns-676bf68468-8kg66          1/1     Running   0          18m
kube-system    coredns-676bf68468-tjl4f          1/1     Running   0          19m
kube-system    kube-proxy-2mzvv                  1/1     Running   0          20m
kube-system    kube-proxy-47fms                  1/1     Running   0          20m
kube-system    kube-proxy-4vhzw                  1/1     Running   0          20m
kube-system    kube-proxy-67z7x                  1/1     Running   0          20m
kube-system    kube-proxy-788vj                  1/1     Running   0          20m
kube-system    kube-proxy-d7pns                  1/1     Running   0          20m
kube-system    kube-proxy-g6xvm                  1/1     Running   0          20m
kube-system    kube-proxy-h5vtq                  1/1     Running   0          20m
kube-system    kube-proxy-h7kjq                  1/1     Running   0          20m
kube-system    kube-proxy-kmrsz                  1/1     Running   0          20m
kube-system    kube-proxy-lbfwz                  1/1     Running   0          20m
kube-system    kube-proxy-mz7cj                  1/1     Running   0          20m
kube-system    kube-proxy-nr6wn                  1/1     Running   0          20m
kube-system    kube-proxy-qtsbk                  1/1     Running   0          20m
kube-system    kube-proxy-tcjf5                  1/1     Running   0          20m
kube-system    kube-proxy-vjc64                  1/1     Running   0          20m
kube-system    kube-proxy-wrh2h                  1/1     Running   0          20m
kube-system    kube-proxy-x492q                  1/1     Running   0          20m
kube-system    kube-proxy-zngh4                  1/1     Running   0          20m
kube-system    kube-proxy-zrh4c                  1/1     Running   0          20m
kube-system    metrics-server-75bf97fcc9-fhwmj   1/1     Running   0          19m
ns1            test-74c897698f-2ctvb             2/2     Running   0          16m
ns1            test-74c897698f-2fvbs             2/2     Running   0          16m
ns1            test-74c897698f-2vgdl             2/2     Running   0          16m
ns1            test-74c897698f-4fb2g             2/2     Running   0          16m

(省略)

ns2            test-7955cf7657-z58s8             2/2     Running   0          106s
ns2            test-7955cf7657-zhz67             2/2     Running   0          106s
ns2            test-7955cf7657-zplrx             2/2     Running   0          105s
ns2            test-7955cf7657-zx6zd             2/2     Running   0          107s

メモリ使用量の増加は ns1 に 100 Pod と Service を追加したときと同程度。

$ k -n ns1 top pod --containers | head -5
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           45Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           38Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
$ k -n ns2 top pod --containers | head -5
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-27xkv   istio-proxy   2m           38Mi            
test-7955cf7657-27xkv   nginx         0m           2Mi             
test-7955cf7657-29jhg   istio-proxy   2m           38Mi            
test-7955cf7657-29jhg   nginx         0m           2Mi  

設定は endpoint が増えるが、他はさほど変わらない。

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      19
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     227
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      12
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
     116

Service を追加

宛先が同じ 100 Pod の Service を各 Namespace に 9 つずつ追加する。

$ k -n ns1 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns1 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns1 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns1 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns1 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns1 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns1 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns1 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns1 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
$ k -n ns2 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns2 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns2 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns2 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns2 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns2 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns2 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns2 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns2 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
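
同じような expose の繰り返しはループでまとめられる。以下は実行されるコマンドを echo で確認するだけの sketch で、echo を外せば実際に Service が作成される。

```shell
# ns1/ns2 それぞれに test81〜test89 の Service を作るコマンドをループで組み立てる
# (echo を外すと実際に kubectl expose が実行される)
for ns in ns1 ns2; do
  for port in 81 82 83 84 85 86 87 88 89; do
    cmd="kubectl -n ${ns} expose deployment test --port=${port} --target-port=80 --name test${port}"
    echo "${cmd}"
  done
done
```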

これでもメモリ使用量はそれほど増えるわけではない。

$ k -n ns1 top pod --containers | head -5                                 
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           47Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           40Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
$ k -n ns2 top pod --containers | head -5                                 
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-27xkv   istio-proxy   2m           41Mi            
test-7955cf7657-27xkv   nginx         0m           2Mi             
test-7955cf7657-29jhg   istio-proxy   2m           40Mi            
test-7955cf7657-29jhg   nginx         0m           2Mi    

endpoint の数は、追加した Service 数 × 配下の Pod 数の分だけかなり増えている。

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      37
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     263
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      30
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
    1916

この状態のオブジェクトの数を確認する。

$ k get no -A --no-headers | wc -l
      20
$ k get po -A --no-headers | wc -l
     244
$ k get svc -A --no-headers | wc -l
      24

まとめ

  • 1 node, 7 pod, 4 svc のときの 25MiB から、20 node, 244 pod, 24 svc にしても 47MiB 程度までしか増やせなかった
  • Pod が増えてもそれだけだと増えず、Service が必要
  • Service を作ることで endpoint が増える (このとき Service 配下の Pod の数の分が増える)
  • Headless Service の場合は endpoint は増えず listener が増える
  • ノードを増やしてもそれだけだと増えない
  • Service をもつ DaemonSet がある場合はノードが増えるとメモリ使用量が増えると推測できる

結局のところ、クラスターにデプロイされているアプリケーションの Pod や Service といったオブジェクトの数やルーティングの複雑さに依存する。一概に Pod やノードや Service の数との関係を出すのは難しい。

続きます

補足

大きなクラスターでは Envoy のメモリ使用量が肥大してしまうことがあり、Sidecar を使って通信範囲を絞ることが重要とのこと。
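
Namespace Isolation を行う Sidecar リソースは、例えば以下のようなマニフェストになる (ns1 に適用する場合の例。自 Namespace と istio-system 宛て以外の設定をサイドカーに配らないようにする)。

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: ns1
spec:
  egress:
    - hosts:
        - "./*"            # 自分自身の Namespace
        - "istio-system/*" # istiod との通信用
```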

オフラインインストールのための Ubuntu パッケージの取得

Ubuntu のパッケージを事前に取得し、オフライン環境でインストールするための手順のメモ。

手順

パッケージ情報を更新する。

apt-get update

キャッシュをクリアしておく。

apt clean

必要なパッケージをダウンロードする。

root@ip-172-31-40-206:~# apt --download-only --yes install apache2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0 mailcap
  mime-support ssl-cert
Suggested packages:
  apache2-doc apache2-suexec-pristine | apache2-suexec-custom www-browser bzip2-doc
The following NEW packages will be installed:
  apache2 apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0
  mailcap mime-support ssl-cert
0 upgraded, 13 newly installed, 0 to remove and 28 not upgraded.
Need to get 2139 kB of archives.
After this operation, 8521 kB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libapr1 amd64 1.7.0-8ubuntu0.22.04.1 [108 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1 amd64 1.6.1-5ubuntu4.22.04.2 [92.8 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1-dbd-sqlite3 amd64 1.6.1-5ubuntu4.22.04.2 [11.3 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1-ldap amd64 1.6.1-5ubuntu4.22.04.2 [9170 B]
Get:5 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-0 amd64 5.3.6-1build1 [140 kB]
Get:6 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-bin amd64 2.4.52-1ubuntu4.9 [1347 kB]
Get:7 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-data all 2.4.52-1ubuntu4.9 [165 kB]
Get:8 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-utils amd64 2.4.52-1ubuntu4.9 [88.7 kB]
Get:9 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 mailcap all 3.70+nmu1ubuntu1 [23.8 kB]
Get:10 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 mime-support all 3.66 [3696 B]
Get:11 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2 amd64 2.4.52-1ubuntu4.9 [97.9 kB]
Get:12 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 bzip2 amd64 1.0.8-5build1 [34.8 kB]
Get:13 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 ssl-cert all 1.1.2 [17.4 kB]
Fetched 2139 kB in 0s (25.6 MB/s)
Download complete and in download only mode
root@ip-172-31-40-206:~#

ダウンロードされた deb パッケージは以下にある。

root@ip-172-31-40-206:~# ls -l /var/cache/apt/archives/*.deb
-rw-r--r-- 1 root root 1346534 Apr 11 16:19 /var/cache/apt/archives/apache2-bin_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root  164870 Apr 11 16:19 /var/cache/apt/archives/apache2-data_2.4.52-1ubuntu4.9_all.deb
-rw-r--r-- 1 root root   88746 Apr 11 16:19 /var/cache/apt/archives/apache2-utils_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root   97878 Apr 11 16:19 /var/cache/apt/archives/apache2_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root   34822 Mar 24  2022 /var/cache/apt/archives/bzip2_1.0.8-5build1_amd64.deb
-rw-r--r-- 1 root root  108002 Feb 27  2023 /var/cache/apt/archives/libapr1_1.7.0-8ubuntu0.22.04.1_amd64.deb
-rw-r--r-- 1 root root   11344 Sep  4  2023 /var/cache/apt/archives/libaprutil1-dbd-sqlite3_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root    9170 Sep  4  2023 /var/cache/apt/archives/libaprutil1-ldap_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root   92758 Sep  4  2023 /var/cache/apt/archives/libaprutil1_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root  140026 Mar 25  2022 /var/cache/apt/archives/liblua5.3-0_5.3.6-1build1_amd64.deb
-rw-r--r-- 1 root root   23828 Dec 10  2021 /var/cache/apt/archives/mailcap_3.70+nmu1ubuntu1_all.deb
-rw-r--r-- 1 root root    3696 Nov 20  2020 /var/cache/apt/archives/mime-support_3.66_all.deb
-rw-r--r-- 1 root root   17364 Jan 26  2022 /var/cache/apt/archives/ssl-cert_1.1.2_all.deb
root@ip-172-31-40-206:~#

これをまとめて持っていき、オフライン環境で、以下を実行すればよい。

apt -y install ./*.deb

取得元の URL が必要な場合は、パッケージ名を以下の出力で確認し、

The following NEW packages will be installed:
  apache2 apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0
  mailcap mime-support ssl-cert

apt-cache show コマンドで確認できる。

root@ip-172-31-40-206:~# apt-cache show apache2 | grep Filename
Filename: pool/main/a/apache2/apache2_2.4.52-1ubuntu4.9_amd64.deb
Filename: pool/main/a/apache2/apache2_2.4.52-1ubuntu4_amd64.deb

これを http://archive.ubuntu.com/ubuntu/ とくっつけて、

http://archive.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.4.52-1ubuntu4.9_amd64.deb

とすればよい。
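
シェルで組み立てるなら、例えば以下のようになる (Filename の値は上の apache2 の例)。

```shell
# apt-cache show の Filename とアーカイブのベース URL を連結してダウンロード URL を作る
base="http://archive.ubuntu.com/ubuntu/"
filename="pool/main/a/apache2/apache2_2.4.52-1ubuntu4.9_amd64.deb"
url="${base}${filename}"
echo "${url}"
```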

参考リンク

2024 年 5 月の読書メモを書いてないけど読んだ本 (3 冊)

2024 年 4 月の読書メモを書いてないけど読んだ本 (1 冊)

MinIO を試す

MinIO を試す。EKS 用の手順もあるようだが、Upstream と書かれている手順を試す。

クラスターの作成

CLUSTER_NAME="minio"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml

ノードを作成する。

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m1
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m1.yaml

ノードを確認する。

$ k get node
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-105-238.ap-northeast-1.compute.internal   Ready    <none>   24m   v1.29.0-eks-5e0fdde
ip-10-0-117-206.ap-northeast-1.compute.internal   Ready    <none>   24m   v1.29.0-eks-5e0fdde
ip-10-0-86-177.ap-northeast-1.compute.internal    Ready    <none>   24m   v1.29.0-eks-5e0fdde

Pod を確認する。

$ k get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-cxt7p             2/2     Running   0          25m
kube-system   aws-node-dp2p4             2/2     Running   0          25m
kube-system   aws-node-vmj54             2/2     Running   0          25m
kube-system   coredns-676bf68468-bqbnp   1/1     Running   0          37m
kube-system   coredns-676bf68468-g846n   1/1     Running   0          37m
kube-system   kube-proxy-hkj5h           1/1     Running   0          25m
kube-system   kube-proxy-lms44           1/1     Running   0          25m
kube-system   kube-proxy-t87sp           1/1     Running   0          25m

MinIO のインストール

まずはデフォルト設定で入れてみる。MinIO Operator をデプロイする。

$ kubectl minio init
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
namespace/minio-operator created
serviceaccount/minio-operator created
clusterrole.rbac.authorization.k8s.io/minio-operator-role created
clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding created
customresourcedefinition.apiextensions.k8s.io/tenants.minio.min.io created
customresourcedefinition.apiextensions.k8s.io/policybindings.sts.min.io created
customresourcedefinition.apiextensions.k8s.io/miniojobs.job.min.io created
service/operator created
service/sts created
deployment.apps/minio-operator created
serviceaccount/console-sa created
secret/console-sa-secret created
clusterrole.rbac.authorization.k8s.io/console-sa-role created
clusterrolebinding.rbac.authorization.k8s.io/console-sa-binding created
configmap/console-env created
service/console created
deployment.apps/console created
-----------------

To open Operator UI, start a port forward using this command:

kubectl minio proxy -n minio-operator

-----------------

Pod を確認する。

$ k get po -A
NAMESPACE        NAME                              READY   STATUS    RESTARTS   AGE
kube-system      aws-node-cxt7p                    2/2     Running   0          28m
kube-system      aws-node-dp2p4                    2/2     Running   0          28m
kube-system      aws-node-vmj54                    2/2     Running   0          28m
kube-system      coredns-676bf68468-bqbnp          1/1     Running   0          40m
kube-system      coredns-676bf68468-g846n          1/1     Running   0          40m
kube-system      kube-proxy-hkj5h                  1/1     Running   0          28m
kube-system      kube-proxy-lms44                  1/1     Running   0          28m
kube-system      kube-proxy-t87sp                  1/1     Running   0          28m
minio-operator   console-86878b559f-tkzts          1/1     Running   0          22s
minio-operator   minio-operator-54bf877d58-7rbx9   1/1     Running   0          22s
minio-operator   minio-operator-54bf877d58-mx64t   1/1     Running   0          22s

Operator コンソールにアクセスする。

$ kubectl minio proxy
Starting port forward of the Console UI.

To connect open a browser and go to http://localhost:9090

Current JWT to login: eyJhbGciOiJSUzI1NiIsImtpZCI6IlljTERycFVtNS1FdTBpMXBiYkZ0RDUyUUZIT1Fwdlk5MmtTTFI3bzlSY00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaW5pby1vcGVyYXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjb25zb2xlLXNhLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjb25zb2xlLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWMwODM0NjEtNzk3Yy00ZDc4LTlkZDgtNjUxNmYzYjJmNzU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1pbmlvLW9wZXJhdG9yOmNvbnNvbGUtc2EifQ.esXqD_rdGqn8cnca2_Rr83anD1TWQiUY8J0o4_JEh7Kk-vLaTFo83l1wisrBhooNWxqFCo-5Ypc0MRG7lMHoRLo8Zq4mcPCQN1uElnlNLalwZtgkmcu2khaV6SmonNyr0i7tw7mXXfx6VOiM6fSFQZMoK0YwXZx1Dso_TWnZo1eWPmuxGfzpSkUCvIdqgtAq0N7a2YYxmf9yvgCySTreNQEN1xVt6G2lo__KVN2F0E0PBJJDofnLeRNz3u2hBtHYFgFgSbnWCjRMscxXT8s83JtoqTFbPck-ZVXKh8nWqXaRqY7rIoZq0lrTYz5piALpW-xlgdFLtjaw-dHDhycbcw

Forwarding from 0.0.0.0:9090 -> 9090

EBS CSI Driver の導入

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > addon.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

addons:
  - name: vpc-cni
    version: latest
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    # serviceAccountRoleARN: arn:aws:iam::XXXXXXXXXXXX:role/eksctl-fully-private-addon-iamserviceaccount-Role1-LRQ0AZXOE60K
    configurationValues: |-
      env:
        WARM_IP_TARGET: "2"
        MINIMUM_IP_TARGET: "10"
    resolveConflicts: overwrite
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
  - name: aws-ebs-csi-driver
    version: latest
    wellKnownPolicies:
      ebsCSIController: true
EOF
eksctl create addon -f addon.yaml
$ k get po -A
NAMESPACE        NAME                                  READY   STATUS    RESTARTS   AGE
kube-system      aws-node-8n8mq                        2/2     Running   0          4m39s
kube-system      aws-node-cf5b5                        2/2     Running   0          5m25s
kube-system      aws-node-dtvlm                        2/2     Running   0          5m2s
kube-system      coredns-5877997cb7-4hxql              1/1     Running   0          2m30s
kube-system      coredns-5877997cb7-8nf5z              1/1     Running   0          2m29s
kube-system      ebs-csi-controller-7cddb57f8d-9xrn2   5/6     Running   0          12s
kube-system      ebs-csi-controller-7cddb57f8d-hjk2w   5/6     Running   0          12s
kube-system      ebs-csi-node-btcwj                    3/3     Running   0          12s
kube-system      ebs-csi-node-cjnl7                    3/3     Running   0          12s
kube-system      ebs-csi-node-qd4tm                    3/3     Running   0          12s
kube-system      kube-proxy-2xm44                      1/1     Running   0          2m29s
kube-system      kube-proxy-k9fhs                      1/1     Running   0          2m26s
kube-system      kube-proxy-x4j2c                      1/1     Running   0          2m23s
minio-operator   console-86878b559f-tkzts              1/1     Running   0          20m
minio-operator   minio-operator-54bf877d58-7rbx9       1/1     Running   0          20m
minio-operator   minio-operator-54bf877d58-mx64t       1/1     Running   0          20m
$ k get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  61m

テナントの作成

Operator コンソールでもできそうだが、ここでは kubectl で実施する。

Tenant のマニフェストを生成してみる。

$ kubectl minio tenant create minio1 \
  --capacity 16Gi \
  --servers 4 \
  --volumes 8 \
  --namespace minio-tenant-1 \
  --storage-class gp2 \
  --output
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  creationTimestamp: null
  name: minio1
  namespace: minio-tenant-1
scheduler:
  name: ""
spec:
  certConfig:
    commonName: '*.minio1-hl.minio-tenant-1.svc.cluster.local'
    dnsNames:
    - minio1-ss-0-{0...3}.minio1-hl.minio-tenant-1.svc.cluster.local
    organizationName:
    - system:nodes
  configuration:
    name: minio1-env-configuration
  exposeServices: {}
  features:
    enableSFTP: false
  image: minio/minio:RELEASE.2024-02-09T21-25-16Z
  imagePullPolicy: IfNotPresent
  imagePullSecret: {}
  mountPath: /export
  podManagementPolicy: Parallel
  pools:
  - affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: v1.min.io/tenant
              operator: In
              values:
              - minio1
            - key: v1.min.io/pool
              operator: In
              values:
              - ""
          topologyKey: kubernetes.io/hostname
    name: ss-0
    resources: {}
    servers: 4
    volumeClaimTemplate:
      apiVersion: v1
      kind: persistentvolumeclaims
      metadata:
        creationTimestamp: null
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: gp2
      status: {}
    volumesPerServer: 2
  requestAutoCert: true
  serviceAccountName: minio1-sa
  users:
  - name: minio1-user-1
status:
  availableReplicas: 0
  certificates: {}
  currentState: ""
  pools: null
  revision: 0
  syncVersion: ""
  usage: {}

---
apiVersion: v1
data:
  config.env: ZXhwb3J0IE1JTklPX1JPT1RfUEFTU1dPUkQ9IklCU0hiN1JLQ1ZOYWpzOHo2VEt0bWNlZmppdzg4Y3JseEVhZm44anAiCmV4cG9ydCBNSU5JT19ST09UX1VTRVI9IjJDQk9aRVZRWlkyOFdBU0tTOVdMIgo=
kind: Secret
metadata:
  creationTimestamp: null
  name: minio1-env-configuration
  namespace: minio-tenant-1

---
apiVersion: v1
data:
  CONSOLE_ACCESS_KEY: NVBOMjQxUTNFTUU0WFNBUTNZWFE=
  CONSOLE_SECRET_KEY: bkVqa3RsWEFJYlJyQXZvT3dlMFVMQ003eHVWNEliRFowRVI3QjFObg==
kind: Secret
metadata:
  creationTimestamp: null
  name: minio1-user-1
  namespace: minio-tenant-1

Apply it: create the namespace first, then run the same command without --output.

$ k create ns minio-tenant-1
namespace/minio-tenant-1 created
$ kubectl minio tenant create minio1 \
  --capacity 16Gi \
  --servers 4 \
  --volumes 8 \
  --namespace minio-tenant-1 \
  --storage-class gp2
W0308 18:58:11.797099   36063 warnings.go:70] unknown field "spec.pools[0].volumeClaimTemplate.metadata.creationTimestamp"

Tenant 'minio1' created in 'minio-tenant-1' Namespace

  Username: ET3U6W5UKQXFG7FDY1Q0
  Password: AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE
  Note: Copy the credentials to a secure location. MinIO will not display these again.

APPLICATION     SERVICE NAME    NAMESPACE       SERVICE TYPE    SERVICE PORT
MinIO           minio           minio-tenant-1  ClusterIP       443         
Console         minio1-console  minio-tenant-1  ClusterIP       9443        

Check the Pods.

$ k get po -A
NAMESPACE        NAME                                  READY   STATUS    RESTARTS       AGE
kube-system      aws-node-8n8mq                        2/2     Running   2 (5d7h ago)   8d
kube-system      aws-node-cf5b5                        2/2     Running   2 (5d7h ago)   8d
kube-system      aws-node-dtvlm                        2/2     Running   2 (5d7h ago)   8d
kube-system      coredns-5877997cb7-4hxql              1/1     Running   1 (5d7h ago)   8d
kube-system      coredns-5877997cb7-8nf5z              1/1     Running   1 (5d7h ago)   8d
kube-system      ebs-csi-controller-7cddb57f8d-9xrn2   6/6     Running   6 (5d7h ago)   8d
kube-system      ebs-csi-controller-7cddb57f8d-hjk2w   6/6     Running   6 (5d7h ago)   8d
kube-system      ebs-csi-node-btcwj                    3/3     Running   3 (5d7h ago)   8d
kube-system      ebs-csi-node-cjnl7                    3/3     Running   3 (5d7h ago)   8d
kube-system      ebs-csi-node-qd4tm                    3/3     Running   3 (5d7h ago)   8d
kube-system      kube-proxy-2xm44                      1/1     Running   1 (5d7h ago)   8d
kube-system      kube-proxy-k9fhs                      1/1     Running   1 (5d7h ago)   8d
kube-system      kube-proxy-x4j2c                      1/1     Running   1 (5d7h ago)   8d
minio-operator   console-86878b559f-jxxvg              1/1     Running   0              5m1s
minio-operator   minio-operator-54bf877d58-8jrvl       1/1     Running   0              5m1s
minio-operator   minio-operator-54bf877d58-fm8t7       1/1     Running   0              5m1s
minio-tenant-1   minio1-ss-0-0                         2/2     Running   0              2m3s
minio-tenant-1   minio1-ss-0-1                         2/2     Running   0              2m2s
minio-tenant-1   minio1-ss-0-2                         2/2     Running   0              2m2s
minio-tenant-1   minio1-ss-0-3                         2/2     Running   0              2m2s

Port-forward and access the console at https://localhost:9443.

$ k -n minio-tenant-1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
minio            ClusterIP   172.20.1.251    <none>        443/TCP    2m34s
minio1-console   ClusterIP   172.20.15.213   <none>        9443/TCP   2m34s
minio1-hl        ClusterIP   None            <none>        9000/TCP   2m33s
$ k -n minio-tenant-1 port-forward svc/minio1-console 9443:9443
Forwarding from 127.0.0.1:9443 -> 9443
Forwarding from [::1]:9443 -> 9443

Check the PVs/PVCs: 8 volumes (4 servers × 2 volumes per server) of 2Gi each, matching the 16Gi capacity.

$ k get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-1ebcb2c9-051c-4800-8974-e0664ac25fa3   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-3   gp2            <unset>                          4m30s
persistentvolume/pvc-2c60e4b0-3473-44ef-8bc7-a30043f5efcf   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-0   gp2            <unset>                          4m30s
persistentvolume/pvc-2d7e0cd1-d9e6-4ea5-8ab3-00272dc54350   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-0   gp2            <unset>                          4m30s
persistentvolume/pvc-535a3c5c-f7a8-4fe6-b7b2-ec2256e312d2   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-2   gp2            <unset>                          4m30s
persistentvolume/pvc-5aa0edf5-423f-431e-8547-7bc01a00ca25   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-2   gp2            <unset>                          4m30s
persistentvolume/pvc-84e16280-bb09-4cb3-a64e-1de7f4f8b469   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-1   gp2            <unset>                          4m30s
persistentvolume/pvc-b19e0010-0f93-4dbe-bdc3-3a22be6795e8   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-3   gp2            <unset>                          4m30s
persistentvolume/pvc-bc73c448-5ed2-4740-bc85-be7481126b0e   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-1   gp2            <unset>                          4m30s

NAMESPACE        NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-0   Bound    pvc-2d7e0cd1-d9e6-4ea5-8ab3-00272dc54350   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-1   Bound    pvc-84e16280-bb09-4cb3-a64e-1de7f4f8b469   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-2   Bound    pvc-535a3c5c-f7a8-4fe6-b7b2-ec2256e312d2   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-3   Bound    pvc-b19e0010-0f93-4dbe-bdc3-3a22be6795e8   2Gi        RWO            gp2            <unset>                 4m34s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-0   Bound    pvc-2c60e4b0-3473-44ef-8bc7-a30043f5efcf   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-1   Bound    pvc-bc73c448-5ed2-4740-bc85-be7481126b0e   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-2   Bound    pvc-5aa0edf5-423f-431e-8547-7bc01a00ca25   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-3   Bound    pvc-1ebcb2c9-051c-4800-8974-e0664ac25fa3   2Gi        RWO            gp2            <unset>                 4m34s

Access via the AWS CLI

Configure credentials for the AWS CLI. Here we use the tenant console user's credentials shown above.

$ aws configure --profile minio
AWS Access Key ID [None]: ET3U6W5UKQXFG7FDY1Q0
AWS Secret Access Key [None]: AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE
Default region name [None]: ap-northeast-1
Default output format [None]:

Set the signature version (MinIO expects AWS Signature v4).

aws configure set s3.signature_version s3v4 --profile minio

The resulting .aws/config looks like this:

[profile minio]
region = ap-northeast-1
s3 =
    signature_version = s3v4
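As an aside, recent AWS CLI v2 releases (2.13+) also support an endpoint_url key in the profile, which would avoid passing --endpoint-url on every command below. A sketch, assuming a CLI version with that feature:

```ini
[profile minio]
region = ap-northeast-1
endpoint_url = https://localhost:9000
s3 =
    signature_version = s3v4
```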

Port-forward in a separate terminal.

$ k -n minio-tenant-1 port-forward svc/minio1-hl 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

Set the profile.

export AWS_PROFILE=minio

Run the AWS CLI against the endpoint.

$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 mb s3://hoge-bucket
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
make_bucket: hoge-bucket
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
2024-03-21 18:38:51 hoge-bucket
$ echo hello > hello.txt
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 cp hello.txt s3://hoge-bucket/
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
upload: ./hello.txt to s3://hoge-bucket/hello.txt
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls s3://hoge-bucket/
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
2024-03-21 19:08:13          6 hello.txt

An InsecureRequestWarning is printed (certificate verification is disabled by --no-verify-ssl), but creating and listing the bucket and copying an object all succeeded.
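The warning is emitted by urllib3 inside the CLI's Python runtime, so one way to silence it is a Python warning filter. This assumes the CLI's interpreter honors PYTHONWARNINGS, which the Homebrew build shown above (a regular Python 3.11) does:

```shell
# Ignore warnings whose message starts with "Unverified HTTPS request"
# (prefix match, per Python's -W / PYTHONWARNINGS filter syntax).
export PYTHONWARNINGS="ignore:Unverified HTTPS request"
# Subsequent CLI calls, e.g.:
#   aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls
# no longer print the warning.
```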

Access via SFTP

Edit the tenant configuration.

k -n minio-tenant-1 edit tenant minio1
spec:
...
  features:
    enableSFTP: true # changed from false to true

Port 8022 is added to the minio1-hl Service's ports.

$ k -n minio-tenant-1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
minio            ClusterIP   172.20.1.251    <none>        443/TCP             13d
minio1-console   ClusterIP   172.20.15.213   <none>        9443/TCP            13d
minio1-hl        ClusterIP   None            <none>        9000/TCP,8022/TCP   13d

Port-forward this port as well.

$ k -n minio-tenant-1 port-forward svc/minio1-hl 8022:8022
Forwarding from 127.0.0.1:8022 -> 8022
Forwarding from [::1]:8022 -> 8022

Connect with sftp.

$ sftp -P 8022 ET3U6W5UKQXFG7FDY1Q0@localhost
The authenticity of host '[localhost]:8022 ([::1]:8022)' can't be established.
ECDSA key fingerprint is SHA256:fzvIFWM20Ay8Nj4zo/K+gvu4blDaoHSf2p9fdcQA5JI.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:8022' (ECDSA) to the list of known hosts.
ET3U6W5UKQXFG7FDY1Q0@localhost's password:
Connected to localhost.
sftp> ls
hoge-bucket
sftp> ls hoge-bucket
hoge-bucket/hello.txt
sftp>
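For repeated use, the connection settings can go into ~/.ssh/config under a host alias (minio1-sftp below is an arbitrary name chosen for illustration):

```
Host minio1-sftp
    HostName localhost
    Port 8022
    User ET3U6W5UKQXFG7FDY1Q0
```

After that, `sftp minio1-sftp` is enough.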