Investigate the memory usage of Istio's Envoy sidecar proxy.
Create a cluster.
CLUSTER_NAME="istio"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"
availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
iam:
  withOIDC: true
accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml
Create a node group consisting of a single large instance (m6i.32xlarge, 128 vCPU).
cat << EOF > m2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
managedNodeGroups:
  - name: m2
    instanceType: m6i.32xlarge
    minSize: 1
    maxSize: 20
    desiredCapacity: 1
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m2.yaml
Check the nodes.
$ k get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-80-97.ap-northeast-1.compute.internal Ready <none> 10m v1.29.6-eks-1552ad0
Installing metrics-server
Install metrics-server so that memory usage can be measured.
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
$ k -n kube-system get pods
NAME READY STATUS RESTARTS AGE
aws-node-fwtlr 2/2 Running 0 8m46s
coredns-676bf68468-f56zh 1/1 Running 0 41m
coredns-676bf68468-pmkwl 1/1 Running 0 15m
kube-proxy-99shl 1/1 Running 0 8m46s
metrics-server-75bf97fcc9-9thcf 1/1 Running 0 33s
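As a quick smoke test (not part of the original log) that the metrics API is actually serving, a plain node query should already return values:
k top nodes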
Installing Istio
For various reasons, install it with Helm.
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
For various reasons, install only the base chart and istiod.
$ helm install istio-base -n istio-system istio/base --version 1.23.0 --create-namespace
NAME: istio-base
LAST DEPLOYED: Wed Sep 18 20:47:17 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!
To learn more about the release, try:
$ helm status istio-base -n istio-system
$ helm get all istio-base -n istio-system
$ helm install istiod -n istio-system istio/istiod --version 1.23.0
NAME: istiod
LAST DEPLOYED: Wed Sep 18 20:47:43 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istiod" successfully installed!
To learn more about the release, try:
$ helm status istiod -n istio-system
$ helm get all istiod -n istio-system
Next steps:
* Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
* Try out our tasks to get started on common configurations:
* https://istio.io/latest/docs/tasks/traffic-management
* https://istio.io/latest/docs/tasks/security/
* https://istio.io/latest/docs/tasks/policy-enforcement/
* Review the list of actively supported releases, CVE publications and our hardening guide:
* https://istio.io/latest/docs/releases/supported-releases/
* https://istio.io/latest/news/security/
* https://istio.io/latest/docs/ops/best-practices/security/
For further documentation see https://istio.io website
Check the Pod.
$ k -n istio-system get po
NAME READY STATUS RESTARTS AGE
istiod-dd95d7bdc-hxv47 1/1 Running 0 3m57s
1 Pod
Create a Namespace and add the label that enables automatic sidecar injection.
$ k create ns ns1
namespace/ns1 created
$ k label namespace ns1 istio-injection=enabled
namespace/ns1 labeled
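As an aside, listing the label as a column is a handy way to double-check which namespaces have injection enabled (my addition):
k get ns -L istio-injection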
Create an nginx Deployment.
$ k -n ns1 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns1 get po
NAME READY STATUS RESTARTS AGE
test-7955cf7657-8zbn8 2/2 Running 0 8s
Check memory usage in this state: roughly 25 MiB for the sidecar.
$ k -n ns1 top pod --containers
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-8zbn8 istio-proxy 27m 25Mi
test-7955cf7657-8zbn8 nginx 33m 95Mi
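Since later steps compare dozens of proxies at once, an average is handier than eyeballing individual rows. A rough one-liner sketch (my own, not from the original session; it assumes the MEMORY column is always reported in Mi):
k -n ns1 top pod --containers --no-headers \
  | awk '$2 == "istio-proxy" {sum += $4 + 0; n++} END {print sum / n " Mi (avg over " n " proxies)"}'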
Check the number of objects.
$ k get no -A --no-headers | wc -l
1
$ k get po -A --no-headers | wc -l
7
$ k get svc -A --no-headers | wc -l
4
Check the istioctl version.
$ istioctl version
client version: 1.23.1
control plane version: 1.23.0
data plane version: 1.23.0 (1 proxies)
Check the state of the mesh.
$ istioctl proxy-status
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
test-7955cf7657-8zbn8.ns1 Kubernetes SYNCED (41s) SYNCED (41s) SYNCED (41s) SYNCED (41s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
Check the configuration and how many entries there are.
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
16
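To avoid repeating the four commands every time, the counts can be collected in one loop (a convenience sketch, not from the original session):
POD=test-7955cf7657-8zbn8.ns1
for kind in cluster listener route endpoint; do
  printf '%-9s %s\n' "$kind" "$(istioctl proxy-config $kind $POD | wc -l)"
done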
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
InboundPassthroughCluster - - - ORIGINAL_DST
PassthroughCluster - - - ORIGINAL_DST
agent - - - STATIC
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
metrics-server.kube-system.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES PORT MATCH DESTINATION
172.20.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443 ALL Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10 9153 Trans: raw_buffer; App: http/1.1,h2c Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 Addr: *:15001 Non-HTTP/Non-TCP
0.0.0.0 15006 Addr: *:15006 Non-HTTP/Non-TCP
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2 InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c InboundPassthroughCluster
0.0.0.0 15006 Trans: tls; App: TCP TLS InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer InboundPassthroughCluster
0.0.0.0 15006 Trans: tls InboundPassthroughCluster
0.0.0.0 15010 Trans: raw_buffer; App: http/1.1,h2c Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
172.20.34.15 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 Trans: raw_buffer; App: http/1.1,h2c Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME VHOST NAME DOMAINS MATCH VIRTUAL SERVICE
15010 istiod.istio-system.svc.cluster.local:15010 istiod.istio-system, 172.20.34.15 /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system.svc.cluster.local:9153 * /*
15014 istiod.istio-system.svc.cluster.local:15014 istiod.istio-system, 172.20.34.15 /*
InboundPassthroughCluster inbound|http|0 * /*
backend * /healthz/ready*
backend * /stats/prometheus*
InboundPassthroughCluster inbound|http|0 * /*
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.0.100.189:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250 HEALTHY OK outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
100 Pods
Scale the Deployment to 100 Pods.
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled
Confirm that all of them are Running.
$ k -n ns1 get pods
NAME READY STATUS RESTARTS AGE
test-7955cf7657-2dq7j 2/2 Running 0 109s
test-7955cf7657-2kl8f 2/2 Running 0 109s
test-7955cf7657-2pwf7 2/2 Running 0 106s
test-7955cf7657-2szkw 2/2 Running 0 108s
(omitted)
test-7955cf7657-zhm5p 2/2 Running 0 108s
test-7955cf7657-zm4hp 2/2 Running 0 107s
test-7955cf7657-zs7n7 2/2 Running 0 108s
test-7955cf7657-zwswj 2/2 Running 0 108s
Memory usage is around 22-24 MiB, i.e. it has not grown. Presumably this is because merely adding Pods does not add any configuration.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-2dq7j istio-proxy 4m 23Mi
test-7955cf7657-2dq7j nginx 0m 92Mi
test-7955cf7657-2kl8f istio-proxy 3m 22Mi
test-7955cf7657-2kl8f nginx 0m 90Mi
test-7955cf7657-2pwf7 istio-proxy 4m 22Mi
test-7955cf7657-2pwf7 nginx 0m 91Mi
test-7955cf7657-2szkw istio-proxy 4m 24Mi
test-7955cf7657-2szkw nginx 0m 90Mi
test-7955cf7657-4wgqj istio-proxy 4m 23Mi
The number of configuration entries has not increased either.
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
16
Creating a Service
Let's create a Service.
$ k -n ns1 expose deployment test --port=80 --target-port=80
service/test exposed
Memory usage is around 24-25 MiB, only a slight increase.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-2dq7j istio-proxy 5m 24Mi
test-7955cf7657-2dq7j nginx 0m 92Mi
test-7955cf7657-2kl8f istio-proxy 5m 25Mi
test-7955cf7657-2kl8f nginx 0m 90Mi
test-7955cf7657-2pwf7 istio-proxy 5m 24Mi
test-7955cf7657-2pwf7 nginx 0m 91Mi
test-7955cf7657-2szkw istio-proxy 5m 25Mi
test-7955cf7657-2szkw nginx 0m 90Mi
test-7955cf7657-4wgqj istio-proxy 5m 24Mi
The configuration has also grown slightly. Adding the Service added outbound entries, and because these Pods themselves back that Service, inbound entries were added too. Endpoints grew by the number of Pods behind the Service.
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
18
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
29
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
11
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
116
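To see only what the new Service contributed, istioctl's filter flags help. For example, counting just the endpoints of the outbound cluster for test.ns1 (my addition; the --cluster filter belongs to istioctl proxy-config endpoints) should give roughly the 100 Pods plus a header line:
istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 \
  --cluster "outbound|80||test.ns1.svc.cluster.local" | wc -l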
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
80 - inbound ORIGINAL_DST
BlackHoleCluster - - - STATIC
InboundPassthroughCluster - - - ORIGINAL_DST
PassthroughCluster - - - ORIGINAL_DST
agent - - - STATIC
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
metrics-server.kube-system.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
test.ns1.svc.cluster.local 80 - outbound EDS
xds-grpc - - - STATIC
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES PORT MATCH DESTINATION
172.20.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.160.18 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
172.20.160.18 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443 ALL Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10 9153 Trans: raw_buffer; App: http/1.1,h2c Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 Addr: *:15001 Non-HTTP/Non-TCP
0.0.0.0 15006 Addr: *:15006 Non-HTTP/Non-TCP
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2 InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c InboundPassthroughCluster
0.0.0.0 15006 Trans: tls; App: TCP TLS InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer InboundPassthroughCluster
0.0.0.0 15006 Trans: tls InboundPassthroughCluster
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: raw_buffer; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: tls; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15010 Trans: raw_buffer; App: http/1.1,h2c Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
172.20.34.15 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 Trans: raw_buffer; App: http/1.1,h2c Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME VHOST NAME DOMAINS MATCH VIRTUAL SERVICE
15014 istiod.istio-system.svc.cluster.local:15014 istiod.istio-system, 172.20.34.15 /*
test.ns1.svc.cluster.local:80 test.ns1.svc.cluster.local:80 * /*
15010 istiod.istio-system.svc.cluster.local:15010 istiod.istio-system, 172.20.34.15 /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system.svc.cluster.local:9153 * /*
InboundPassthroughCluster inbound|http|0 * /*
inbound|80|| inbound|http|80 * /*
backend * /healthz/ready*
backend * /stats/prometheus*
InboundPassthroughCluster inbound|http|0 * /*
inbound|80|| inbound|http|80 * /*
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.0.100.189:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.64.108:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.64.140:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.64.147:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.64.35:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.64.97:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.65.151:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.65.50:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.65.60:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.65.99:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.66.110:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.66.125:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.66.137:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.66.21:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.66.70:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.67.199:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.67.58:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.69.119:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.69.180:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.69.84:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.70.189:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.70.19:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.70.243:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.70.247:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.70.9:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.71.18:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.71.200:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.71.27:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.71.63:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.71.93:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.72.13:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.72.242:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.73.178:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.73.94:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.74.117:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.74.14:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.74.159:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.108:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.112:10250 HEALTHY OK outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.146:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.200:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.248:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.51:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.75.6:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.0.76.216:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.76.229:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.76.80:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.76.83:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.77.219:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.77.59:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.78.160:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.78.19:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.78.215:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.79.181:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.79.43:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.79.57:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.80.127:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.80.252:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.81.201:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.81.23:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.81.87:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.82.119:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.82.208:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.82.24:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.82.40:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.83.218:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.84.174:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.84.212:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.84.58:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.85.135:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.85.229:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.85.230:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.85.55:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.86.118:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.86.171:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.86.237:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.86.91:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.87.126:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.87.136:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.87.21:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.87.97:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.88.169:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.88.189:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.88.53:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.88.71:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.88.73:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.89.118:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.89.147:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.89.46:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.90.10:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.90.50:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.91.125:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.91.250:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.91.253:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.92.180:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.92.25:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.102:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.206:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.212:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.243:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.25:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.93.78:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.94.255:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.95.111:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.95.225:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
10.0.95.64:80 HEALTHY OK outbound|80||test.ns1.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
Creating a Headless Service
Delete the Service and recreate it as a headless Service.
$ k -n ns1 delete svc test
service "test" deleted
$ k -n ns1 expose deployment test --port=80 --target-port=80 --cluster-ip=None
service/test exposed
$ k -n ns1 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test ClusterIP None <none> 80/TCP 14s
Memory rose a little, to about 31 MiB.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-2dq7j istio-proxy 3m 31Mi
test-7955cf7657-2dq7j nginx 0m 92Mi
test-7955cf7657-2kl8f istio-proxy 4m 31Mi
test-7955cf7657-2kl8f nginx 0m 90Mi
test-7955cf7657-2pwf7 istio-proxy 4m 31Mi
test-7955cf7657-2pwf7 nginx 0m 91Mi
test-7955cf7657-2szkw istio-proxy 3m 31Mi
test-7955cf7657-2szkw nginx 0m 90Mi
test-7955cf7657-4wgqj istio-proxy 3m 31Mi
Even if the configuration shrinks, memory usage probably does not drop right away, so just to be sure, restart the Deployment to recreate the Pods.
$ k -n ns1 rollout restart deployment test
deployment.apps/test restarted
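Before measuring, it is worth waiting for the rollout to settle (my addition):
k -n ns1 rollout status deployment test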
If anything, the rollout caused a slight increase. The extra objects that exist while the rollout is in progress may temporarily inflate the configuration.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-74c897698f-22sjb istio-proxy 10m 37Mi
test-74c897698f-22sjb nginx 0m 92Mi
test-74c897698f-2h2x9 istio-proxy 8m 36Mi
test-74c897698f-2h2x9 nginx 0m 90Mi
test-74c897698f-2scl7 istio-proxy 10m 38Mi
test-74c897698f-2scl7 nginx 0m 90Mi
test-74c897698f-4258d istio-proxy 11m 37Mi
test-74c897698f-4258d nginx 0m 91Mi
test-74c897698f-45fnj istio-proxy 9m 37Mi
Look at the configuration.
$ istioctl proxy-status | head
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
test-74c897698f-22sjb.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-2h2x9.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-2scl7.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-4258d.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-45fnj.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-4zxz7.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-56xsq.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-5rmvq.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
test-74c897698f-5z6kd.ns1 Kubernetes SYNCED (85s) SYNCED (85s) SYNCED (55s) SYNCED (85s) IGNORED istiod-dd95d7bdc-lbk7q 1.23.0
With the headless Service, listeners increased while endpoints decreased.
$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1 | wc -l
18
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1 | wc -l
225
$ istioctl proxy-config route test-74c897698f-22sjb.ns1 | wc -l
11
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1 | wc -l
16
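Most of that listener growth is the per-Pod-IP pair of entries on port 80. Filtering by port (my addition) isolates them; it should show roughly two lines per Pod plus the header:
istioctl proxy-config listener test-74c897698f-22sjb.ns1 --port 80 | wc -l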
$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
80 - inbound ORIGINAL_DST
BlackHoleCluster - - - STATIC
InboundPassthroughCluster - - - ORIGINAL_DST
PassthroughCluster - - - ORIGINAL_DST
agent - - - STATIC
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
metrics-server.kube-system.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
test.ns1.svc.cluster.local 80 - outbound ORIGINAL_DST
xds-grpc - - - STATIC
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1
ADDRESSES PORT MATCH DESTINATION
172.20.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.64.197 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.64.197 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.218 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.64.218 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.91 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.64.91 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.92 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.64.92 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.14 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.65.14 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.15 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.65.15 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.153 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.65.153 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.76 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.65.76 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.95 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.65.95 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.200 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.66.200 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.207 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.66.207 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.113 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.67.113 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.132 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.67.132 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.208 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.67.208 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.249 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.67.249 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.60 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.67.60 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.68.175 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.68.175 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.160 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.69.160 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.194 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.69.194 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.52 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.69.52 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.140 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.70.140 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.242 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.70.242 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.152 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.71.152 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.190 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.71.190 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.221 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.71.221 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.29 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.71.29 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.58 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.71.58 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.72.127 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.72.127 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.141 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.73.141 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.188 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.73.188 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.32 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.73.32 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.73 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.73.73 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.216 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.74.216 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.73 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.74.73 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.147 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.75.147 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.178 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.75.178 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.197 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.75.197 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.215 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.75.215 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.76.34 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.76.34 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.106 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.77.106 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.114 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.77.114 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.107 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.107 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.112 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.112 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.119 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.119 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.125 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.125 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.230 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.230 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.244 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.244 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.31 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.31 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.63 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.63 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.83 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.78.83 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.153 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.79.153 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.161 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.79.161 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.21 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.79.21 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.238 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.79.238 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.239 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.79.239 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.166 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.80.166 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.223 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.80.223 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.29 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.80.29 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.133 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.81.133 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.192 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.81.192 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.231 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.81.231 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.127 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.82.127 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.141 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.82.141 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.220 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.82.220 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.235 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.82.235 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.105 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.83.105 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.26 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.83.26 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.30 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.83.30 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.84.208 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.84.208 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.138 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.85.138 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.228 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.85.228 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.69 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.85.69 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.125 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.86.125 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.130 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.86.130 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.144 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.87.144 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.254 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.87.254 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.90 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.87.90 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.183 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.89.183 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.82 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.89.82 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.90.139 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.90.139 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.17 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.91.17 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.226 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.91.226 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.233 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.91.233 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.4 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.91.4 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.92.126 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.92.126 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.125 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.93.125 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.131 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.93.131 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.142 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.93.142 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.204 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.93.204 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.112 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.112 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.118 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.118 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.236 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.236 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.33 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.33 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.44 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.44 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.54 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.94.54 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.238 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.95.238 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.244 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.95.244 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.4 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.95.4 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.58 80 Trans: raw_buffer; App: http/1.1,h2c Route: test.ns1.svc.cluster.local:80
10.0.95.58 80 ALL Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443 ALL Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10 9153 Trans: raw_buffer; App: http/1.1,h2c Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 15001 ALL PassthroughCluster
0.0.0.0 15001 Addr: *:15001 Non-HTTP/Non-TCP
0.0.0.0 15006 Addr: *:15006 Non-HTTP/Non-TCP
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2 InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c InboundPassthroughCluster
0.0.0.0 15006 Trans: tls; App: TCP TLS InboundPassthroughCluster
0.0.0.0 15006 Trans: raw_buffer InboundPassthroughCluster
0.0.0.0 15006 Trans: tls InboundPassthroughCluster
0.0.0.0 15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: raw_buffer; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15006 Trans: tls; Addr: *:80 Cluster: inbound|80||
0.0.0.0 15010 Trans: raw_buffer; App: http/1.1,h2c Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
172.20.34.15 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 Trans: raw_buffer; App: http/1.1,h2c Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-74c897698f-22sjb.ns1
NAME VHOST NAME DOMAINS MATCH VIRTUAL SERVICE
test.ns1.svc.cluster.local:80 test.ns1.svc.cluster.local:80 * /*
15010 istiod.istio-system.svc.cluster.local:15010 istiod.istio-system, 172.20.34.15 /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system.svc.cluster.local:9153 * /*
15014 istiod.istio-system.svc.cluster.local:15014 istiod.istio-system, 172.20.34.15 /*
InboundPassthroughCluster inbound|http|0 * /*
inbound|80|| inbound|http|80 * /*
InboundPassthroughCluster inbound|http|0 * /*
inbound|80|| inbound|http|80 * /*
backend * /healthz/ready*
backend * /stats/prometheus*
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.0.100.189:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250 HEALTHY OK outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
Instead of a rollout, scale in, wait a moment, and scale back out.
$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled
This brought it down slightly compared to before.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-74c897698f-24x2k istio-proxy 3m 33Mi
test-74c897698f-24x2k nginx 0m 91Mi
test-74c897698f-2kcck istio-proxy 4m 33Mi
test-74c897698f-2kcck nginx 0m 95Mi
test-74c897698f-2kgdx istio-proxy 3m 33Mi
test-74c897698f-2kgdx nginx 0m 94Mi
test-74c897698f-462wj istio-proxy 3m 33Mi
test-74c897698f-462wj nginx 0m 91Mi
test-74c897698f-48rhq istio-proxy 3m 34Mi
Adding nodes
Instead of one large node, spread the 100 Pods across many small nodes.
Create a node group of small instances (m6i.large, 2 vCPU).
cat << EOF > m3.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
managedNodeGroups:
  - name: m3
    instanceType: m6i.large
    minSize: 1
    maxSize: 20
    desiredCapacity: 20
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m3.yaml
Delete the large-instance node group.
eksctl delete nodegroup m2 --cluster ${CLUSTER_NAME}
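eksctl drains the old node group as part of the delete. To watch the replacement nodes come up, filtering on the label that EKS puts on managed node group members works (my addition):
k get nodes -l eks.amazonaws.com/nodegroup=m3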
Check the nodes.
$ k get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-106-38.ap-northeast-1.compute.internal Ready <none> 4m21s v1.29.6-eks-1552ad0
ip-10-0-107-27.ap-northeast-1.compute.internal Ready <none> 4m19s v1.29.6-eks-1552ad0
ip-10-0-107-55.ap-northeast-1.compute.internal Ready <none> 4m23s v1.29.6-eks-1552ad0
ip-10-0-108-95.ap-northeast-1.compute.internal Ready <none> 4m17s v1.29.6-eks-1552ad0
ip-10-0-109-108.ap-northeast-1.compute.internal Ready <none> 4m22s v1.29.6-eks-1552ad0
ip-10-0-114-75.ap-northeast-1.compute.internal Ready <none> 4m10s v1.29.6-eks-1552ad0
ip-10-0-117-226.ap-northeast-1.compute.internal Ready <none> 4m22s v1.29.6-eks-1552ad0
ip-10-0-121-37.ap-northeast-1.compute.internal Ready <none> 4m11s v1.29.6-eks-1552ad0
ip-10-0-122-44.ap-northeast-1.compute.internal Ready <none> 4m21s v1.29.6-eks-1552ad0
ip-10-0-64-210.ap-northeast-1.compute.internal Ready <none> 4m17s v1.29.6-eks-1552ad0
ip-10-0-65-152.ap-northeast-1.compute.internal Ready <none> 4m19s v1.29.6-eks-1552ad0
ip-10-0-71-158.ap-northeast-1.compute.internal Ready <none> 4m18s v1.29.6-eks-1552ad0
ip-10-0-71-188.ap-northeast-1.compute.internal Ready <none> 4m17s v1.29.6-eks-1552ad0
ip-10-0-73-100.ap-northeast-1.compute.internal Ready <none> 4m15s v1.29.6-eks-1552ad0
ip-10-0-73-13.ap-northeast-1.compute.internal Ready <none> 4m17s v1.29.6-eks-1552ad0
ip-10-0-80-97.ap-northeast-1.compute.internal NotReady,SchedulingDisabled <none> 66m v1.29.6-eks-1552ad0
ip-10-0-81-103.ap-northeast-1.compute.internal Ready <none> 4m18s v1.29.6-eks-1552ad0
ip-10-0-88-105.ap-northeast-1.compute.internal Ready <none> 4m16s v1.29.6-eks-1552ad0
ip-10-0-94-113.ap-northeast-1.compute.internal Ready <none> 4m18s v1.29.6-eks-1552ad0
ip-10-0-95-162.ap-northeast-1.compute.internal Ready <none> 4m17s v1.29.6-eks-1552ad0
ip-10-0-97-3.ap-northeast-1.compute.internal Ready <none> 4m21s v1.29.6-eks-1552ad0
Just in case, scale in and scale back out.
$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled
Memory usage is almost unchanged.
$ k -n ns1 top pod --containers | head
POD NAME CPU(cores) MEMORY(bytes)
test-74c897698f-2ctvb istio-proxy 2m 33Mi
test-74c897698f-2ctvb nginx 0m 2Mi
test-74c897698f-2fvbs istio-proxy 1m 33Mi
test-74c897698f-2fvbs nginx 0m 2Mi
test-74c897698f-2vgdl istio-proxy 2m 33Mi
test-74c897698f-2vgdl nginx 0m 2Mi
test-74c897698f-4fb2g istio-proxy 1m 33Mi
test-74c897698f-4fb2g nginx 0m 2Mi
test-74c897698f-4lh9r istio-proxy 2m 33Mi
The configuration has not grown either, so adding nodes by itself changes nothing.
$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
18
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
225
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
11
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
16
Adding a namespace
Build the same kind of setup in ns2, this time with a regular (non-headless) Service.
$ k create ns ns2
namespace/ns2 created
$ k label namespace ns2 istio-injection=enabled
namespace/ns2 labeled
$ k -n ns2 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns2 scale deployment test --replicas=100
deployment.apps/test scaled
$ k -n ns2 expose deployment test --port=80 --target-port=80
service/test exposed
Confirm that all Pods are Running.
$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system istiod-dd95d7bdc-jw984 1/1 Running 0 18m
kube-system aws-node-2nhft 2/2 Running 0 20m
kube-system aws-node-2wzwq 2/2 Running 0 20m
kube-system aws-node-4jqdn 2/2 Running 0 20m
kube-system aws-node-5h9gd 2/2 Running 0 20m
kube-system aws-node-6q9kv 2/2 Running 0 20m
kube-system aws-node-d4z89 2/2 Running 0 20m
kube-system aws-node-dmpzs 2/2 Running 0 20m
kube-system aws-node-jbrt8 2/2 Running 0 20m
kube-system aws-node-k5v7d 2/2 Running 0 20m
kube-system aws-node-lphnm 2/2 Running 0 20m
kube-system aws-node-lz5xq 2/2 Running 0 20m
kube-system aws-node-p46mp 2/2 Running 0 20m
kube-system aws-node-p4llc 2/2 Running 0 20m
kube-system aws-node-q2n84 2/2 Running 0 20m
kube-system aws-node-rg87t 2/2 Running 0 20m
kube-system aws-node-tkwdd 2/2 Running 0 20m
kube-system aws-node-vt67z 2/2 Running 0 20m
kube-system aws-node-wbd9v 2/2 Running 0 20m
kube-system aws-node-wtq4m 2/2 Running 0 20m
kube-system aws-node-z6mft 2/2 Running 0 20m
kube-system coredns-676bf68468-8kg66 1/1 Running 0 18m
kube-system coredns-676bf68468-tjl4f 1/1 Running 0 19m
kube-system kube-proxy-2mzvv 1/1 Running 0 20m
kube-system kube-proxy-47fms 1/1 Running 0 20m
kube-system kube-proxy-4vhzw 1/1 Running 0 20m
kube-system kube-proxy-67z7x 1/1 Running 0 20m
kube-system kube-proxy-788vj 1/1 Running 0 20m
kube-system kube-proxy-d7pns 1/1 Running 0 20m
kube-system kube-proxy-g6xvm 1/1 Running 0 20m
kube-system kube-proxy-h5vtq 1/1 Running 0 20m
kube-system kube-proxy-h7kjq 1/1 Running 0 20m
kube-system kube-proxy-kmrsz 1/1 Running 0 20m
kube-system kube-proxy-lbfwz 1/1 Running 0 20m
kube-system kube-proxy-mz7cj 1/1 Running 0 20m
kube-system kube-proxy-nr6wn 1/1 Running 0 20m
kube-system kube-proxy-qtsbk 1/1 Running 0 20m
kube-system kube-proxy-tcjf5 1/1 Running 0 20m
kube-system kube-proxy-vjc64 1/1 Running 0 20m
kube-system kube-proxy-wrh2h 1/1 Running 0 20m
kube-system kube-proxy-x492q 1/1 Running 0 20m
kube-system kube-proxy-zngh4 1/1 Running 0 20m
kube-system kube-proxy-zrh4c 1/1 Running 0 20m
kube-system metrics-server-75bf97fcc9-fhwmj 1/1 Running 0 19m
ns1 test-74c897698f-2ctvb 2/2 Running 0 16m
ns1 test-74c897698f-2fvbs 2/2 Running 0 16m
ns1 test-74c897698f-2vgdl 2/2 Running 0 16m
ns1 test-74c897698f-4fb2g 2/2 Running 0 16m
(omitted)
ns2 test-7955cf7657-z58s8 2/2 Running 0 106s
ns2 test-7955cf7657-zhz67 2/2 Running 0 106s
ns2 test-7955cf7657-zplrx 2/2 Running 0 105s
ns2 test-7955cf7657-zx6zd 2/2 Running 0 107s
The increase in memory usage is about the same as when the 100 Pods and the Service were added to ns1.
$ k -n ns1 top pod --containers | head -5
POD NAME CPU(cores) MEMORY(bytes)
test-74c897698f-2ctvb istio-proxy 2m 45Mi
test-74c897698f-2ctvb nginx 0m 2Mi
test-74c897698f-2fvbs istio-proxy 1m 38Mi
test-74c897698f-2fvbs nginx 0m 2Mi
$ k -n ns2 top pod --containers | head -5
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-27xkv istio-proxy 2m 38Mi
test-7955cf7657-27xkv nginx 0m 2Mi
test-7955cf7657-29jhg istio-proxy 2m 38Mi
test-7955cf7657-29jhg nginx 0m 2Mi
In the configuration, endpoints increase, but the rest barely changes.
$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
19
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
227
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
12
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
116
Adding Services
Add nine more Services per namespace, all targeting the same 100 Pods (a scripted equivalent is sketched after the commands).
$ k -n ns1 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns1 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns1 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns1 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns1 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns1 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns1 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns1 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns1 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
$ k -n ns2 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns2 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns2 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns2 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns2 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns2 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns2 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns2 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns2 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
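The same 18 commands can also be scripted as a loop (an equivalent sketch):
for ns in ns1 ns2; do
  for port in $(seq 81 89); do
    k -n $ns expose deployment test --port=$port --target-port=80 --name test$port
  done
done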
Even this does not increase memory all that much.
$ k -n ns1 top pod --containers | head -5
POD NAME CPU(cores) MEMORY(bytes)
test-74c897698f-2ctvb istio-proxy 2m 47Mi
test-74c897698f-2ctvb nginx 0m 2Mi
test-74c897698f-2fvbs istio-proxy 1m 40Mi
test-74c897698f-2fvbs nginx 0m 2Mi
$ k -n ns2 top pod --containers | head -5
POD NAME CPU(cores) MEMORY(bytes)
test-7955cf7657-27xkv istio-proxy 2m 41Mi
test-7955cf7657-27xkv nginx 0m 2Mi
test-7955cf7657-29jhg istio-proxy 2m 40Mi
test-7955cf7657-29jhg nginx 0m 2Mi
The number of endpoints has grown considerably: 9 EDS Services × 100 Pods in ns1 (test there is headless) plus 10 Services × 100 Pods in ns2 accounts for 1,900 of the 1,916 lines.
$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
37
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
263
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
30
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
1916
Check the object counts in this state.
$ k get no -A --no-headers | wc -l
20
$ k get po -A --no-headers | wc -l
244
$ k get svc -A --no-headers | wc -l
24
Summary
- From 25 MiB at 1 node / 7 Pods / 4 Services, we only managed to push it to about 47 MiB even at 20 nodes / 244 Pods / 24 Services
- Adding Pods alone does not increase memory; a Service is needed
- Creating a Service adds endpoints (one per Pod behind that Service)
- With a headless Service, listeners grow instead of endpoints
- Adding nodes alone does not increase memory
- Presumably, if a DaemonSet is exposed by a Service, memory usage would grow as nodes are added
In the end, it comes down to the number of objects, such as Pods and Services, deployed in the cluster and the complexity of the routing. It is hard to state a simple relationship with Pod, node, or Service counts alone.
To be continued.
Addendum
In large clusters Envoy's memory usage can balloon, and it is reportedly important to use the Sidecar resource to narrow the set of destinations each proxy is configured with.
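For reference, a minimal Sidecar sketch (my own illustration, not part of the original investigation) that restricts proxies in ns1 to configuration for their own namespace and istio-system:
cat << EOF | k apply -f -
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: ns1
spec:
  egress:
    - hosts:
        - "./*"
        - "istio-system/*"
EOF
With something like this in place, the ns1 proxies would no longer receive clusters and endpoints for ns2, which is exactly the kind of growth observed above.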