Notes from a quick, rough trial of EKS Anywhere and the EKS Connector.
References
- First impressions of EKS Anywhere
- A story about deploying EKS Anywhere in production
- https://anywhere.eks.amazonaws.com/docs/
- https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
Procedure
I want to try the Docker provider locally, but my Mac probably doesn't have enough memory, so instead I launch an Ubuntu EC2 instance that meets the requirements.
Everything below is done logged in to that instance.
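Before going further, it's worth a quick sanity check that the instance is big enough for the Docker provider. The thresholds below are my reading of the EKS Anywhere docs (4 vCPUs, 16 GB of memory), so treat this as a sketch and verify against the docs:

```shell
# Rough preflight check for the Docker provider.
# MIN_* values are my reading of the EKS Anywhere requirements; verify in the docs.
MIN_CPUS=4
MIN_MEM_GB=16

cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)

echo "cpus=${cpus} mem_gb=${mem_gb}"
if [ "$cpus" -ge "$MIN_CPUS" ] && [ "$mem_gb" -ge "$MIN_MEM_GB" ]; then
    echo "looks big enough"
else
    echo "probably too small"
fi
```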
Preparation
```
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```
Add the ubuntu user to the docker group.
```
sudo usermod -aG docker ubuntu
```
Log out and back in once, then verify the Docker installation.
```
ubuntu@ip-172-31-34-33:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:27 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:33 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
Install eksctl.
```
curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
```
```
ubuntu@ip-172-31-34-33:~$ eksctl version
0.66.0
```
Install the eksctl-anywhere plugin.
```
export EKSA_RELEASE="0.5.0" OS="$(uname -s | tr A-Z a-z)"
curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/1/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/
```
```
ubuntu@ip-172-31-34-33:~$ eksctl anywhere version
v0.5.0
```
Also install kubectl.

```
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```
Creating a local cluster
Create a local cluster.
Generate a Cluster Config.
```
CLUSTER_NAME=dev-cluster
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
    --provider docker > $CLUSTER_NAME.yaml
```
Check the generated file.
```
ubuntu@ip-172-31-34-33:~$ cat $CLUSTER_NAME.yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: dev-cluster
spec:
  clusterNetwork:
    cni: cilium
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: dev-cluster
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.21"
  workerNodeGroupConfigurations:
  - count: 1

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: dev-cluster
spec: {}

---
```
Create the cluster. A bootstrap cluster is created first, and the workload cluster is then created from it via Cluster API.
```
ubuntu@ip-172-31-34-33:~$ eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Performing setup and validations
Warning: The docker infrastructure provider is meant for local development and testing only
✅ Docker Provider setup is valid
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing storage class on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing AddonManager and GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Writing cluster config file
Deleting bootstrap cluster
🎉 Cluster created!
```
A KUBECONFIG file has been written locally; take a look.
```
ubuntu@ip-172-31-34-33:~$ cat ${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:44563
  name: dev-cluster
contexts:
- context:
    cluster: dev-cluster
    user: dev-cluster-admin
  name: dev-cluster-admin@dev-cluster
current-context: dev-cluster-admin@dev-cluster
kind: Config
preferences: {}
users:
- name: dev-cluster-admin
  user:
    client-certificate-data: (snip)
    client-key-data: (snip)
```
Connect to the cluster.
```
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
```
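As an aside, if a script needs the API server endpoint, one naive way to get it is to pull the `server:` line out of the kubeconfig with awk. A sketch that assumes the single-cluster layout shown above (the first `server:` entry wins):

```shell
# Print the API server URL from a single-cluster kubeconfig (first "server:" wins).
api_server() {
  awk '$1 == "server:" { print $2; exit }' "$1"
}

# e.g.: api_server "${KUBECONFIG}"
```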
Poke around the cluster.
```
ubuntu@ip-172-31-34-33:~$ kubectl get ns
NAME                                STATUS   AGE
capd-system                         Active   8m27s
capi-kubeadm-bootstrap-system       Active   8m37s
capi-kubeadm-control-plane-system   Active   8m30s
capi-system                         Active   8m38s
capi-webhook-system                 Active   8m40s
cert-manager                        Active   9m20s
default                             Active   10m
eksa-system                         Active   7m42s
etcdadm-bootstrap-provider-system   Active   8m36s
etcdadm-controller-system           Active   8m33s
kube-node-lease                     Active   10m
kube-public                         Active   10m
kube-system                         Active   10m
```
```
ubuntu@ip-172-31-34-33:~$ kubectl get node
NAME                                STATUS   ROLES                  AGE   VERSION
dev-cluster-l6r28                   Ready    control-plane,master   13m   v1.21.2-eks-1-21-4
dev-cluster-md-0-8475954cd9-28n48   Ready    <none>                 12m   v1.21.2-eks-1-21-4
```
```
ubuntu@ip-172-31-34-33:~$ kubectl get po -A
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-659dd5f8bc-thw5j                         2/2     Running   0          13m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-69889cb844-glfq2       2/2     Running   0          13m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-6ddc66fb75-fvfdp   2/2     Running   0          13m
capi-system                         capi-controller-manager-db59f5789-t5c9f                          2/2     Running   0          13m
capi-webhook-system                 capi-controller-manager-64b8c548db-9b4qh                         2/2     Running   0          13m
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-68b8cc9759-7vnx5       2/2     Running   0          13m
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-7dc88f767d-k7jkq   2/2     Running   0          13m
cert-manager                        cert-manager-5f6b885b4-tp8vn                                     1/1     Running   0          14m
cert-manager                        cert-manager-cainjector-bb6d9bcb5-2lfhp                          1/1     Running   0          14m
cert-manager                        cert-manager-webhook-56cbc8f5b8-9sxg9                            1/1     Running   0          14m
eksa-system                         eksa-controller-manager-6769764b45-vgr5j                         2/2     Running   0          12m
etcdadm-bootstrap-provider-system   etcdadm-bootstrap-provider-controller-manager-54476b7bf9-8vfnp   2/2     Running   0          13m
etcdadm-controller-system           etcdadm-controller-controller-manager-d5795556-qr668             2/2     Running   0          13m
kube-system                         cilium-cmpn5                                                     1/1     Running   0          14m
kube-system                         cilium-operator-6bf46cc6c6-75bqs                                 1/1     Running   0          14m
kube-system                         cilium-operator-6bf46cc6c6-f5d8m                                 1/1     Running   0          14m
kube-system                         cilium-zdxrf                                                     1/1     Running   0          14m
kube-system                         coredns-7c68f85774-6vm9m                                         1/1     Running   0          15m
kube-system                         coredns-7c68f85774-9tcdr                                         1/1     Running   0          15m
kube-system                         kube-apiserver-dev-cluster-l6r28                                 1/1     Running   0          15m
kube-system                         kube-controller-manager-dev-cluster-l6r28                        1/1     Running   0          15m
kube-system                         kube-proxy-74p8l                                                 1/1     Running   0          15m
kube-system                         kube-proxy-nlghh                                                 1/1     Running   0          14m
kube-system                         kube-scheduler-dev-cluster-l6r28                                 1/1     Running   0          15m
```
It runs on kind. Four containers are up: a worker node, a control-plane (master) node, an etcd node, and a load balancer.
```
ubuntu@ip-172-31-34-33:~$ docker ps
CONTAINER ID   IMAGE                                                                                 COMMAND                  CREATED          STATUS          PORTS                                  NAMES
01722a50e6f5   public.ecr.aws/eks-anywhere/kubernetes-sigs/kind/node:v1.21.2-eks-d-1-21-4-eks-a-1    "/usr/local/bin/entr…"   15 minutes ago   Up 15 minutes                                          dev-cluster-md-0-8475954cd9-28n48
27be70abeb6b   public.ecr.aws/eks-anywhere/kubernetes-sigs/kind/node:v1.21.2-eks-d-1-21-4-eks-a-1    "/usr/local/bin/entr…"   16 minutes ago   Up 16 minutes   42591/tcp, 127.0.0.1:42591->6443/tcp   dev-cluster-l6r28
631d71cf079b   public.ecr.aws/eks-anywhere/kubernetes-sigs/kind/node:v1.21.2-eks-d-1-21-4-eks-a-1    "/usr/local/bin/entr…"   16 minutes ago   Up 16 minutes                                          dev-cluster-etcd-lgz9t
4fef8b84f08e   kindest/haproxy:v20210715-a6da3463                                                    "haproxy -sf 7 -W -d…"   16 minutes ago   Up 16 minutes   44563/tcp, 0.0.0.0:44563->6443/tcp     dev-cluster-lb
```
Deploying a test workload
```
ubuntu@ip-172-31-34-33:~$ kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
deployment.apps/hello-eks-a created
service/hello-eks-a created
```
Check the Pod.
```
ubuntu@ip-172-31-34-33:~$ kubectl get pods -l app=hello-eks-a
NAME                          READY   STATUS    RESTARTS   AGE
hello-eks-a-9644dd8dc-pl5t2   1/1     Running   0          9s
```
Check the logs.
```
ubuntu@ip-172-31-34-33:~$ kubectl logs -l app=hello-eks-a
2021/09/09 21:08:31 [notice] 1#1: using the "epoll" event method
2021/09/09 21:08:31 [notice] 1#1: nginx/1.21.1
2021/09/09 21:08:31 [notice] 1#1: built by gcc 10.3.1 20210424 (Alpine 10.3.1_git20210424)
2021/09/09 21:08:31 [notice] 1#1: OS: Linux 5.4.0-1045-aws
2021/09/09 21:08:31 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/09/09 21:08:31 [notice] 1#1: start worker processes
2021/09/09 21:08:31 [notice] 1#1: start worker process 37
2021/09/09 21:08:31 [notice] 1#1: start worker process 38
2021/09/09 21:08:31 [notice] 1#1: start worker process 39
2021/09/09 21:08:31 [notice] 1#1: start worker process 40
```
Port-forward to it.
```
kubectl port-forward deploy/hello-eks-a 8000:80
```
curl it from another terminal.
```
ubuntu@ip-172-31-34-33:~$ curl localhost:8000
⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢

Thank you for using

███████╗██╗  ██╗███████╗
██╔════╝██║ ██╔╝██╔════╝
█████╗  █████╔╝ ███████╗
██╔══╝  ██╔═██╗ ╚════██║
███████╗██║  ██╗███████║
╚══════╝╚═╝  ╚═╝╚══════╝

 █████╗ ███╗   ██╗██╗   ██╗██╗    ██╗██╗  ██╗███████╗██████╗ ███████╗
██╔══██╗████╗  ██║╚██╗ ██╔╝██║    ██║██║  ██║██╔════╝██╔══██╗██╔════╝
███████║██╔██╗ ██║ ╚████╔╝ ██║ █╗ ██║███████║█████╗  ██████╔╝█████╗
██╔══██║██║╚██╗██║  ╚██╔╝  ██║███╗██║██╔══██║██╔══╝  ██╔══██╗██╔══╝
██║  ██║██║ ╚████║   ██║   ╚███╔███╔╝██║  ██║███████╗██║  ██║███████╗
╚═╝  ╚═╝╚═╝  ╚═══╝   ╚═╝    ╚══╝╚══╝ ╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝╚══════╝

You have successfully deployed the hello-eks-a pod hello-eks-a-9644dd8dc-pl5t2

For more information check out
https://anywhere.eks.amazonaws.com

⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢⬡⬢
```
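One gotcha: the port-forward takes a moment to start listening, so a blind curl straight after it can lose the race. In scripts, a small retry helper is handy; a sketch assuming bash and curl:

```shell
# Retry a URL until it answers, or give up after $2 seconds (default 30). Bash-only ($SECONDS).
wait_for_url() {
  local url=$1 timeout=${2:-30}
  local deadline=$((SECONDS + timeout))
  until curl -sf -o /dev/null "$url"; do
    if [ "$SECONDS" -ge "$deadline" ]; then
      return 1
    fi
    sleep 1
  done
}

# e.g.: wait_for_url http://localhost:8000 && curl localhost:8000
```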
EKS Connector
Let's try the EKS Connector as well.
The following is done on my local PC.
Create the service-linked role for the EKS Connector.

```
$ aws iam create-service-linked-role --aws-service-name eks-connector.amazonaws.com
{
    "Role": {
        "Path": "/aws-service-role/eks-connector.amazonaws.com/",
        "RoleName": "AWSServiceRoleForAmazonEKSConnector",
        "RoleId": "AROASYSBLVT2C6G5MXYRD",
        "Arn": "arn:aws:iam::XXXXXXXXXXXX:role/aws-service-role/eks-connector.amazonaws.com/AWSServiceRoleForAmazonEKSConnector",
        "CreateDate": "2021-09-09T21:19:47+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "sts:AssumeRole"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "Service": [
                            "eks-connector.amazonaws.com"
                        ]
                    }
                }
            ]
        }
    }
}
```
Create the IAM role for the EKS Connector agent.
Create the trust policy JSON.
```
cat << EOF > eks-connector-agent-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SSMAccess",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF
```
Create the IAM role.
```
aws iam create-role \
    --role-name AmazonEKSConnectorAgentRole \
    --assume-role-policy-document file://eks-connector-agent-trust-policy.json
```
Create the IAM policy JSON.
```
cat << EOF > eks-connector-agent-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SsmControlChannel",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel"
            ],
            "Resource": "arn:aws:eks:*:*:cluster/*"
        },
        {
            "Sid": "ssmDataplaneOperations",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenDataChannel",
                "ssmmessages:OpenControlChannel"
            ],
            "Resource": "*"
        }
    ]
}
EOF
```
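Hand-written heredoc JSON is easy to typo, so a quick syntax check before handing it to the AWS CLI doesn't hurt. A small helper, assuming python3 is installed:

```shell
# Fail fast if a policy document is not valid JSON (assumes python3 is available).
check_json() {
  python3 -m json.tool "$1" > /dev/null && echo "$1: valid JSON"
}

# e.g.: check_json eks-connector-agent-policy.json
```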
Attach the inline policy to the IAM role.
```
aws iam put-role-policy \
    --role-name AmazonEKSConnectorAgentRole \
    --policy-name AmazonEKSConnectorAgentPolicy \
    --policy-document file://eks-connector-agent-policy.json
```
Register the cluster.
This can be done with the CLI, but let's try the management console. From the "Add cluster" button, choose "Register".
Several providers can be selected. Here, choose EKS Anywhere.
Select the EKS Connector role created earlier and register.
You are then told to download a YAML file and apply it.
For now, download the YAML to the local PC.
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: eks-connector
---
apiVersion: v1
kind: Secret
metadata:
  name: eks-connector-activation-config
  namespace: eks-connector
type: Opaque
data:
  code: VGVKTzhyR2d4ZTRpbWczTVBBSFI=
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eks-connector-secret-access
  namespace: eks-connector
rules:
  - apiGroups: [ "" ]
    resources:
      - secrets
    verbs: [ "get", "update" ]
    resourceNames:
      - eks-connector-state-0
      - eks-connector-state-1
  - apiGroups: [ "" ]
    resources:
      - secrets
    verbs: [ "create" ]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-connector
  namespace: eks-connector
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: eks-connector-token
  namespace: eks-connector
  annotations:
    kubernetes.io/service-account.name: eks-connector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eks-connector-secret-access
  namespace: eks-connector
subjects:
  - kind: ServiceAccount
    name: eks-connector
roleRef:
  kind: Role
  name: eks-connector-secret-access
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: eks-connector-agent
  namespace: eks-connector
data:
  amazon-ssm-agent.json: |
    {
      "Profile": {
        "KeyAutoRotateDays": 7
      },
      "Agent": {
        "ContainerMode": true
      },
      "Identity": {
        "ConsumptionOrder": [
          "OnPrem"
        ]
      }
    }
  seelog.xml: |
    <seelog type="adaptive" mininterval="2000000" maxinterval="100000000" critmsgcount="500" minlevel="info">
      <exceptions>
        <exception filepattern="test*" minlevel="error"/>
      </exceptions>
      <outputs formatid="fmtinfo">
        <console formatid="fmtinfo"/>
        <rollingfile type="size" filename="/var/log/amazon/ssm/amazon-ssm-agent.log" maxsize="30000000" maxrolls="5"/>
        <filter levels="error,critical" formatid="fmterror">
          <console formatid="fmterror"/>
          <rollingfile type="size" filename="/var/log/amazon/ssm/errors.log" maxsize="10000000" maxrolls="5"/>
        </filter>
      </outputs>
      <formats>
        <format id="fmterror" format="%Date %Time %LEVEL [%FuncShort @ %File.%Line] %Msg%n"/>
        <format id="fmtdebug" format="%Date %Time %LEVEL [%FuncShort @ %File.%Line] %Msg%n"/>
        <format id="fmtinfo" format="%Date %Time %LEVEL %Msg%n"/>
      </formats>
    </seelog>
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: eks-connector
  name: eks-connector
  labels:
    app: eks-connector
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-connector
  serviceName: "eks-connector"
  template:
    metadata:
      labels:
        app: eks-connector
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - eks-connector
                topologyKey: "kubernetes.io/hostname"
      serviceAccountName: eks-connector
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        - key: CriticalAddonsOnly
          operator: Exists
      initContainers:
        - name: connector-init
          image: public.ecr.aws/eks-connector/eks-connector:0.0.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: EKS_ACTIVATION_CODE
              valueFrom:
                secretKeyRef:
                  name: eks-connector-activation-config
                  key: code
            - name: EKS_ACTIVATION_ID
              value: hogehoge
          args:
            - "init"
            - "--activation.id=$(EKS_ACTIVATION_ID)"
            - "--activation.code=$(EKS_ACTIVATION_CODE)"
            - "--agent.region=ap-northeast-1"
          volumeMounts:
            - name: service-account-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            - name: eks-agent-vault
              mountPath: /var/lib/amazon/ssm/Vault
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - all
      containers:
        - name: connector-agent
          image: public.ecr.aws/amazon-ssm-agent/amazon-ssm-agent:3.1.90.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: eks-connector-shared
              mountPath: /var/eks/shared
            - name: eks-agent-vault
              mountPath: /var/lib/amazon/ssm/Vault
            - name: eks-agent-config
              mountPath: /etc/amazon/ssm/amazon-ssm-agent.json
              subPath: amazon-ssm-agent.json
            - name: eks-agent-config
              mountPath: /etc/amazon/ssm/seelog.xml
              subPath: seelog.xml
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - DAC_OVERRIDE
              drop:
                - all
        - name: connector-proxy
          image: public.ecr.aws/eks-connector/eks-connector:0.0.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            - "server"
          volumeMounts:
            - name: service-account-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            - name: eks-connector-shared
              mountPath: /var/eks/shared
            - name: eks-agent-vault
              mountPath: /var/lib/amazon/ssm/Vault
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - DAC_OVERRIDE
              drop:
                - all
      volumes:
        - name: eks-connector-shared
          emptyDir: { }
        - name: eks-agent-vault
          emptyDir: { }
        - name: eks-agent-config
          configMap:
            name: eks-connector-agent
        - name: service-account-token
          secret:
            secretName: eks-connector-token
```
The local PC can't reach the cluster, so copy this file to the EC2 instance, which can. Be careful: the downloaded file has the same name as the Cluster Config file.
Apply the file.
```
ubuntu@ip-172-31-34-33:~$ kubectl apply -f dev-cluster-regist.yaml
namespace/eks-connector created
secret/eks-connector-activation-config created
role.rbac.authorization.k8s.io/eks-connector-secret-access created
serviceaccount/eks-connector created
secret/eks-connector-token created
rolebinding.rbac.authorization.k8s.io/eks-connector-secret-access created
configmap/eks-connector-agent created
statefulset.apps/eks-connector created
```
The cluster now shows as Active, but the console complains about missing permissions:

- The IAM user using the management console needs `eks:AccessKubernetesApi`. This time I'm using a user with `AdministratorAccess`, so that's covered.
- The EKS Connector's ServiceAccount needs permission to impersonate the IAM user/role.
Download the `eks-connector` ClusterRole template.
```
curl -o eks-connector-clusterrole.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/eks-connector/manifests/eks-connector-console-roles/eks-connector-clusterrole.yaml
```
Check the manifest.
```
ubuntu@ip-172-31-34-33:~$ cat eks-connector-clusterrole.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-connector-service
subjects:
  - kind: ServiceAccount
    name: eks-connector
    namespace: eks-connector
roleRef:
  kind: ClusterRole
  name: eks-connector-service
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-connector-service
rules:
  - apiGroups: [ "" ]
    resources:
      - users
    verbs:
      - impersonate
    resourceNames:
      # TODO: 1. ADD your IAM identity arn here
      - "%IAM_ARN%"
```
Replace `"%IAM_ARN%"` with the ARN of the IAM user/role accessing the management console, then apply the manifest.
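The substitution is easy to script with sed; a sketch, where the ARN is a placeholder for your own console identity:

```shell
# Replace the %IAM_ARN% placeholder in a downloaded template, in place (GNU sed).
fill_iam_arn() {
  local file=$1 arn=$2
  sed -i "s|%IAM_ARN%|${arn}|g" "$file"
}

# e.g.: fill_iam_arn eks-connector-clusterrole.yaml "arn:aws:iam::123456789012:user/your-console-user"
```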
```
ubuntu@ip-172-31-34-33:~$ kubectl apply -f eks-connector-clusterrole.yaml
clusterrolebinding.rbac.authorization.k8s.io/eks-connector-service created
clusterrole.rbac.authorization.k8s.io/eks-connector-service created
```
The error message changes slightly: the console can now connect, but it still lacks permissions inside the cluster to view resources.
Download the template that grants read access to resources in all namespaces.
```
curl -o eks-connector-console-dashboard-full-access-group.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/eks-connector/manifests/eks-connector-console-roles/eks-connector-console-dashboard-full-access-group.yaml
```
Check the manifest.
```
ubuntu@ip-172-31-34-33:~$ cat eks-connector-console-dashboard-full-access-group.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-connector-console-dashboard-full-access-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - namespaces
      - pods
      - events
    verbs:
      - get
      - list
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - statefulsets
      - replicasets
    verbs:
      - get
      - list
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-connector-console-dashboard-full-access-clusterrole-binding
subjects:
  - kind: User
    name: "%IAM_ARN%"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-connector-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
```
Again, `%IAM_ARN%` needs to be replaced. Rewrite it and apply.
```
ubuntu@ip-172-31-34-33:~$ kubectl apply -f eks-connector-console-dashboard-full-access-group.yaml
clusterrole.rbac.authorization.k8s.io/eks-connector-console-dashboard-full-access-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/eks-connector-console-dashboard-full-access-clusterrole-binding created
```
With that, the errors are gone and the cluster's resources are visible in the console.