Notes from trying out kops.
Component | Version | Notes |
---|---|---|
kops | 1.18.2 | |
## References

- https://github.com/kubernetes/kops
- https://kops.sigs.k8s.io/
- Installing Kubernetes on AWS with kops
- Building a Kubernetes cluster on AWS with kops
## Procedure

### Preparation

Follow the Getting Started guide.

Install kops:
```sh
brew update && brew install kops
```
The IAM user needs the following permissions:

- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
Create an IAM group and an IAM user, both named `kops`. If you are already working as a user with AdministratorAccess, creating this IAM user is unnecessary.
```sh
aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
```
Create a profile:
```console
$ aws configure --profile kops@sotosugi+1
AWS Access Key ID [None]: XXXX
AWS Secret Access Key [None]: XXXX
Default region name [None]: ap-northeast-1
Default output format [None]:
```
Then export the credentials:
```sh
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile kops@sotosugi+1)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile kops@sotosugi+1)
```
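As an aside, `aws configure get` simply looks the values up in the `~/.aws/credentials` INI file. A minimal sketch of the same lookup with Python's stdlib `configparser`; the profile name and key values below are illustrative placeholders, not real credentials:

```python
import configparser

# Illustrative stand-in for the contents of ~/.aws/credentials.
sample = """
[kops@sotosugi+1]
aws_access_key_id = XXXX
aws_secret_access_key = YYYY
"""

config = configparser.ConfigParser()
config.read_string(sample)

# Equivalent of: aws configure get aws_access_key_id --profile kops@sotosugi+1
access_key = config["kops@sotosugi+1"]["aws_access_key_id"]
secret_key = config["kops@sotosugi+1"]["aws_secret_access_key"]
print(access_key)
```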
Confirm the IAM entity:
```console
$ aws sts get-caller-identity
{
    "UserId": "AIDAQJQWSUX3I4EJSYNTM",
    "Account": "XXXXXXXXXXXX",
    "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/kops"
}
```
For the domain, use my own domain hosted on Route 53.
```console
$ aws route53 list-hosted-zones
{
    "HostedZones": [
        {
            "Id": "/hostedzone/Z06455402UOML17Q3T2EH",
            "Name": "sotosugi.work.",
            "CallerReference": "ef411a33-7be6-4c33-a9f6-9a350919eb39",
            "Config": {
                "Comment": "",
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 2
        },
        {
            "Id": "/hostedzone/Z0667643XTSXTCYD6CB2",
            "Name": "sotoiwa.dev.",
            "CallerReference": "fe07c4a1-ca33-4170-bf91-0c37fd668cb7",
            "Config": {
                "Comment": "",
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 2
        }
    ]
}
$ dig ns sotoiwa.dev

; <<>> DiG 9.10.6 <<>> ns sotoiwa.dev
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62405
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 5

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;sotoiwa.dev.                   IN      NS

;; ANSWER SECTION:
sotoiwa.dev.            10800   IN      NS      ns-1176.awsdns-19.org.
sotoiwa.dev.            10800   IN      NS      ns-1602.awsdns-08.co.uk.
sotoiwa.dev.            10800   IN      NS      ns-548.awsdns-04.net.
sotoiwa.dev.            10800   IN      NS      ns-433.awsdns-54.com.

;; ADDITIONAL SECTION:
ns-433.awsdns-54.com.   35203   IN      A       205.251.193.177
ns-548.awsdns-04.net.   33654   IN      A       205.251.194.36
ns-1176.awsdns-19.org.  34113   IN      A       205.251.196.152
ns-1602.awsdns-08.co.uk. 33854  IN      A       205.251.198.66

;; Query time: 103 msec
;; SERVER: 172.17.192.154#53(172.17.192.154)
;; WHEN: Sat Dec 05 00:10:57 JST 2020
;; MSG SIZE  rcvd: 244
```
Create the cluster state store. Following the recommendation, create it in us-east-1.
```sh
BUCKET_NAME="sotoiwa-dev-state-store"

aws s3api create-bucket \
  --bucket ${BUCKET_NAME} \
  --region us-east-1
```
Enable versioning:
```sh
aws s3api put-bucket-versioning \
  --bucket ${BUCKET_NAME} \
  --versioning-configuration Status=Enabled
```
Enable default encryption on the bucket, using SSE-S3:
```sh
aws s3api put-bucket-encryption \
  --bucket ${BUCKET_NAME} \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```
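The inline JSON above is easy to misquote in a shell; a small sketch of generating it with Python's `json` module instead (`AES256` is the algorithm name S3 uses for SSE-S3):

```python
import json

# Build the --server-side-encryption-configuration document
# programmatically instead of hand-writing it inside shell quotes.
sse_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"  # AES256 == SSE-S3
            }
        }
    ]
}

# Print the compact JSON to pass to the aws CLI.
print(json.dumps(sse_config, separators=(",", ":")))
```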
### Creating the cluster
Set variables:
```sh
export NAME=myfirstcluster.sotoiwa.dev
export KOPS_STATE_STORE=s3://sotoiwa-dev-state-store
```
Check the AZs in Tokyo:
```sh
aws ec2 describe-availability-zones --region ap-northeast-1 | jq -r '.AvailabilityZones[].ZoneName'
```
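For reference, the jq filter `.AvailabilityZones[].ZoneName` can be reproduced with Python's stdlib `json` module; the sample payload below is an illustrative stand-in for the real `describe-availability-zones` output:

```python
import json

# Illustrative sample of `aws ec2 describe-availability-zones` output.
sample = """
{
  "AvailabilityZones": [
    {"ZoneName": "ap-northeast-1a", "State": "available"},
    {"ZoneName": "ap-northeast-1c", "State": "available"},
    {"ZoneName": "ap-northeast-1d", "State": "available"}
  ]
}
"""

# Equivalent of: jq -r '.AvailabilityZones[].ZoneName'
zones = [az["ZoneName"] for az in json.loads(sample)["AvailabilityZones"]]
print("\n".join(zones))
```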
Here we build the most basic setup: a single-AZ cluster.

The create command generates the cluster configuration and stores it in the S3 bucket. Without `--yes`, it only prints what would be created and does not actually create the AWS resources. You can also do a dry run with `--dry-run -o yaml` to inspect the YAML.
```console
$ kops create cluster \
>   --zones=ap-northeast-1a \
>   -o yaml --dry-run \
>   ${NAME}
I1205 02:18:10.845799    3844 create_cluster.go:555] Inferred --cloud=aws from zone "ap-northeast-1a"
I1205 02:18:10.980389    3844 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-northeast-1a
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: myfirstcluster.sotoiwa.dev
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://sotoiwa-dev-state-store/myfirstcluster.sotoiwa.dev
  containerRuntime: docker
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.18.12
  masterPublicName: api.myfirstcluster.sotoiwa.dev
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: myfirstcluster.sotoiwa.dev
  name: master-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20201026
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-northeast-1a
  role: Master
  subnets:
  - ap-northeast-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: myfirstcluster.sotoiwa.dev
  name: nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20201026
  machineType: t3.medium
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - ap-northeast-1a
```
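The log line `Assigned CIDR 172.20.32.0/19 to subnet ap-northeast-1a` shows kops carving a /19 subnet out of the cluster's /16 `networkCIDR`. The arithmetic can be checked with Python's stdlib `ipaddress` module (an illustrative sketch, not part of the kops workflow):

```python
import ipaddress

# kops sets networkCIDR to 172.20.0.0/16 and assigns each subnet a /19.
network = ipaddress.ip_network("172.20.0.0/16")
subnets = list(network.subnets(new_prefix=19))

# A /16 holds 2**(19-16) == 8 possible /19 subnets.
print(len(subnets))  # 8

# 172.20.32.0/19, as assigned to ap-northeast-1a above, is one of them.
print(ipaddress.ip_network("172.20.32.0/19") in subnets)  # True
```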
There are Cluster and InstanceGroup resources. It looks like this generated YAML could also be passed with `-f`.
Running the command without `--yes` stores the configuration in the S3 bucket:
```sh
kops create cluster \
  --zones=ap-northeast-1a \
  ${NAME}
```
It can be checked with the get command:
```console
$ kops get cluster
NAME                        CLOUD  ZONES
myfirstcluster.sotoiwa.dev  aws    ap-northeast-1a
$ kops get cluster ${NAME} -o yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-12-04T15:29:46Z"
  name: myfirstcluster.sotoiwa.dev
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://sotoiwa-dev-state-store/myfirstcluster.sotoiwa.dev
  containerRuntime: docker
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.18.12
  masterPublicName: api.myfirstcluster.sotoiwa.dev
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
```
Look at the InstanceGroup settings:
```console
$ kops get ig --name myfirstcluster.sotoiwa.dev
NAME                    ROLE    MACHINETYPE  MIN  MAX  ZONES
master-ap-northeast-1a  Master  t3.medium    1    1    ap-northeast-1a
nodes                   Node    t3.medium    2    2    ap-northeast-1a
$ kops get ig --name myfirstcluster.sotoiwa.dev -o yaml master-ap-northeast-1a
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-12-04T15:29:46Z"
  labels:
    kops.k8s.io/cluster: myfirstcluster.sotoiwa.dev
  name: master-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20201026
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-northeast-1a
  role: Master
  subnets:
  - ap-northeast-1a
$ kops get ig --name myfirstcluster.sotoiwa.dev -o yaml nodes
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-12-04T15:29:47Z"
  labels:
    kops.k8s.io/cluster: myfirstcluster.sotoiwa.dev
  name: nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20201026
  machineType: t3.medium
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - ap-northeast-1a
```
The edit command lets you edit the Cluster and InstanceGroup settings. Here, nothing is changed:
```sh
kops edit cluster ${NAME}
```
The configuration is already in S3, so run the update command with `--yes` to actually apply it:
```console
$ kops update cluster ${NAME} --yes
I1205 00:41:32.060735   76834 executor.go:103] Tasks: 0 done / 87 total; 43 can run
I1205 00:41:33.474046   76834 vfs_castore.go:590] Issuing new certificate: "etcd-clients-ca"
I1205 00:41:33.565816   76834 vfs_castore.go:590] Issuing new certificate: "etcd-peers-ca-main"
I1205 00:41:33.576745   76834 vfs_castore.go:590] Issuing new certificate: "etcd-manager-ca-main"
I1205 00:41:33.669743   76834 vfs_castore.go:590] Issuing new certificate: "ca"
I1205 00:41:33.688305   76834 vfs_castore.go:590] Issuing new certificate: "apiserver-aggregator-ca"
I1205 00:41:33.833162   76834 vfs_castore.go:590] Issuing new certificate: "etcd-manager-ca-events"
I1205 00:41:33.905804   76834 vfs_castore.go:590] Issuing new certificate: "etcd-peers-ca-events"
I1205 00:41:37.319352   76834 executor.go:103] Tasks: 43 done / 87 total; 26 can run
I1205 00:41:38.595853   76834 vfs_castore.go:590] Issuing new certificate: "kubelet"
I1205 00:41:38.613058   76834 vfs_castore.go:590] Issuing new certificate: "apiserver-aggregator"
I1205 00:41:38.651152   76834 vfs_castore.go:590] Issuing new certificate: "kube-scheduler"
I1205 00:41:38.718151   76834 vfs_castore.go:590] Issuing new certificate: "kubelet-api"
I1205 00:41:38.749780   76834 vfs_castore.go:590] Issuing new certificate: "kops"
I1205 00:41:38.752418   76834 vfs_castore.go:590] Issuing new certificate: "kubecfg"
I1205 00:41:38.798180   76834 vfs_castore.go:590] Issuing new certificate: "kube-proxy"
I1205 00:41:38.798760   76834 vfs_castore.go:590] Issuing new certificate: "apiserver-proxy-client"
I1205 00:41:38.851444   76834 vfs_castore.go:590] Issuing new certificate: "master"
I1205 00:41:38.920562   76834 vfs_castore.go:590] Issuing new certificate: "kube-controller-manager"
I1205 00:41:42.417620   76834 executor.go:103] Tasks: 69 done / 87 total; 16 can run
I1205 00:41:42.807082   76834 launchconfiguration.go:378] waiting for IAM instance profile "nodes.myfirstcluster.sotoiwa.dev" to be ready
I1205 00:41:53.530279   76834 executor.go:103] Tasks: 85 done / 87 total; 2 can run
I1205 00:41:54.535610   76834 executor.go:103] Tasks: 87 done / 87 total; 0 can run
I1205 00:41:54.536770   76834 dns.go:156] Pre-creating DNS records
I1205 00:41:56.912270   76834 update_cluster.go:308] Exporting kubecfg for cluster
kops has set your kubectl context to myfirstcluster.sotoiwa.dev

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.myfirstcluster.sotoiwa.dev
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.
```
Cluster creation completes in a mere five minutes or so:
```console
$ kops validate cluster --wait 10m
Using cluster from kubectl context: myfirstcluster.sotoiwa.dev

Validating cluster myfirstcluster.sotoiwa.dev

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-northeast-1a  Master  t3.medium    1    1    ap-northeast-1a
nodes                   Node    t3.medium    2    2    ap-northeast-1a

NODE STATUS
NAME                                              ROLE    READY
ip-172-20-43-29.ap-northeast-1.compute.internal   master  True
ip-172-20-45-219.ap-northeast-1.compute.internal  node    True
ip-172-20-59-245.ap-northeast-1.compute.internal  node    True

Your cluster myfirstcluster.sotoiwa.dev is ready
```
The kubectl context is already set, so kubectl works. A client certificate was created and seems to be used for authentication.
```console
$ kubectl config view
(excerpt)
apiVersion: v1
kind: Config
preferences: {}
current-context: myfirstcluster.sotoiwa.dev
contexts:
- context:
    cluster: myfirstcluster.sotoiwa.dev
    user: myfirstcluster.sotoiwa.dev
  name: myfirstcluster.sotoiwa.dev
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.myfirstcluster.sotoiwa.dev
  name: myfirstcluster.sotoiwa.dev
users:
- name: myfirstcluster.sotoiwa.dev
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
```
### Checking the cluster
Check the state of the Nodes and Pods:
```console
$ kubectl get node
NAME                                              STATUS  ROLES   AGE    VERSION
ip-172-20-43-29.ap-northeast-1.compute.internal   Ready   master  7m25s  v1.18.12
ip-172-20-45-219.ap-northeast-1.compute.internal  Ready   node    5m58s  v1.18.12
ip-172-20-59-245.ap-northeast-1.compute.internal  Ready   node    5m56s  v1.18.12
$ kubectl get pod -A
NAMESPACE    NAME                                                                     READY  STATUS   RESTARTS  AGE
kube-system  dns-controller-85fb76bb5b-bqk2r                                          1/1    Running  0         7m19s
kube-system  etcd-manager-events-ip-172-20-43-29.ap-northeast-1.compute.internal      1/1    Running  0         6m50s
kube-system  etcd-manager-main-ip-172-20-43-29.ap-northeast-1.compute.internal        1/1    Running  0         6m43s
kube-system  kops-controller-jqjn9                                                    1/1    Running  0         6m33s
kube-system  kube-apiserver-ip-172-20-43-29.ap-northeast-1.compute.internal           2/2    Running  0         6m19s
kube-system  kube-controller-manager-ip-172-20-43-29.ap-northeast-1.compute.internal  1/1    Running  0         6m55s
kube-system  kube-dns-6c699b5445-nb52l                                                3/3    Running  0         7m19s
kube-system  kube-dns-6c699b5445-qkf6z                                                3/3    Running  0         5m47s
kube-system  kube-dns-autoscaler-cd7778b7b-6nv4v                                      1/1    Running  0         7m19s
kube-system  kube-proxy-ip-172-20-43-29.ap-northeast-1.compute.internal               1/1    Running  0         6m51s
kube-system  kube-proxy-ip-172-20-45-219.ap-northeast-1.compute.internal              1/1    Running  0         5m12s
kube-system  kube-proxy-ip-172-20-59-245.ap-northeast-1.compute.internal              1/1    Running  0         5m18s
kube-system  kube-scheduler-ip-172-20-43-29.ap-northeast-1.compute.internal           1/1    Running  0         6m52s
```
Check what the AWS resources look like:

- CloudFormation is not used.
- Route 53 records have been created.
- A VPC has been created.
- A key pair was created and assigned to the node instances; the local `~/.ssh/id_rsa.pub` was imported.
- The AMI is a community Ubuntu AMI.
- The trick seems to be in the ASG launch configurations: their user data holds a script of roughly 300 lines for the master and roughly 200 lines for the nodes.
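Launch-configuration user data comes back base64-encoded from the EC2 API (e.g. via `aws autoscaling describe-launch-configurations`). A minimal sketch of decoding it and counting the script's lines; the payload below is an illustrative stand-in, not the real kops bootstrap script:

```python
import base64

# Stand-in for the several-hundred-line kops bootstrap script.
script = "#!/bin/bash\nset -o errexit\necho hello\n"

# The EC2 API returns the UserData field base64-encoded.
encoded = base64.b64encode(script.encode()).decode()

# Decode it back and count the lines, as one would to verify
# the ~300-line master / ~200-line node scripts mentioned above.
decoded = base64.b64decode(encoded).decode()
line_count = len(decoded.splitlines())
print(line_count)  # 3
```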
There is a guide for production use. It covers HA, CNI plugins, and private topology, and says cluster configuration should be version-controlled as YAML rather than passed as CLI parameters.
### Deleting the cluster
Delete the cluster. Running without `--yes` shows what would be deleted:
```console
$ kops delete cluster --name ${NAME}
TYPE                  NAME                                                                                            ID
autoscaling-config    master-ap-northeast-1a.masters.myfirstcluster.sotoiwa.dev-20201204154142                        master-ap-northeast-1a.masters.myfirstcluster.sotoiwa.dev-20201204154142
autoscaling-config    nodes.myfirstcluster.sotoiwa.dev-20201204154142                                                 nodes.myfirstcluster.sotoiwa.dev-20201204154142
autoscaling-group     master-ap-northeast-1a.masters.myfirstcluster.sotoiwa.dev                                       master-ap-northeast-1a.masters.myfirstcluster.sotoiwa.dev
autoscaling-group     nodes.myfirstcluster.sotoiwa.dev                                                                nodes.myfirstcluster.sotoiwa.dev
dhcp-options          myfirstcluster.sotoiwa.dev                                                                      dopt-00cfffaa7c17c590b
iam-instance-profile  masters.myfirstcluster.sotoiwa.dev                                                              masters.myfirstcluster.sotoiwa.dev
iam-instance-profile  nodes.myfirstcluster.sotoiwa.dev                                                                nodes.myfirstcluster.sotoiwa.dev
iam-role              masters.myfirstcluster.sotoiwa.dev                                                              masters.myfirstcluster.sotoiwa.dev
iam-role              nodes.myfirstcluster.sotoiwa.dev                                                                nodes.myfirstcluster.sotoiwa.dev
instance              master-ap-northeast-1a.masters.myfirstcluster.sotoiwa.dev                                       i-02760463ed5fa2c50
instance              nodes.myfirstcluster.sotoiwa.dev                                                                i-08bc4394b2b15ddcc
instance              nodes.myfirstcluster.sotoiwa.dev                                                                i-0ee7325ad7be53325
internet-gateway      myfirstcluster.sotoiwa.dev                                                                      igw-02bfd5880f1106eae
keypair               kubernetes.myfirstcluster.sotoiwa.dev-ff:40:d0:0c:21:47:7c:68:a4:26:fe:f7:38:b7:f3:d8           kubernetes.myfirstcluster.sotoiwa.dev-ff:40:d0:0c:21:47:7c:68:a4:26:fe:f7:38:b7:f3:d8
route-table           myfirstcluster.sotoiwa.dev                                                                      rtb-06b83c00d15a69f72
route53-record        api.internal.myfirstcluster.sotoiwa.dev.                                                        Z0667643XTSXTCYD6CB2/api.internal.myfirstcluster.sotoiwa.dev.
route53-record        api.myfirstcluster.sotoiwa.dev.                                                                 Z0667643XTSXTCYD6CB2/api.myfirstcluster.sotoiwa.dev.
security-group        masters.myfirstcluster.sotoiwa.dev                                                              sg-0f78d10c30b028c9b
security-group        nodes.myfirstcluster.sotoiwa.dev                                                                sg-09e580a5d3d8de0d7
subnet                ap-northeast-1a.myfirstcluster.sotoiwa.dev                                                      subnet-04a4751f6208ba792
volume                a.etcd-events.myfirstcluster.sotoiwa.dev                                                        vol-08f569e6285252c1d
volume                a.etcd-main.myfirstcluster.sotoiwa.dev                                                          vol-051e11b82e50ef1ac
vpc                   myfirstcluster.sotoiwa.dev                                                                      vpc-060e8e8dc2998bb9b

Must specify --yes to delete cluster
```
Delete it:
```sh
kops delete cluster ${NAME} --yes
```