Notes from deploying EKS Distro (EKS-D) with kubeadm.
References:
- Creating a single control-plane cluster with kubeadm
- Installing Kubernetes 1.11 with kubeadm on CentOS 7 on AWS
- Running Kubernetes cluster with Amazon EKS Distro across AWS Snowball Edge
The goal is a minimal cluster with a single control plane.
The standard procedure
As a refresher, first build a cluster with plain kubeadm.
Preparing the VM
To keep things simple, launch one CentOS 7 EC2 instance in the default VPC with a public IP assigned. In the security group, allow port 22 plus all traffic from within the security group itself.
Log in and run the following.
sudo yum update
- Log in to the EC2 instance and change the hostname.
- To keep the hostname from being reverted on reboot, add preserve_hostname: true to /etc/cloud/cloud.cfg.
- Make sure the hostname can be resolved via /etc/hosts.
- Reboot: sudo reboot
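The hostname steps above can be scripted; here is a minimal sketch. The node name and IP are examples from this walkthrough, `set_node_hostname` is a name I made up, and `CLOUD_CFG`/`HOSTS` default to the real paths but can be overridden for a dry run.

```shell
# Sketch of the hostname setup above (run as root, or prefix with sudo).
CLOUD_CFG=${CLOUD_CFG:-/etc/cloud/cloud.cfg}
HOSTS=${HOSTS:-/etc/hosts}

set_node_hostname() {
  # $1 = hostname, $2 = private IP
  hostnamectl set-hostname "$1"
  # Keep cloud-init from resetting the hostname on reboot (idempotent append)
  grep -q '^preserve_hostname: true' "$CLOUD_CFG" || echo 'preserve_hostname: true' >> "$CLOUD_CFG"
  # Make the hostname resolvable locally (idempotent append)
  grep -q " $1\$" "$HOSTS" || echo "$2 $1" >> "$HOSTS"
}

# e.g. set_node_hostname k8s-master 172.31.41.31 && reboot
```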
Installing Docker
Install Docker by following the steps below.
Remove any old versions of Docker.
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
Add the repository.
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE.
sudo yum install docker-ce docker-ce-cli containerd.io
====================================================================================================================================
 Package                     Arch     Version                     Repository         Size
====================================================================================================================================
Installing:
 containerd.io               x86_64   1.4.3-3.1.el7               docker-ce-stable    33 M
 docker-ce                   x86_64   3:20.10.1-3.el7             docker-ce-stable    27 M
 docker-ce-cli               x86_64   1:20.10.1-3.el7             docker-ce-stable    33 M
Installing for dependencies:
 container-selinux           noarch   2:2.119.2-1.911c772.el7_8   extras              40 k
 docker-ce-rootless-extras   x86_64   20.10.1-3.el7               docker-ce-stable   9.0 M
 fuse-overlayfs              x86_64   0.7.2-6.el7_8               extras              54 k
 fuse3-libs                  x86_64   3.6.1-4.el7                 extras              82 k
 slirp4netns                 x86_64   0.4.3-4.el7_8               extras              81 k

Transaction Summary
====================================================================================================================================
Install  3 Packages (+5 Dependent packages)
Enable and start the Docker daemon.
sudo systemctl enable docker && sudo systemctl start docker
Add the centos user to the docker group.
sudo usermod -aG docker centos
Installing kubeadm
Install kubeadm by following the steps below.
From here on, work as the root user.
Set SELinux to permissive mode.
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Add the repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl.
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
====================================================================================================================================
 Package                  Arch     Version         Repository    Size
====================================================================================================================================
Installing:
 kubeadm                  x86_64   1.20.0-0        kubernetes   8.3 M
 kubectl                  x86_64   1.20.0-0        kubernetes   8.5 M
 kubelet                  x86_64   1.20.0-0        kubernetes    20 M
Installing for dependencies:
 conntrack-tools          x86_64   1.4.4-7.el7     base         187 k
 cri-tools                x86_64   1.13.0-0        kubernetes   5.1 M
 ebtables                 x86_64   2.0.10-16.el7   base         123 k
 kubernetes-cni           x86_64   0.8.7-0         kubernetes    19 M
 libnetfilter_cthelper    x86_64   1.0.0-11.el7    base          18 k
 libnetfilter_cttimeout   x86_64   1.0.0-7.el7     base          18 k
 libnetfilter_queue       x86_64   1.0.2-2.el7_2   base          23 k
 socat                    x86_64   1.7.3.2-2.el7   base         290 k

Transaction Summary
====================================================================================================================================
Install  3 Packages (+8 Dependent packages)
Enable and start the kubelet.
systemctl enable --now kubelet
Cloning the VM
The work so far is identical for the master and worker nodes, so create an AMI from this instance and launch two more machines from it for the worker nodes.
After launching them, change each hostname and add the new entries to /etc/hosts.
Creating the cluster with kubeadm
On the master node, run the kubeadm init command as root, passing the pod CIDR that Calico assumes, since Calico will be used as the CNI.
[root@k8s-master ~]# kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.41.31]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.41.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.41.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.503294 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: oemsyo.07mhud68obn97tt4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.41.31:6443 --token oemsyo.07mhud68obn97tt4 \
    --discovery-token-ca-cert-hash sha256:145b4063fd095d2322681f4eca73c482c31420e03d13a05e065f3ee83bb0d5ce
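As an aside, if the token above expires (the default TTL is 24 hours), a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value is simply the SHA-256 digest of the cluster CA's public key, so it can also be recomputed from the CA certificate; a sketch (the helper name `ca_cert_hash` is mine):

```shell
# Recompute the discovery-token-ca-cert-hash from a CA certificate.
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" -noout \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}
```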
Switch back to the regular user and set up the kubeconfig.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Confirm that kubectl works.
[centos@k8s-master ~]$ kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   2m39s   v1.20.0
Install the CNI plugin.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
The node becomes Ready.
[centos@k8s-master ~]$ kubectl get node
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   4m27s   v1.20.0
On each worker node, run the kubeadm join command shown above as root.
[root@k8s-node1 ~]# kubeadm join 172.31.41.31:6443 --token oemsyo.07mhud68obn97tt4 \
>     --discovery-token-ca-cert-hash sha256:145b4063fd095d2322681f4eca73c482c31420e03d13a05e065f3ee83bb0d5ce
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Do the same on the second worker.
Check the node status on the master node.
[centos@k8s-master ~]$ kubectl get node -o wide
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   2m18s   v1.20.0   172.31.41.31    <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
k8s-node1    Ready    <none>                 110s    v1.20.0   172.31.35.10    <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
k8s-node2    Ready    <none>                 105s    v1.20.0   172.31.41.248   <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
[centos@k8s-master ~]$ kubectl get po -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-6b8f6f78dc-cs4hz   1/1     Running   0          41s     192.168.36.65     k8s-node1    <none>           <none>
kube-system   calico-node-6qknz                          1/1     Running   0          41s     172.31.41.248     k8s-node2    <none>           <none>
kube-system   calico-node-7bdxm                          1/1     Running   0          41s     172.31.41.31      k8s-master   <none>           <none>
kube-system   calico-node-wz9gq                          1/1     Running   0          41s     172.31.35.10      k8s-node1    <none>           <none>
kube-system   coredns-74ff55c5b-jmnkj                    1/1     Running   0          2m26s   192.168.235.193   k8s-master   <none>           <none>
kube-system   coredns-74ff55c5b-kv5jc                    1/1     Running   0          2m26s   192.168.235.194   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0          2m40s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          2m40s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          2m40s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-proxy-297x9                           1/1     Running   0          2m11s   172.31.41.248     k8s-node2    <none>           <none>
kube-system   kube-proxy-7m24b                           1/1     Running   0          2m26s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-proxy-t4vxn                           1/1     Running   0          2m16s   172.31.35.10      k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          2m40s   172.31.41.31      k8s-master   <none>           <none>
Deleting the cluster
Next, to try EKS-D, delete the cluster. On the master node, run the following commands to remove the worker nodes.
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2
Run kubeadm reset on every node.
kubeadm reset
Uninstall kubelet, kubeadm, and kubectl on every node.
yum remove -y kubelet kubeadm kubectl --disableexcludes=kubernetes
EKS-D
Now try the EKS-D procedure, following the guide below.
Perform all of the following steps as root on every node.
Install the packages Docker requires. This step wasn't part of the earlier procedure, so these packages may no longer be strictly required.
yum -y update
yum install -y yum-utils device-mapper-persistent-data lvm2 wget
Docker is already installed.
Change Docker's cgroup driver to systemd and restart Docker. Skipping this earlier is what caused the warning during kubeadm init.
sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker
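An alternative to patching the unit file is setting the driver in /etc/docker/daemon.json, which avoids editing a file owned by the RPM (this is the approach Docker's own documentation describes; sketch below, run as root):

```shell
# Set Docker's cgroup driver via daemon.json instead of editing docker.service.
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```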
Installing EKS-D
Install the CNI plugins.
mkdir -p /opt/cni/bin
wget -q https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/plugins/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tar.gz
tar zxf cni-plugins-linux-amd64-v0.8.7.tar.gz -C /opt/cni/bin/
Download kubeadm, kubelet, and kubectl from the AWS EKS-D repository.
wget -q https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubeadm
wget -q https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubelet
wget -q https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubectl
mv kubeadm kubelet kubectl /usr/bin/
chmod +x /usr/bin/kubeadm /usr/bin/kubelet /usr/bin/kubectl
Install the kubelet's dependencies.
yum -y install conntrack ebtables socat
Pass an argument to the kubelet to change its cgroup driver to systemd. It must match the cgroup driver Docker uses.
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS='--cgroup-driver=systemd'
EOF
Create the directories and files kubeadm and the kubelet require.
mkdir -p /etc/kubernetes/manifests
mkdir -p /usr/lib/systemd/system/kubelet.service.d
Create the kubelet's systemd unit files.
cat <<EOF > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_KUBEADM_ARGS \$KUBELET_EXTRA_ARGS
EOF

cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
Make the kubelet start at boot.
systemctl enable kubelet
Pull the Docker images the control plane needs from the Amazon ECR Public registry.
docker pull public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.14-eks-1-18-1
docker pull public.ecr.aws/eks-distro/kubernetes/pause:v1.18.9-eks-1-18-1
docker pull public.ecr.aws/eks-distro/kubernetes/kube-scheduler:v1.18.9-eks-1-18-1
docker pull public.ecr.aws/eks-distro/kubernetes/kube-proxy:v1.18.9-eks-1-18-1
docker pull public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.18.9-eks-1-18-1
docker pull public.ecr.aws/eks-distro/kubernetes/kube-controller-manager:v1.18.9-eks-1-18-1
docker pull public.ecr.aws/eks-distro/coredns/coredns:v1.7.0-eks-1-18-1
Tag the images to match the values hardcoded in kubeadm.
docker tag public.ecr.aws/eks-distro/kubernetes/pause:v1.18.9-eks-1-18-1 public.ecr.aws/eks-distro/kubernetes/pause:3.2
docker tag public.ecr.aws/eks-distro/coredns/coredns:v1.7.0-eks-1-18-1 public.ecr.aws/eks-distro/kubernetes/coredns:1.6.7
Check the images.
[root@k8s-master ~]# docker images | grep -e REPOSITORY -e ecr
REPOSITORY                                                     TAG                  IMAGE ID       CREATED       SIZE
public.ecr.aws/eks-distro/kubernetes/pause                     3.2                  ff45cda5b28a   2 weeks ago   702kB
public.ecr.aws/eks-distro/kubernetes/pause                     v1.18.9-eks-1-18-1   ff45cda5b28a   2 weeks ago   702kB
public.ecr.aws/eks-distro/kubernetes/kube-proxy                v1.18.9-eks-1-18-1   7b3d7533dd46   2 weeks ago   580MB
public.ecr.aws/eks-distro/kubernetes/kube-scheduler            v1.18.9-eks-1-18-1   3f6c60b31475   2 weeks ago   504MB
public.ecr.aws/eks-distro/kubernetes/kube-controller-manager   v1.18.9-eks-1-18-1   b50f3c224c59   2 weeks ago   573MB
public.ecr.aws/eks-distro/kubernetes/kube-apiserver            v1.18.9-eks-1-18-1   a2ea61c746e1   2 weeks ago   583MB
public.ecr.aws/eks-distro/etcd-io/etcd                         v3.4.14-eks-1-18-1   e77eead05c5e   2 weeks ago   498MB
public.ecr.aws/eks-distro/coredns/coredns                      v1.7.0-eks-1-18-1    6dbf7f0180db   2 weeks ago   46.7MB
public.ecr.aws/eks-distro/kubernetes/coredns                   1.6.7                6dbf7f0180db   2 weeks ago   46.7MB
Creating the cluster
On the master node, create the kubeadm configuration file.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
etcd:
  local:
    imageRepository: public.ecr.aws/eks-distro/etcd-io
    imageTag: v3.4.14-eks-1-18-1
    extraArgs:
      listen-peer-urls: "https://0.0.0.0:2380"
      listen-client-urls: "https://0.0.0.0:2379"
imageRepository: public.ecr.aws/eks-distro/kubernetes
kubernetesVersion: v1.18.9-eks-1-18-1
EOF
Run kubeadm init.
[root@k8s-master ~]# kubeadm init --config kubeadm-config.yaml
W1215 09:53:44.445954   16301 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.9-eks-1-18-1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.41.31]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.41.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.41.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1215 09:53:49.896879   16301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1215 09:53:49.898664   16301 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002391 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fbov4q.kb1kujlc7zzjmpjj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.41.31:6443 --token fbov4q.kb1kujlc7zzjmpjj \
    --discovery-token-ca-cert-hash sha256:816d0342e4089c11af7a4109fe7096cc9d05421389469251a01619046a209fbf
Switch back to the regular user and set up the kubeconfig.
sudo rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Confirm that kubectl works.
[centos@k8s-master ~]$ kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   46s   v1.18.9-eks-1-18-1
The node is already Ready even though no CNI plugin has been installed.
(Note) I later realized this is most likely because the earlier CNI cleanup was incomplete. Rebooting the nodes after kubeadm reset, so that the network interfaces and iptables rules Calico created get cleared, and deleting the leftover Calico files under /etc/cni/net.d/ should probably take care of it.
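Based on that guess, the per-node cleanup could be sketched as follows (the helper name `remove_cni_conf` is mine; `CNI_CONF_DIR` defaults to the standard path but can be overridden):

```shell
# Cleanup sketch for each node: reset kubeadm, remove leftover CNI configs,
# then reboot so Calico's interfaces and iptables rules disappear as well.
CNI_CONF_DIR=${CNI_CONF_DIR:-/etc/cni/net.d}

remove_cni_conf() {
  rm -f "$CNI_CONF_DIR"/* 2>/dev/null
  return 0
}

# As root on each node:
#   kubeadm reset -f
#   remove_cni_conf
#   reboot
```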
Install the CNI plugin.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
On each worker node, run the kubeadm join command shown above as root.
[root@k8s-node1 ~]# kubeadm join 172.31.41.31:6443 --token fbov4q.kb1kujlc7zzjmpjj \
>     --discovery-token-ca-cert-hash sha256:816d0342e4089c11af7a4109fe7096cc9d05421389469251a01619046a209fbf
W1215 09:55:33.026472   24428 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Do the same on the second worker.
Confirm the nodes have joined.
[centos@k8s-master ~]$ kubectl get node -o wide
NAME         STATUS   ROLES    AGE     VERSION              INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    master   5m51s   v1.18.9-eks-1-18-1   172.31.41.31    <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
k8s-node1    Ready    <none>   4m18s   v1.18.9-eks-1-18-1   172.31.35.10    <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
k8s-node2    Ready    <none>   4m13s   v1.18.9-eks-1-18-1   172.31.41.248   <none>        CentOS Linux 7 (Core)   3.10.0-1160.6.1.el7.x86_64   docker://20.10.1
[centos@k8s-master ~]$ kubectl get po -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-59877c7fb4-qz4tx   1/1     Running   0          115s    192.168.169.129   k8s-node2    <none>           <none>
kube-system   calico-node-q8kjv                          1/1     Running   0          115s    172.31.41.248     k8s-node2    <none>           <none>
kube-system   calico-node-x6xrj                          1/1     Running   0          115s    172.31.35.10      k8s-node1    <none>           <none>
kube-system   calico-node-xvxzl                          1/1     Running   0          115s    172.31.41.31      k8s-master   <none>           <none>
kube-system   coredns-8f7b4cf65-5769s                    1/1     Running   0          9m25s   192.168.235.194   k8s-master   <none>           <none>
kube-system   coredns-8f7b4cf65-bcttj                    1/1     Running   0          9m25s   192.168.235.193   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0          9m42s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          9m43s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          9m43s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-proxy-j9j8k                           1/1     Running   0          9m25s   172.31.41.31      k8s-master   <none>           <none>
kube-system   kube-proxy-nfb86                           1/1     Running   0          8m13s   172.31.35.10      k8s-node1    <none>           <none>
kube-system   kube-proxy-w948b                           1/1     Running   0          8m8s    172.31.41.248     k8s-node2    <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          9m42s   172.31.41.31      k8s-master   <none>           <none>
Checking that it works
Deploy a sample application and expose it via NodePort.
[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[centos@k8s-master ~]$ kubectl expose deployment nginx --type NodePort --port 80
service/nginx exposed
[centos@k8s-master ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        12m
nginx        NodePort    10.107.230.104   <none>        80:30772/TCP   5s
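The NodePort assigned above (30772) is random by default, so rather than reading it off the table it can be queried with a jsonpath expression; a small helper (the function name is mine):

```shell
# Read the NodePort assigned to the nginx service.
nginx_nodeport() {
  kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
}

# e.g. curl "http://<node-public-ip>:$(nginx_nodeport)"
```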
Open the port in the security group and confirm the application is reachable.