Trying KubeFed on EKS

Notes from trying out KubeFed on EKS.

Component            Version
Kubernetes version   1.18.9
Platform version     eks.3
kubefedctl           0.6.1
kubefed              0.6.1
kubefed chart        0.6.1

Preparation

Install the kubefedctl command on the local machine: download the binary from the releases page and put it somewhere on the PATH.
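
For reference, the download could look roughly like the following; the asset name is an assumption based on the kubefedctl-&lt;version&gt;-&lt;os&gt;-&lt;arch&gt;.tgz naming used on the release page (adjust OS and architecture for your machine).

# Sketch of the download; the asset naming pattern is assumed from the release page.
VERSION=0.6.1
OS=darwin    # or linux
ARCH=amd64
curl -LO "https://github.com/kubernetes-sigs/kubefed/releases/download/v${VERSION}/kubefedctl-${VERSION}-${OS}-${ARCH}.tgz"
tar -xzf "kubefedctl-${VERSION}-${OS}-${ARCH}.tgz"
chmod +x kubefedctl
sudo mv kubefedctl /usr/local/bin/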

$ kubefedctl version
kubefedctl version: version.Info{Version:"v0.6.1-1-g1eae3323", GitCommit:"1eae3323499765ee3f7a59e9fb7b6e7f214759c0", GitTreeState:"clean", BuildDate:"2021-01-25T16:45:24Z", GoVersion:"go1.15.3", Compiler:"gc", Platform:"darwin/amd64"}

Create three clusters, management, member1, and member2, in the same VPC. They could be split across VPCs or regions, but that seems to only add the question of whether to set up VPC peering, so I skip it here.

# management.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: management
  region: ap-northeast-1
  version: "1.18"

vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

# member1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: member1
  region: ap-northeast-1
  version: "1.18"

vpc:
  id: "vpc-069df5f127c927d4b"
  subnets:
    public:
      ap-northeast-1a:
          id: "subnet-09885490580d35e8b"
      ap-northeast-1c:
          id: "subnet-04a6c3c9d5475f527"
    private:
      ap-northeast-1a:
          id: "subnet-0d9342027be0bfba4"
      ap-northeast-1c:
          id: "subnet-02897ae1574ed897a"

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

# member2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: member2
  region: ap-northeast-1
  version: "1.18"

vpc:
  id: "vpc-069df5f127c927d4b"
  subnets:
    public:
      ap-northeast-1a:
          id: "subnet-09885490580d35e8b"
      ap-northeast-1c:
          id: "subnet-04a6c3c9d5475f527"
    private:
      ap-northeast-1a:
          id: "subnet-0d9342027be0bfba4"
      ap-northeast-1c:
          id: "subnet-02897ae1574ed897a"

managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: default

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

Create the three clusters.

eksctl create cluster -f management.yaml
eksctl create cluster -f member1.yaml
eksctl create cluster -f member2.yaml

Installing KubeFed

Install KubeFed using the Helm chart.

helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm repo update

Check that the chart is available.

$ helm search repo kubefed
NAME                    CHART VERSION   APP VERSION     DESCRIPTION       
kubefed-charts/kubefed  0.6.1                           KubeFed helm chart

Install from the chart.

$ helm --namespace kube-federation-system upgrade -i kubefed kubefed-charts/kubefed --version=0.6.1 --create-namespace
Release "kubefed" does not exist. Installing it now.
NAME: kubefed
LAST DEPLOYED: Mon Mar 15 17:52:23 2021
NAMESPACE: kube-federation-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check the deployed resources.

$ k get all -n kube-federation-system
NAME                                              READY   STATUS    RESTARTS   AGE
pod/kubefed-admission-webhook-7dff5dfcd4-bvcz2    1/1     Running   0          64s
pod/kubefed-controller-manager-7f8997d65f-hvjxq   1/1     Running   0          43s
pod/kubefed-controller-manager-7f8997d65f-w9q9v   1/1     Running   0          41s

NAME                                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubefed-admission-webhook                    ClusterIP   172.20.0.4       <none>        443/TCP    64s
service/kubefed-controller-manager-metrics-service   ClusterIP   172.20.224.221   <none>        9090/TCP   64s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubefed-admission-webhook    1/1     1            1           64s
deployment.apps/kubefed-controller-manager   2/2     2            2           64s

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/kubefed-admission-webhook-7dff5dfcd4    1         1         1       64s
replicaset.apps/kubefed-controller-manager-7f8997d65f   2         2         2       43s
replicaset.apps/kubefed-controller-manager-84fdcb4bf8   0         0         0       64s

This namespace needs a name label; it is added automatically when deploying from the chart.

$ k get ns --show-labels
NAME                     STATUS   AGE     LABELS
default                  Active   41m     <none>
kube-federation-system   Active   5m25s   name=kube-federation-system
kube-node-lease          Active   41m     <none>
kube-public              Active   41m     <none>
kube-system              Active   41m     <none>
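
The chart takes care of this here, but if the namespace were created by hand (for example when applying the manifests directly), the label would presumably have to be added explicitly, e.g.:

# Hypothetical manual step; not needed when the chart adds the label for you.
kubectl label namespace kube-federation-system name=kube-federation-system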

The admission webhooks are also registered.

$ k get mutatingwebhookconfigurations.admissionregistration.k8s.io
NAME                            WEBHOOKS   AGE
mutation.core.kubefed.io        1          72m
pod-identity-webhook            1          108m
vpc-resource-mutating-webhook   1          108m
$ k get validatingwebhookconfigurations.admissionregistration.k8s.io
NAME                              WEBHOOKS   AGE
validations.core.kubefed.io       3          72m
vpc-resource-validating-webhook   1          108m

Registering the clusters

Connect to the clusters and check the kubeconfig contexts.

$ k config get-contexts
CURRENT   NAME                                           CLUSTER                               AUTHINFO                                       NAMESPACE
          kind-kind                                      kind-kind                             kind-kind                                      
*         sotosugi@management.ap-northeast-1.eksctl.io   management.ap-northeast-1.eksctl.io   sotosugi@management.ap-northeast-1.eksctl.io   
          sotosugi@member1.ap-northeast-1.eksctl.io      member1.ap-northeast-1.eksctl.io      sotosugi@member1.ap-northeast-1.eksctl.io      
          sotosugi@production.ap-northeast-1.eksctl.io   production.ap-northeast-1.eksctl.io   sotosugi@production.ap-northeast-1.eksctl.io   
          sotosugi@staging.ap-northeast-1.eksctl.io      staging.ap-northeast-1.eksctl.io      sotosugi@staging.ap-northeast-1.eksctl.io      kube-system

A cluster would be registered with a command like the following, but kubefedctl cannot properly handle the context names that eksctl generates, and the command fails.

kubefedctl join member1 \
  --cluster-context sotosugi@member1.ap-northeast-1.eksctl.io \
  --host-cluster-context sotosugi@management.ap-northeast-1.eksctl.io
$ kubefedctl join member1 \
>   --cluster-context sotosugi@member1.ap-northeast-1.eksctl.io \
>   --host-cluster-context sotosugi@management.ap-northeast-1.eksctl.io
F0315 18:06:17.087427   50344 join.go:127] Error: ServiceAccount "member1-sotosugi@management.ap-northeast-1.eksctl.io" is invalid: metadata.name: Invalid value: "member1-sotosugi@management.ap-northeast-1.eksctl.io": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Adjust .kube/config by hand.

- context:
    cluster: management.ap-northeast-1.eksctl.io
    user: sotosugi@management.ap-northeast-1.eksctl.io
  name: sotosugi@management.ap-northeast-1.eksctl.io
- context:
    cluster: member1.ap-northeast-1.eksctl.io
    user: sotosugi@member1.ap-northeast-1.eksctl.io
  name: sotosugi@member1.ap-northeast-1.eksctl.io
- context:
    cluster: member2.ap-northeast-1.eksctl.io
    user: sotosugi@member2.ap-northeast-1.eksctl.io
  name: sotosugi@member2.ap-northeast-1.eksctl.io

Rewrite the contexts above as shown below, and fix the other entries so that everything stays consistent.

- context:
    cluster: management
    user: sotosugi-management
  name: sotosugi-management
- context:
    cluster: member1
    user: sotosugi-member1
  name: sotosugi-member1
- context:
    cluster: member2
    user: sotosugi-member2
  name: sotosugi-member2
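
Since the error comes from the ServiceAccount name that kubefedctl derives from the host cluster context name, renaming just the context names might be enough; kubectl config rename-context can do that without editing the file. This is an untested alternative, and the cluster/user entries above were also renamed purely for consistency.

# Untested alternative: rename only the context names, leaving cluster/user entries as-is.
kubectl config rename-context sotosugi@management.ap-northeast-1.eksctl.io sotosugi-management
kubectl config rename-context sotosugi@member1.ap-northeast-1.eksctl.io sotosugi-member1
kubectl config rename-context sotosugi@member2.ap-northeast-1.eksctl.io sotosugi-member2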

Check the contexts again.

$ k config get-contexts
CURRENT   NAME                                           CLUSTER                               AUTHINFO                                       NAMESPACE
          kind-kind                                      kind-kind                             kind-kind                                      
          sotosugi-management                            management                            sotosugi-management                            
          sotosugi-member1                               member1                               sotosugi-member1                               
          sotosugi-member2                               member2                               sotosugi-member2                               
          sotosugi@production.ap-northeast-1.eksctl.io   production.ap-northeast-1.eksctl.io   sotosugi@production.ap-northeast-1.eksctl.io   
          sotosugi@staging.ap-northeast-1.eksctl.io      staging.ap-northeast-1.eksctl.io      sotosugi@staging.ap-northeast-1.eksctl.io      kube-system

Now retry, adding verbose log output.

kubefedctl join member1 \
  --cluster-context sotosugi-member1 \
  --host-cluster-context sotosugi-management \
  --v=2
$ kubefedctl join member1 \
>   --cluster-context sotosugi-member1 \
>   --host-cluster-context sotosugi-management \
>   --v=2
I0315 18:22:33.415339   53014 join.go:160] Args and flags: name member1, host: sotosugi-management, host-system-namespace: kube-federation-system, kubeconfig: , cluster-context: sotosugi-member1, secret-name: , dry-run: false
I0315 18:22:33.789387   53014 join.go:241] Performing preflight checks.
I0315 18:22:33.900167   53014 join.go:247] Creating kube-federation-system namespace in joining cluster
I0315 18:22:33.947034   53014 join.go:254] Created kube-federation-system namespace in joining cluster
I0315 18:22:33.947065   53014 join.go:408] Creating service account in joining cluster: member1
I0315 18:22:33.968599   53014 join.go:418] Created service account: member1-sotosugi-management in joining cluster: member1
I0315 18:22:33.968658   53014 join.go:445] Creating cluster role and binding for service account: member1-sotosugi-management in joining cluster: member1
I0315 18:22:34.052268   53014 join.go:454] Created cluster role and binding for service account: member1-sotosugi-management in joining cluster: member1
I0315 18:22:34.052286   53014 join.go:814] Creating cluster credentials secret in host cluster
I0315 18:22:34.090902   53014 join.go:842] Using secret named: member1-sotosugi-management-token-5qd76
I0315 18:22:34.119917   53014 join.go:887] Created secret in host cluster named: member1-sp98w
I0315 18:22:34.173581   53014 join.go:282] Created federated cluster resource
kubefedctl join member2 \
  --cluster-context sotosugi-member2 \
  --host-cluster-context sotosugi-management \
  --v=2
$ kubefedctl join member2 \
>   --cluster-context sotosugi-member2 \
>   --host-cluster-context sotosugi-management \
>   --v=2
I0315 18:23:16.870401   53200 join.go:160] Args and flags: name member2, host: sotosugi-management, host-system-namespace: kube-federation-system, kubeconfig: , cluster-context: sotosugi-member2, secret-name: , dry-run: false
I0315 18:23:17.239271   53200 join.go:241] Performing preflight checks.
I0315 18:23:17.376232   53200 join.go:247] Creating kube-federation-system namespace in joining cluster
I0315 18:23:17.433407   53200 join.go:254] Created kube-federation-system namespace in joining cluster
I0315 18:23:17.433441   53200 join.go:408] Creating service account in joining cluster: member2
I0315 18:23:17.451068   53200 join.go:418] Created service account: member2-sotosugi-management in joining cluster: member2
I0315 18:23:17.451109   53200 join.go:445] Creating cluster role and binding for service account: member2-sotosugi-management in joining cluster: member2
I0315 18:23:17.523683   53200 join.go:454] Created cluster role and binding for service account: member2-sotosugi-management in joining cluster: member2
I0315 18:23:17.523701   53200 join.go:814] Creating cluster credentials secret in host cluster
I0315 18:23:17.551803   53200 join.go:842] Using secret named: member2-sotosugi-management-token-sdl8j
I0315 18:23:17.580312   53200 join.go:887] Created secret in host cluster named: member2-h7txt
I0315 18:23:17.637979   53200 join.go:282] Created federated cluster resource

Check the registered clusters.

$ kubectl -n kube-federation-system get kubefedclusters
NAME      AGE     READY
member1   2m38s   True
member2   115s    True

Looking back at the logs, kubefedctl also creates the kube-federation-system Namespace in each target cluster, creates a ServiceAccount there, and grants it permissions.

$ kubectx sotosugi-member1
Switched to context "sotosugi-member1".
$ k get sa -n kube-federation-system
NAME                          SECRETS   AGE
default                       1         9m4s
member1-sotosugi-management   1         9m4s

Checking the granted permissions shows that the ServiceAccount is allowed to do pretty much anything.

$ k get clusterrolebinding | grep -e kubefed -e NAME
NAME                                                     ROLE                                                                 AGE
kubefed-controller-manager:member1-sotosugi-management   ClusterRole/kubefed-controller-manager:member1-sotosugi-management   20m
$ k get clusterrole kubefed-controller-manager:member1-sotosugi-management -o yaml | k neat
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubefed-controller-manager:member1-sotosugi-management
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - get

The host cluster stores the token of the ServiceAccount that was created in the target cluster (decoding both tokens shows they match).

$ k get secret -n kube-federation-system
NAME                                     TYPE                                  DATA   AGE
default-token-52kmm                      kubernetes.io/service-account-token   3      45m
kubefed-admission-webhook-serving-cert   kubernetes.io/tls                     2      45m
kubefed-admission-webhook-token-5n8jj    kubernetes.io/service-account-token   3      45m
kubefed-controller-token-2529v           kubernetes.io/service-account-token   3      45m
member1-sp98w                            Opaque                                1      15m
member2-h7txt                            Opaque                                1      14m
sh.helm.release.v1.kubefed.v1            helm.sh/release.v1                    1      45m
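
To confirm that the tokens match, something like the following should work, assuming the Opaque secret created by kubefedctl stores the token under a token data key (an assumption on my part):

# Token saved on the host cluster for member1 (the "token" data key is assumed)
kubectl --context sotosugi-management -n kube-federation-system \
  get secret member1-sp98w -o jsonpath='{.data.token}' | base64 --decode
# ServiceAccount token on the member cluster; the two values should be identical
kubectl --context sotosugi-member1 -n kube-federation-system \
  get secret member1-sotosugi-management-token-5qd76 -o jsonpath='{.data.token}' | base64 --decode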

The KubeFedCluster resource holds the reference to this Secret together with the API server endpoint, CA certificate, and so on. Registering a cluster without kubefedctl therefore looks possible, but tedious.

$ kubectl -n kube-federation-system get kubefedclusters member1 -o yaml | k neat
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: member1
  namespace: kube-federation-system
spec:
  apiEndpoint: https://XXXX.gr7.ap-northeast-1.eks.amazonaws.com
  caBundle: XXXX
  secretRef:
    name: member1-sp98w
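
As a rough, untested sketch, manual registration would presumably mean creating an equivalent Secret and KubeFedCluster yourself. The token data key and the minimal set of spec fields are assumptions based on what kubefedctl generated above; the XXXX placeholders stand for the member cluster's endpoint and CA.

# Hypothetical manual registration of member1; kubefedctl normally generates all of this.
TOKEN=$(kubectl --context sotosugi-member1 -n kube-federation-system \
  get secret member1-sotosugi-management-token-5qd76 -o jsonpath='{.data.token}' | base64 --decode)
kubectl --context sotosugi-management -n kube-federation-system \
  create secret generic member1-manual --from-literal=token="${TOKEN}"
cat <<EOF | kubectl --context sotosugi-management apply -f -
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: member1
  namespace: kube-federation-system
spec:
  apiEndpoint: https://XXXX.gr7.ap-northeast-1.eks.amazonaws.com  # member1 API endpoint
  caBundle: XXXX                                                  # member1 cluster CA, base64-encoded
  secretRef:
    name: member1-manual
EOF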

Trying it out

Create a local Namespace in the host cluster.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test
EOF

Create a FederatedNamespace in the host cluster. The local Namespace and the FederatedNamespace must have the same name.

cat <<EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: test
  namespace: test
spec:
  placement:
    clusters:
    - name: member1
    - name: member2
EOF

Check that the namespace has been created in each target cluster.

$ k --context sotosugi-member1 get ns
NAME                     STATUS   AGE
default                  Active   71m
kube-federation-system   Active   27m
kube-node-lease          Active   71m
kube-public              Active   71m
kube-system              Active   71m
test                     Active   19s
$ k --context sotosugi-member2 get ns
NAME                     STATUS   AGE
default                  Active   52m
kube-federation-system   Active   26m
kube-node-lease          Active   52m
kube-public              Active   52m
kube-system              Active   52m
test                     Active   29s

Create a FederatedDeployment.

cat <<EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
  placement:
    clusters:
    - name: member2
    - name: member1
  overrides:
  - clusterName: member2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
    - path: "/spec/template/spec/containers/0/image"
      value: "nginx:1.17.0-alpine"
    - path: "/metadata/annotations"
      op: "add"
      value:
        foo: bar
EOF

Check the Deployments in each cluster.

$ k --context sotosugi-member1 -n test get deploy -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES   SELECTOR
test   3/3     3            3           112s   nginx        nginx    app=nginx
$ k --context sotosugi-member2 -n test get deploy -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                SELECTOR
test   5/5     5            5           116s   nginx        nginx:1.17.0-alpine   app=nginx
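
Propagation can also be inspected from the host cluster side; the FederatedDeployment resource should carry per-cluster propagation status (output omitted here):

# View the FederatedDeployment on the host cluster; its status should show per-cluster state.
kubectl --context sotosugi-management -n test get federateddeployment test -o yaml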

Finally, look at the custom resources installed by KubeFed.

$ k api-resources | grep -e kubefed -e NAME
NAME                              SHORTNAMES   APIVERSION                            NAMESPACED   KIND
clusterpropagatedversions                      core.kubefed.io/v1alpha1              false        ClusterPropagatedVersion
federatedservicestatuses                       core.kubefed.io/v1alpha1              true         FederatedServiceStatus
federatedtypeconfigs              ftc          core.kubefed.io/v1beta1               true         FederatedTypeConfig
kubefedclusters                                core.kubefed.io/v1beta1               true         KubeFedCluster
kubefedconfigs                                 core.kubefed.io/v1beta1               true         KubeFedConfig
propagatedversions                             core.kubefed.io/v1alpha1              true         PropagatedVersion
dnsendpoints                                   multiclusterdns.kubefed.io/v1alpha1   true         DNSEndpoint
domains                                        multiclusterdns.kubefed.io/v1alpha1   true         Domain
ingressdnsrecords                              multiclusterdns.kubefed.io/v1alpha1   true         IngressDNSRecord
servicednsrecords                              multiclusterdns.kubefed.io/v1alpha1   true         ServiceDNSRecord
replicaschedulingpreferences      rsp          scheduling.kubefed.io/v1alpha1        true         ReplicaSchedulingPreference
federatedclusterroles                          types.kubefed.io/v1beta1              false        FederatedClusterRole
federatedconfigmaps               fcm          types.kubefed.io/v1beta1              true         FederatedConfigMap
federateddeployments              fdeploy      types.kubefed.io/v1beta1              true         FederatedDeployment
federatedingresses                fing         types.kubefed.io/v1beta1              true         FederatedIngress
federatedjobs                                  types.kubefed.io/v1beta1              true         FederatedJob
federatednamespaces               fns          types.kubefed.io/v1beta1              true         FederatedNamespace
federatedreplicasets              frs          types.kubefed.io/v1beta1              true         FederatedReplicaSet
federatedsecrets                               types.kubefed.io/v1beta1              true         FederatedSecret
federatedserviceaccounts          fsa          types.kubefed.io/v1beta1              true         FederatedServiceAccount
federatedservices                 fsvc         types.kubefed.io/v1beta1              true         FederatedService