Trying MinIO

Let's try MinIO. The documentation also has EKS-specific steps, but here I follow the ones labeled Upstream.

Creating the cluster

CLUSTER_NAME="minio"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml
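
Note that bootstrapClusterCreatorAdminPermissions is false, so only the Admin role in the access entry can reach the API server. If kubeconfig needs to be regenerated while assuming that role, something like the following should work (a sketch):

aws eks update-kubeconfig --region ap-northeast-1 --name ${CLUSTER_NAME}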

Create the nodes.

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m1
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m1.yaml

Check the nodes.

$ k get node
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-105-238.ap-northeast-1.compute.internal   Ready    <none>   24m   v1.29.0-eks-5e0fdde
ip-10-0-117-206.ap-northeast-1.compute.internal   Ready    <none>   24m   v1.29.0-eks-5e0fdde
ip-10-0-86-177.ap-northeast-1.compute.internal    Ready    <none>   24m   v1.29.0-eks-5e0fdde

Check the Pods.

$ k get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-cxt7p             2/2     Running   0          25m
kube-system   aws-node-dp2p4             2/2     Running   0          25m
kube-system   aws-node-vmj54             2/2     Running   0          25m
kube-system   coredns-676bf68468-bqbnp   1/1     Running   0          37m
kube-system   coredns-676bf68468-g846n   1/1     Running   0          37m
kube-system   kube-proxy-hkj5h           1/1     Running   0          25m
kube-system   kube-proxy-lms44           1/1     Running   0          25m
kube-system   kube-proxy-t87sp           1/1     Running   0          25m

Installing MinIO

First, install it with the default settings. Deploy the MinIO Operator.
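
This uses the MinIO plugin for kubectl. As a prerequisite sketch (assuming krew is already set up), it can be installed with:

kubectl krew update
kubectl krew install minio
kubectl minio version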

$ kubectl minio init
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
namespace/minio-operator created
serviceaccount/minio-operator created
clusterrole.rbac.authorization.k8s.io/minio-operator-role created
clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding created
customresourcedefinition.apiextensions.k8s.io/tenants.minio.min.io created
customresourcedefinition.apiextensions.k8s.io/policybindings.sts.min.io created
customresourcedefinition.apiextensions.k8s.io/miniojobs.job.min.io created
service/operator created
service/sts created
deployment.apps/minio-operator created
serviceaccount/console-sa created
secret/console-sa-secret created
clusterrole.rbac.authorization.k8s.io/console-sa-role created
clusterrolebinding.rbac.authorization.k8s.io/console-sa-binding created
configmap/console-env created
service/console created
deployment.apps/console created
-----------------

To open Operator UI, start a port forward using this command:

kubectl minio proxy -n minio-operator

-----------------

Check the Pods.

$ k get po -A
NAMESPACE        NAME                              READY   STATUS    RESTARTS   AGE
kube-system      aws-node-cxt7p                    2/2     Running   0          28m
kube-system      aws-node-dp2p4                    2/2     Running   0          28m
kube-system      aws-node-vmj54                    2/2     Running   0          28m
kube-system      coredns-676bf68468-bqbnp          1/1     Running   0          40m
kube-system      coredns-676bf68468-g846n          1/1     Running   0          40m
kube-system      kube-proxy-hkj5h                  1/1     Running   0          28m
kube-system      kube-proxy-lms44                  1/1     Running   0          28m
kube-system      kube-proxy-t87sp                  1/1     Running   0          28m
minio-operator   console-86878b559f-tkzts          1/1     Running   0          22s
minio-operator   minio-operator-54bf877d58-7rbx9   1/1     Running   0          22s
minio-operator   minio-operator-54bf877d58-mx64t   1/1     Running   0          22s

Access the Operator console.

$ kubectl minio proxy
Starting port forward of the Console UI.

To connect open a browser and go to http://localhost:9090

Current JWT to login: eyJhbGciOiJSUzI1NiIsImtpZCI6IlljTERycFVtNS1FdTBpMXBiYkZ0RDUyUUZIT1Fwdlk5MmtTTFI3bzlSY00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaW5pby1vcGVyYXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjb25zb2xlLXNhLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjb25zb2xlLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWMwODM0NjEtNzk3Yy00ZDc4LTlkZDgtNjUxNmYzYjJmNzU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1pbmlvLW9wZXJhdG9yOmNvbnNvbGUtc2EifQ.esXqD_rdGqn8cnca2_Rr83anD1TWQiUY8J0o4_JEh7Kk-vLaTFo83l1wisrBhooNWxqFCo-5Ypc0MRG7lMHoRLo8Zq4mcPCQN1uElnlNLalwZtgkmcu2khaV6SmonNyr0i7tw7mXXfx6VOiM6fSFQZMoK0YwXZx1Dso_TWnZo1eWPmuxGfzpSkUCvIdqgtAq0N7a2YYxmf9yvgCySTreNQEN1xVt6G2lo__KVN2F0E0PBJJDofnLeRNz3u2hBtHYFgFgSbnWCjRMscxXT8s83JtoqTFbPck-ZVXKh8nWqXaRqY7rIoZq0lrTYz5piALpW-xlgdFLtjaw-dHDhycbcw

Forwarding from 0.0.0.0:9090 -> 9090
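
If the proxy output scrolls away, the same JWT can be read back from the console-sa-secret created above (a sketch):

kubectl -n minio-operator get secret console-sa-secret -o jsonpath='{.data.token}' | base64 -d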

Installing the EBS CSI Driver

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > addon.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

addons:
  - name: vpc-cni
    version: latest
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    # serviceAccountRoleARN: arn:aws:iam::XXXXXXXXXXXX:role/eksctl-fully-private-addon-iamserviceaccount-Role1-LRQ0AZXOE60K
    configurationValues: |-
      env:
        WARM_IP_TARGET: "2"
        MINIMUM_IP_TARGET: "10"
    resolveConflicts: overwrite
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
  - name: aws-ebs-csi-driver
    version: latest
    wellKnownPolicies:
      ebsCSIController: true
EOF
eksctl create addon -f addon.yaml
$ k get po -A
NAMESPACE        NAME                                  READY   STATUS    RESTARTS   AGE
kube-system      aws-node-8n8mq                        2/2     Running   0          4m39s
kube-system      aws-node-cf5b5                        2/2     Running   0          5m25s
kube-system      aws-node-dtvlm                        2/2     Running   0          5m2s
kube-system      coredns-5877997cb7-4hxql              1/1     Running   0          2m30s
kube-system      coredns-5877997cb7-8nf5z              1/1     Running   0          2m29s
kube-system      ebs-csi-controller-7cddb57f8d-9xrn2   5/6     Running   0          12s
kube-system      ebs-csi-controller-7cddb57f8d-hjk2w   5/6     Running   0          12s
kube-system      ebs-csi-node-btcwj                    3/3     Running   0          12s
kube-system      ebs-csi-node-cjnl7                    3/3     Running   0          12s
kube-system      ebs-csi-node-qd4tm                    3/3     Running   0          12s
kube-system      kube-proxy-2xm44                      1/1     Running   0          2m29s
kube-system      kube-proxy-k9fhs                      1/1     Running   0          2m26s
kube-system      kube-proxy-x4j2c                      1/1     Running   0          2m23s
minio-operator   console-86878b559f-tkzts              1/1     Running   0          20m
minio-operator   minio-operator-54bf877d58-7rbx9       1/1     Running   0          20m
minio-operator   minio-operator-54bf877d58-mx64t       1/1     Running   0          20m
$ k get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  61m
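
Only the default gp2 class exists; it still points at the in-tree provisioner name and is served by the EBS CSI driver through CSI migration. If a gp3 class backed directly by the CSI driver were wanted, a sketch would look like this (not used in the rest of this post):

cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF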

Creating a tenant

This looks possible from the Operator console as well, but here I do it with kubectl.

First, generate a Tenant manifest with --output to see what it contains.

$ kubectl minio tenant create minio1 \
  --capacity 16Gi \
  --servers 4 \
  --volumes 8 \
  --namespace minio-tenant-1 \
  --storage-class gp2 \
  --output
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  creationTimestamp: null
  name: minio1
  namespace: minio-tenant-1
scheduler:
  name: ""
spec:
  certConfig:
    commonName: '*.minio1-hl.minio-tenant-1.svc.cluster.local'
    dnsNames:
    - minio1-ss-0-{0...3}.minio1-hl.minio-tenant-1.svc.cluster.local
    organizationName:
    - system:nodes
  configuration:
    name: minio1-env-configuration
  exposeServices: {}
  features:
    enableSFTP: false
  image: minio/minio:RELEASE.2024-02-09T21-25-16Z
  imagePullPolicy: IfNotPresent
  imagePullSecret: {}
  mountPath: /export
  podManagementPolicy: Parallel
  pools:
  - affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: v1.min.io/tenant
              operator: In
              values:
              - minio1
            - key: v1.min.io/pool
              operator: In
              values:
              - ""
          topologyKey: kubernetes.io/hostname
    name: ss-0
    resources: {}
    servers: 4
    volumeClaimTemplate:
      apiVersion: v1
      kind: persistentvolumeclaims
      metadata:
        creationTimestamp: null
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: gp2
      status: {}
    volumesPerServer: 2
  requestAutoCert: true
  serviceAccountName: minio1-sa
  users:
  - name: minio1-user-1
status:
  availableReplicas: 0
  certificates: {}
  currentState: ""
  pools: null
  revision: 0
  syncVersion: ""
  usage: {}

---
apiVersion: v1
data:
  config.env: ZXhwb3J0IE1JTklPX1JPT1RfUEFTU1dPUkQ9IklCU0hiN1JLQ1ZOYWpzOHo2VEt0bWNlZmppdzg4Y3JseEVhZm44anAiCmV4cG9ydCBNSU5JT19ST09UX1VTRVI9IjJDQk9aRVZRWlkyOFdBU0tTOVdMIgo=
kind: Secret
metadata:
  creationTimestamp: null
  name: minio1-env-configuration
  namespace: minio-tenant-1

---
apiVersion: v1
data:
  CONSOLE_ACCESS_KEY: NVBOMjQxUTNFTUU0WFNBUTNZWFE=
  CONSOLE_SECRET_KEY: bkVqa3RsWEFJYlJyQXZvT3dlMFVMQ003eHVWNEliRFowRVI3QjFObg==
kind: Secret
metadata:
  creationTimestamp: null
  name: minio1-user-1
  namespace: minio-tenant-1

Apply it. The namespace has to exist first, so create it, then run the same command without --output to actually create the tenant.

$ k create ns minio-tenant-1
namespace/minio-tenant-1 created
$ kubectl minio tenant create minio1 \
  --capacity 16Gi \
  --servers 4 \
  --volumes 8 \
  --namespace minio-tenant-1 \
  --storage-class gp2
W0308 18:58:11.797099   36063 warnings.go:70] unknown field "spec.pools[0].volumeClaimTemplate.metadata.creationTimestamp"

Tenant 'minio1' created in 'minio-tenant-1' Namespace

  Username: ET3U6W5UKQXFG7FDY1Q0
  Password: AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE
  Note: Copy the credentials to a secure location. MinIO will not display these again.

APPLICATION     SERVICE NAME    NAMESPACE       SERVICE TYPE    SERVICE PORT
MinIO           minio           minio-tenant-1  ClusterIP       443         
Console         minio1-console  minio-tenant-1  ClusterIP       9443        
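
If the displayed credentials are lost, they can still be read back from the Secrets generated alongside the Tenant (a sketch; the names come from the manifest shown above):

kubectl -n minio-tenant-1 get secret minio1-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d
kubectl -n minio-tenant-1 get secret minio1-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d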

Check the Pods.

$ k get po -A
NAMESPACE        NAME                                  READY   STATUS    RESTARTS       AGE
kube-system      aws-node-8n8mq                        2/2     Running   2 (5d7h ago)   8d
kube-system      aws-node-cf5b5                        2/2     Running   2 (5d7h ago)   8d
kube-system      aws-node-dtvlm                        2/2     Running   2 (5d7h ago)   8d
kube-system      coredns-5877997cb7-4hxql              1/1     Running   1 (5d7h ago)   8d
kube-system      coredns-5877997cb7-8nf5z              1/1     Running   1 (5d7h ago)   8d
kube-system      ebs-csi-controller-7cddb57f8d-9xrn2   6/6     Running   6 (5d7h ago)   8d
kube-system      ebs-csi-controller-7cddb57f8d-hjk2w   6/6     Running   6 (5d7h ago)   8d
kube-system      ebs-csi-node-btcwj                    3/3     Running   3 (5d7h ago)   8d
kube-system      ebs-csi-node-cjnl7                    3/3     Running   3 (5d7h ago)   8d
kube-system      ebs-csi-node-qd4tm                    3/3     Running   3 (5d7h ago)   8d
kube-system      kube-proxy-2xm44                      1/1     Running   1 (5d7h ago)   8d
kube-system      kube-proxy-k9fhs                      1/1     Running   1 (5d7h ago)   8d
kube-system      kube-proxy-x4j2c                      1/1     Running   1 (5d7h ago)   8d
minio-operator   console-86878b559f-jxxvg              1/1     Running   0              5m1s
minio-operator   minio-operator-54bf877d58-8jrvl       1/1     Running   0              5m1s
minio-operator   minio-operator-54bf877d58-fm8t7       1/1     Running   0              5m1s
minio-tenant-1   minio1-ss-0-0                         2/2     Running   0              2m3s
minio-tenant-1   minio1-ss-0-1                         2/2     Running   0              2m2s
minio-tenant-1   minio1-ss-0-2                         2/2     Running   0              2m2s
minio-tenant-1   minio1-ss-0-3                         2/2     Running   0              2m2s

Port-forward and access the console.

$ k -n minio-tenant-1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
minio            ClusterIP   172.20.1.251    <none>        443/TCP    2m34s
minio1-console   ClusterIP   172.20.15.213   <none>        9443/TCP   2m34s
minio1-hl        ClusterIP   None            <none>        9000/TCP   2m33s
$ k -n minio-tenant-1 port-forward svc/minio1-console 9443:9443
Forwarding from 127.0.0.1:9443 -> 9443
Forwarding from [::1]:9443 -> 9443

Check the PVs and PVCs.

$ k get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-1ebcb2c9-051c-4800-8974-e0664ac25fa3   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-3   gp2            <unset>                          4m30s
persistentvolume/pvc-2c60e4b0-3473-44ef-8bc7-a30043f5efcf   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-0   gp2            <unset>                          4m30s
persistentvolume/pvc-2d7e0cd1-d9e6-4ea5-8ab3-00272dc54350   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-0   gp2            <unset>                          4m30s
persistentvolume/pvc-535a3c5c-f7a8-4fe6-b7b2-ec2256e312d2   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-2   gp2            <unset>                          4m30s
persistentvolume/pvc-5aa0edf5-423f-431e-8547-7bc01a00ca25   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-2   gp2            <unset>                          4m30s
persistentvolume/pvc-84e16280-bb09-4cb3-a64e-1de7f4f8b469   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-1   gp2            <unset>                          4m30s
persistentvolume/pvc-b19e0010-0f93-4dbe-bdc3-3a22be6795e8   2Gi        RWO            Delete           Bound    minio-tenant-1/0-minio1-ss-0-3   gp2            <unset>                          4m30s
persistentvolume/pvc-bc73c448-5ed2-4740-bc85-be7481126b0e   2Gi        RWO            Delete           Bound    minio-tenant-1/1-minio1-ss-0-1   gp2            <unset>                          4m30s

NAMESPACE        NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-0   Bound    pvc-2d7e0cd1-d9e6-4ea5-8ab3-00272dc54350   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-1   Bound    pvc-84e16280-bb09-4cb3-a64e-1de7f4f8b469   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-2   Bound    pvc-535a3c5c-f7a8-4fe6-b7b2-ec2256e312d2   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/0-minio1-ss-0-3   Bound    pvc-b19e0010-0f93-4dbe-bdc3-3a22be6795e8   2Gi        RWO            gp2            <unset>                 4m34s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-0   Bound    pvc-2c60e4b0-3473-44ef-8bc7-a30043f5efcf   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-1   Bound    pvc-bc73c448-5ed2-4740-bc85-be7481126b0e   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-2   Bound    pvc-5aa0edf5-423f-431e-8547-7bc01a00ca25   2Gi        RWO            gp2            <unset>                 4m35s
minio-tenant-1   persistentvolumeclaim/1-minio1-ss-0-3   Bound    pvc-1ebcb2c9-051c-4800-8974-e0664ac25fa3   2Gi        RWO            gp2            <unset>                 4m34s

Access using the AWS CLI

Configure credentials for the AWS CLI. Here I use the console user credentials issued above.

$ aws configure --profile minio
AWS Access Key ID [None]: ET3U6W5UKQXFG7FDY1Q0
AWS Secret Access Key [None]: AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE
Default region name [None]: ap-northeast-1
Default output format [None]:

Set the signature version.

aws configure set s3.signature_version s3v4 --profile minio

.aws/config ends up looking like this:

[profile minio]
region = ap-northeast-1
s3 =
    signature_version = s3v4
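
The same profile can also be set up non-interactively, which is convenient for scripting (a sketch using the credentials issued above):

aws configure set aws_access_key_id ET3U6W5UKQXFG7FDY1Q0 --profile minio
aws configure set aws_secret_access_key AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE --profile minio
aws configure set region ap-northeast-1 --profile minio
aws configure set s3.signature_version s3v4 --profile minio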

Port-forward in a separate terminal.

$ k -n minio-tenant-1 port-forward svc/minio1-hl 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

Select the profile.

export AWS_PROFILE=minio

Run the AWS CLI against the MinIO endpoint.

$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 mb s3://hoge-bucket
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
make_bucket: hoge-bucket
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
2024-03-21 18:38:51 hoge-bucket
$ echo hello > hello.txt
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 cp hello.txt s3://hoge-bucket/
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
upload: ./hello.txt to s3://hoge-bucket/hello.txt
$ aws --no-verify-ssl --endpoint-url https://localhost:9000 s3 ls s3://hoge-bucket/
/opt/homebrew/Cellar/awscli/2.15.28/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py:1061: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
2024-03-21 19:08:13          6 hello.txt

An InsecureRequestWarning is printed because certificate verification is skipped, but creating and listing buckets and copying objects all work.
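
MinIO's own mc client works against the same endpoint (a sketch, assuming mc is installed locally; the alias name is arbitrary, and --insecure plays the same role as --no-verify-ssl):

mc --insecure alias set minio1 https://localhost:9000 ET3U6W5UKQXFG7FDY1Q0 AAnut9ff6x6jwABk4qSsF0nHJpvChKo8TH99ZUaE
mc --insecure ls minio1
mc --insecure ls minio1/hoge-bucket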

Access over SFTP

Change the tenant settings.

k -n minio-tenant-1 edit tenant minio1
spec:
...
  features:
    enableSFTP: true # changed from false to true

Port 8022 is added to the minio1-hl Service.

$ k -n minio-tenant-1 get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
minio            ClusterIP   172.20.1.251    <none>        443/TCP             13d
minio1-console   ClusterIP   172.20.15.213   <none>        9443/TCP            13d
minio1-hl        ClusterIP   None            <none>        9000/TCP,8022/TCP   13d

Port-forward this as well.

$ k -n minio-tenant-1 port-forward svc/minio1-hl 8022:8022
Forwarding from 127.0.0.1:8022 -> 8022
Forwarding from [::1]:8022 -> 8022

Connect with sftp.

$ sftp -P 8022 ET3U6W5UKQXFG7FDY1Q0@localhost
The authenticity of host '[localhost]:8022 ([::1]:8022)' can't be established.
ECDSA key fingerprint is SHA256:fzvIFWM20Ay8Nj4zo/K+gvu4blDaoHSf2p9fdcQA5JI.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:8022' (ECDSA) to the list of known hosts.
ET3U6W5UKQXFG7FDY1Q0@localhost's password:
Connected to localhost.
sftp> ls
hoge-bucket
sftp> ls hoge-bucket
hoge-bucket/hello.txt
sftp>
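
Transfers also work non-interactively, e.g. fetching the object as a local file (a sketch; the password prompt still appears):

sftp -P 8022 ET3U6W5UKQXFG7FDY1Q0@localhost:hoge-bucket/hello.txt /tmp/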

Trying Portworx

Try Portworx on EKS, following the documentation below.

Creating the IAM policy

Create an IAM policy for Portworx, to be attached to the nodes later.

cat << EOF > portworx-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:ModifyVolume",
                "ec2:DetachVolume",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteTags",
                "ec2:DeleteVolume",
                "ec2:DescribeTags",
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumesModifications",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes",
                "ec2:DescribeInstances",
                "autoscaling:DescribeAutoScalingGroups"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
EOF
aws iam create-policy --policy-name portworx-policy --policy-document file://portworx-policy.json
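
To double-check the policy afterwards (optional):

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
aws iam get-policy --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/portworx-policy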

Creating the cluster

CLUSTER_NAME="portworx"
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.26"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true
EOF
eksctl create cluster -f cluster.yaml

Create the nodes.

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m1
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
        - arn:aws:iam::${AWS_ACCOUNT_ID}:policy/portworx-policy
EOF
eksctl create nodegroup -f m1.yaml

Also grant the Admin role access to the cluster.

CLUSTER_NAME="portworx"
USER_NAME="Admin:{{SessionName}}"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin"
eksctl create iamidentitymapping --cluster ${CLUSTER_NAME} --arn ${ROLE_ARN} --username ${USER_NAME} --group system:masters
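
The mapping can be verified with:

eksctl get iamidentitymapping --cluster ${CLUSTER_NAME}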

Check the nodes.

$ k get node
NAME                                              STATUS   ROLES    AGE     VERSION
ip-10-0-104-77.ap-northeast-1.compute.internal    Ready    <none>   2m21s   v1.26.10-eks-e71965b
ip-10-0-110-111.ap-northeast-1.compute.internal   Ready    <none>   2m20s   v1.26.10-eks-e71965b
ip-10-0-76-90.ap-northeast-1.compute.internal     Ready    <none>   2m22s   v1.26.10-eks-e71965b

Check the Pods.

$ k get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6c862             1/1     Running   0          2m29s
kube-system   aws-node-rj6vf             1/1     Running   0          2m31s
kube-system   aws-node-tqxzf             1/1     Running   0          2m30s
kube-system   coredns-6cbf959cdb-pmwcc   1/1     Running   0          15m
kube-system   coredns-6cbf959cdb-xsm45   1/1     Running   0          15m
kube-system   kube-proxy-88gxp           1/1     Running   0          2m30s
kube-system   kube-proxy-khpmr           1/1     Running   0          2m29s
kube-system   kube-proxy-rmw4j           1/1     Running   0          2m31s

Installing Portworx

Create an account on Portworx Central and generate the manifests.

Create a Namespace and apply the generated manifests.

$ kubectl create ns portworx
namespace/portworx created
$ kubectl apply -f 'https://install.portworx.com/3.0?comp=pxoperator&kbver=1.26.10&ns=portworx'
serviceaccount/portworx-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created
$ kubectl apply -f 'https://install.portworx.com/3.0?operator=true&mc=false&kbver=1.26.10&ns=portworx&oem=esse&user=270feccd-5901-405a-88e1-dba1f4b46359&b=true&iop=6&s=%22type%3Dgp3%2Csize%3D150%22&c=px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c&eks=true&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'
storagecluster.core.libopenstorage.org/px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c created
secret/px-essential created

It takes quite a while for all the Pods to start.
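
Instead of polling manually, kubectl wait can block until everything is Ready (a sketch; the timeout is arbitrary):

kubectl -n portworx wait pod --all --for=condition=Ready --timeout=15m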

$ k -n portworx get po   
NAME                                                    READY   STATUS    RESTARTS        AGE
autopilot-c9fb4b9b6-mcn9n                               1/1     Running   0               6m5s
portworx-api-d6pws                                      1/1     Running   0               6m1s
portworx-api-k9hwl                                      1/1     Running   0               6m1s
portworx-api-tk82c                                      1/1     Running   0               6m1s
portworx-kvdb-jjwdd                                     1/1     Running   0               2m58s
portworx-kvdb-wp4v9                                     1/1     Running   0               2m48s
portworx-operator-579774cc76-tpwfw                      1/1     Running   0               8m30s
portworx-pvc-controller-7cb6d6c596-gvfcs                1/1     Running   0               6m
portworx-pvc-controller-7cb6d6c596-pgb9g                1/1     Running   0               6m
portworx-pvc-controller-7cb6d6c596-smv5j                1/1     Running   0               6m
prometheus-px-prometheus-0                              2/2     Running   0               5m53s
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn   2/2     Running   3 (3m34s ago)   6m
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-bxw86   2/2     Running   3 (3m32s ago)   6m
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-v5rm9   2/2     Running   3 (3m28s ago)   6m
px-csi-ext-795877cd5-5lljv                              4/4     Running   0               6m5s
px-csi-ext-795877cd5-qhvdp                              4/4     Running   0               6m5s
px-csi-ext-795877cd5-x7c5j                              4/4     Running   0               6m5s
px-prometheus-operator-6c4db44757-wbw62                 1/1     Running   0               6m2s
px-telemetry-phonehome-5qzk9                            2/2     Running   0               2m39s
px-telemetry-phonehome-6szb5                            2/2     Running   0               2m39s
px-telemetry-phonehome-db5vj                            2/2     Running   0               2m39s
px-telemetry-registration-79b94c999b-f56kq              2/2     Running   0               2m39s
stork-559cd7f58b-6zhns                                  1/1     Running   0               6m10s
stork-559cd7f58b-bw958                                  1/1     Running   0               6m10s
stork-559cd7f58b-l799w                                  1/1     Running   0               6m10s
stork-scheduler-5947f85df5-m84c7                        1/1     Running   0               6m10s
stork-scheduler-5947f85df5-mfv5l                        1/1     Running   0               6m10s
stork-scheduler-5947f85df5-xkdvr                        1/1     Running   0               6m10s

Monitoring the storage nodes

Confirm that the StorageNodes are Online.

$ kubectl -n portworx get storagenode
NAME                                              ID                                     STATUS   VERSION           AGE
ip-10-0-104-77.ap-northeast-1.compute.internal    5783748c-f126-4ff3-9c9c-9f811567b91f   Online   3.0.4.0-1396ef3   6m16s
ip-10-0-110-111.ap-northeast-1.compute.internal   8760a747-977d-4447-b685-c89d2ae7fa28   Online   3.0.4.0-1396ef3   6m16s
ip-10-0-76-90.ap-northeast-1.compute.internal     a46b8dcc-cf64-4ffa-ae68-6cafba903387   Online   3.0.4.0-1396ef3   6m16s

Try describing them.

$ kubectl -n portworx describe storagenode
Name:         ip-10-0-104-77.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5932
  UID:                     58a52e81-de86-4236-ab76-f45aa481aad0
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:30Z
    Message:               node is kvdb member listening on 10.0.104.77
    Status:                Online
    Type:                  NodeKVDB
    Last Transition Time:  2023-11-24T07:28:14Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.104.77
    Mgmt IP:  10.0.104.77
  Node Attributes:
    Kvdb:            true
    Storage:         true
  Node UID:          5783748c-f126-4ff3-9c9c-9f811567b91f
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  150Gi
    Used Size:   8063801099
Events:          <none>


Name:         ip-10-0-110-111.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5595
  UID:                     3322e854-134b-4152-b156-611b87e1be42
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:04Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.110.111
    Mgmt IP:  10.0.110.111
  Node Attributes:
    Kvdb:            false
    Storage:         false
  Node UID:          8760a747-977d-4447-b685-c89d2ae7fa28
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  0
    Used Size:   0
Events:          <none>


Name:         ip-10-0-76-90.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5597
  UID:                     252c6501-e2e9-47a8-8845-0b2a1d7f4cc4
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:04Z
    Message:               node is kvdb leader listening on 10.0.76.90
    Status:                Online
    Type:                  NodeKVDB
    Last Transition Time:  2023-11-24T07:28:04Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.76.90
    Mgmt IP:  10.0.76.90
  Node Attributes:
    Kvdb:            true
    Storage:         true
  Node UID:          a46b8dcc-cf64-4ffa-ae68-6cafba903387
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  150Gi
    Used Size:   8063801099
Events:          <none>

Checking the status

Check the status of the StorageCluster.

$ kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn  -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Healthy
Metering: Healthy
License: PX-Essential (lease renewal in 23h, 56m)
Node ID: a46b8dcc-cf64-4ffa-ae68-6cafba903387
        IP: 10.0.76.90
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE            REGION
        0       HIGH            raid0           150 GiB 7.5 GiB Online  ap-northeast-1a ap-northeast-1
        Local Storage Devices: 1 device
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/nvme1n1    STORAGE_MEDIUM_NVME     150 GiB         24 Nov 23 07:27 UTC
        total                   -                       150 GiB
        Cache Devices:
         * No cache devices
        Kvdb Device:
        Device Path     Size
        /dev/nvme2n1    32 GiB
         * Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
        Cluster ID: px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
        Cluster UUID: 9ae75a0d-c91f-41b6-91cb-161ce9ac3a76
        Scheduler: kubernetes
        Total Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
        IP              ID                                      SchedulerNodeName                               Auth            StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel                          OS
        10.0.76.90      a46b8dcc-cf64-4ffa-ae68-6cafba903387    ip-10-0-76-90.ap-northeast-1.compute.internal   Disabled        Yes             7.5 GiB 150 GiB         Online  Up (This node)  3.0.4.0-1396ef3 5.10.198-187.748.amzn2.x86_64   Amazon Linux 2
        10.0.104.77     5783748c-f126-4ff3-9c9c-9f811567b91f    ip-10-0-104-77.ap-northeast-1.compute.internal  Disabled        Yes             7.5 GiB 150 GiB         Online  Up              3.0.4.0-1396ef3 5.10.198-187.748.amzn2.x86_64   Amazon Linux 2
        10.0.110.111    8760a747-977d-4447-b685-c89d2ae7fa28    ip-10-0-110-111.ap-northeast-1.compute.internal Disabled        No              0 B     0 B             Online  No Storage      3.0.4.0-1396ef3 5.10.198-187.748.amzn2.x86_64   Amazon Linux 2
Global Storage Pool
        Total Used      :  15 GiB
        Total Capacity  :  300 GiB

Apart from the nodes' root volumes, four EBS volumes have been created: presumably the two 150 GiB storage devices plus the two 32 GiB KVDB devices seen in the pxctl output.
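
The extra volumes can also be spotted with the AWS CLI, e.g. by filtering on the 150 GiB size from the install spec above (a sketch; the 32 GiB KVDB volumes can be found the same way):

aws ec2 describe-volumes --filters Name=size,Values=150 \
  --query 'Volumes[].{Id:VolumeId,AZ:AvailabilityZone,State:State}' --output table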

$ kubectl -n portworx get storagecluster
NAME                                              CLUSTER UUID                           STATUS    VERSION   AGE
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c   9ae75a0d-c91f-41b6-91cb-161ce9ac3a76   Running   3.0.4     10m
$ kubectl -n portworx get storagenode
NAME                                              ID                                     STATUS   VERSION           AGE
ip-10-0-104-77.ap-northeast-1.compute.internal    5783748c-f126-4ff3-9c9c-9f811567b91f   Online   3.0.4.0-1396ef3   8m54s
ip-10-0-110-111.ap-northeast-1.compute.internal   8760a747-977d-4447-b685-c89d2ae7fa28   Online   3.0.4.0-1396ef3   8m54s
ip-10-0-76-90.ap-northeast-1.compute.internal     a46b8dcc-cf64-4ffa-ae68-6cafba903387   Online   3.0.4.0-1396ef3   8m54s
$ kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn -- /opt/pwx/bin/pxctl cluster provision-status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
NODE ID                                 IP              HOSTNAME                                        NODE STATUS     POOL                                            POOL STATUS    IO_PRIORITY     SIZE    AVAILABLE       USED    PROVISIONED     ZONE            REGION          RACK
5783748c-f126-4ff3-9c9c-9f811567b91f    10.0.104.77     ip-10-0-104-77.ap-northeast-1.compute.internal  Up              0 ( 9485edb5-756e-49c3-8e68-64ae78e94f4b )      Online  HIGH            150 GiB 142 GiB         7.5 GiB 0 B             ap-northeast-1c ap-northeast-1  default
a46b8dcc-cf64-4ffa-ae68-6cafba903387    10.0.76.90      ip-10-0-76-90.ap-northeast-1.compute.internal   Up              0 ( 5568ee00-9629-440a-8f50-79aef47fda0d )      Online  HIGH            150 GiB 142 GiB         7.5 GiB 0 B             ap-northeast-1a ap-northeast-1  default

Testing

ReadWriteOnce

Check the StorageClasses.

$ kubectl get storageclass
NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                        kubernetes.io/aws-ebs           Delete          WaitForFirstConsumer   false                  25m
px-csi-db                            pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-cloud-snapshot             pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-cloud-snapshot-encrypted   pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-encrypted                  pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-local-snapshot             pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-local-snapshot-encrypted   pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-replicated                    pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-replicated-encrypted          pxd.portworx.com                Delete          Immediate              true                   10m
px-db                                kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-cloud-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-cloud-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-encrypted                      kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-local-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-local-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-replicated                        kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-replicated-encrypted              kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
stork-snapshot-sc                    stork-snapshot                  Delete          Immediate              true                   10m

Create a PVC.

cat << EOF > px-check-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: px-check-pvc
spec:
    storageClassName: px-csi-db
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 2Gi
EOF
$ kubectl apply -f px-check-pvc.yaml
persistentvolumeclaim/px-check-pvc created

Checking the PVC, it is stuck in Pending.

$ k get pvc -A
NAMESPACE   NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     px-check-pvc   Pending                                      px-csi-db      3m6s
$ k describe pvc px-check-pvc
Name:          px-check-pvc
Namespace:     default
StorageClass:  px-csi-db
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: pxd.portworx.com
               volume.kubernetes.io/storage-provisioner: pxd.portworx.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                  From                                                                              Message
  ----     ------                ----                 ----                                                                              -------
  Normal   Provisioning          75s (x8 over 3m23s)  pxd.portworx.com_px-csi-ext-795877cd5-qhvdp_8f2aa72a-b9db-4d3e-b7c1-d8b8e034ba08  External provisioner is provisioning volume for claim "default/px-check-pvc"
  Warning  ProvisioningFailed    75s (x8 over 3m22s)  pxd.portworx.com_px-csi-ext-795877cd5-qhvdp_8f2aa72a-b9db-4d3e-b7c1-d8b8e034ba08  failed to provision volume with StorageClass "px-csi-db": rpc error: code = Internal desc = Failed to create volume: could not find enough nodes to provision volume
  Normal   ExternalProvisioning  7s (x14 over 3m22s)  persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "pxd.portworx.com" or manually created by system administrator
  Normal   ExternalProvisioning  5s (x16 over 3m23s)  persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "pxd.portworx.com" or manually created by system administrator

This StorageClass has repl: "3". Only two of the three nodes provide storage, so there are probably not enough nodes to place three replicas.

$ k get sc px-csi-db -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    params/aggregation_level: Specifies the number of replication sets the volume
      can be aggregated from
    params/block_size: Block size
    params/docs: https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
    params/fs: 'Filesystem to be laid out: none|xfs|ext4'
    params/io_profile: 'IO Profile can be used to override the I/O algorithm Portworx
      uses for the volumes: db|sequential|random|cms'
    params/journal: Flag to indicate if you want to use journal device for the volume's
      metadata. This will use the journal device that you used when installing Portworx.
      It is recommended to use a journal device to absorb PX metadata writes
    params/priority_io: 'IO Priority: low|medium|high'
    params/repl: 'Replication factor for the volume: 1|2|3'
    params/secure: 'Flag to create an encrypted volume: true|false'
    params/shared: 'Flag to create a globally shared namespace volume which can be
      used by multiple pods: true|false'
    params/sticky: Flag to create sticky volumes that cannot be deleted until the
      flag is disabled
  creationTimestamp: "2023-11-24T07:25:01Z"
  name: px-csi-db
  resourceVersion: "3999"
  uid: 06e9100b-e9b7-499b-ae5e-9a84999813da
parameters:
  io_profile: db_remote
  repl: "3"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

Delete it for now.

$ k delete pvc px-check-pvc
persistentvolumeclaim "px-check-pvc" deleted

Try another StorageClass with repl: "2".

$ k get sc px-csi-replicated -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2023-11-24T07:25:01Z"
  name: px-csi-replicated
  resourceVersion: "4003"
  uid: 0851edd9-bf1e-47f7-ba0e-32b81fa0b2e5
parameters:
  repl: "2"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
cat << EOF > px-check-pvc2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: px-check-pvc2
spec:
    storageClassName: px-csi-replicated
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 2Gi
EOF
$ kubectl apply -f px-check-pvc2.yaml
persistentvolumeclaim/px-check-pvc2 created

This time it is Bound.

$ k get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
px-check-pvc2   Bound    pvc-58c03f77-9c0b-4a60-a540-8c3896557f22   2Gi        RWO            px-csi-replicated   43s
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS        REASON   AGE
pvc-58c03f77-9c0b-4a60-a540-8c3896557f22   2Gi        RWO            Delete           Bound    default/px-check-pvc2   px-csi-replicated            55s

No new EBS volumes were created, so the volume was presumably carved out of the storage pool already built on the existing ones.
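
This can be cross-checked from the Portworx side; the Portworx volume name should match the PV name (a sketch, reusing the same portworx Pod as earlier):

kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn -- /opt/pwx/bin/pxctl volume list
kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn -- /opt/pwx/bin/pxctl volume inspect pvc-58c03f77-9c0b-4a60-a540-8c3896557f22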

Run a Pod that mounts this PVC.

cat << EOF > pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-check-pvc2
EOF
$ kubectl apply -f pod1.yaml
pod/pod1 created

Check it, and try writing a file.

$ k get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          19s
$ k exec -it pod1 -- bash
root@pod1:/# ls -l /data
total 0
root@pod1:/# touch /data/test
root@pod1:/# ls -l /data
total 0
-rw-r--r-- 1 root root 0 Nov 24 11:22 test
root@pod1:/# exit
exit

Check whether a Pod mounting the same PVC can run on a different node.

$ k get node
NAME                                              STATUS   ROLES    AGE    VERSION
ip-10-0-104-77.ap-northeast-1.compute.internal    Ready    <none>   4h5m   v1.26.10-eks-e71965b
ip-10-0-110-111.ap-northeast-1.compute.internal   Ready    <none>   4h5m   v1.26.10-eks-e71965b
ip-10-0-76-90.ap-northeast-1.compute.internal     Ready    <none>   4h5m   v1.26.10-eks-e71965b
$ k get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          98s   10.0.84.13   ip-10-0-76-90.ap-northeast-1.compute.internal   <none>           <none>

Launch a Pod pinned to another node via nodeName.

cat << EOF > pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod2
  name: pod2
spec:
  nodeName: ip-10-0-104-77.ap-northeast-1.compute.internal
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-check-pvc2
EOF
$ k apply -f pod2.yaml
pod/pod2 created

We can confirm it gets stuck in ContainerCreating.

$ k get pods
NAME   READY   STATUS              RESTARTS   AGE
pod1   1/1     Running             0          4m2s
pod2   0/1     ContainerCreating   0          48s
$ k describe pod pod2
...
Events:
  Type     Reason       Age               From     Message
  ----     ------       ----              ----     -------
  Warning  FailedMount  2s (x8 over 69s)  kubelet  MountVolume.SetUp failed for volume "pvc-58c03f77-9c0b-4a60-a540-8c3896557f22" : rpc error: code = Unavailable desc = failed  to attach volume: Non-shared volume is already attached on another node. Non-shared volumes can only be attached on one node at a time.

Delete the Pods.

$ k delete pod pod1 pod2
pod "pod1" deleted
pod "pod2" deleted

Delete the PV/PVC as well.

$ k delete pvc px-check-pvc2
persistentvolumeclaim "px-check-pvc2" deleted
$ k get pvc
No resources found in default namespace.
$ k get pv
No resources found

ReadWriteMany

Create a StorageClass with sharedv4: "true".

cat << EOF > portworx-rwx-rep2.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-rwx-rep2
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
reclaimPolicy: Retain
allowVolumeExpansion: true
EOF
$ k apply -f portworx-rwx-rep2.yaml
storageclass.storage.k8s.io/portworx-rwx-rep2 created

Create a PVC.

cat << EOF > px-sharedv4-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-sharedv4-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-rwx-rep2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
$ k apply -f px-sharedv4-pvc.yaml
persistentvolumeclaim/px-sharedv4-pvc created

Check the PV/PVC.

$ k get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
px-sharedv4-pvc   Bound    pvc-43be38c0-a9fb-4c03-810a-e567929b10a9   1Gi        RWX            portworx-rwx-rep2   9s
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS        REASON   AGE
pvc-43be38c0-a9fb-4c03-810a-e567929b10a9   1Gi        RWX            Retain           Bound    default/px-sharedv4-pvc   portworx-rwx-rep2            12s

Create a Pod.

cat << EOF > pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod3
  name: pod3
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-sharedv4-pvc
EOF
$ k apply -f pod3.yaml
pod/pod3 created

Check the Pod and create a file in the volume.

$ k get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE                                              NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          20s   10.0.99.200   ip-10-0-110-111.ap-northeast-1.compute.internal   <none>           <none>
$ k exec -it pod3 -- bash
root@pod3:/# echo hello > /data/test.txt
root@pod3:/# ls -l /data/
total 4
-rw-r--r-- 1 root root 6 Nov 24 11:37 test.txt
root@pod3:/# cat /data/test.txt
hello
root@pod3:/# exit
exit

Launch a second Pod pinned to a different node.

cat << EOF > pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod4
  name: pod4
spec:
  nodeName: ip-10-0-104-77.ap-northeast-1.compute.internal
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-sharedv4-pvc
EOF
$ k apply -f pod4.yaml
pod/pod4 created

Check it. It started without issues, and the file pod3 wrote earlier is visible.

$ k get po -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                                              NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          3m18s   10.0.99.200   ip-10-0-110-111.ap-northeast-1.compute.internal   <none>           <none>
pod4   1/1     Running   0          5s      10.0.97.13    ip-10-0-104-77.ap-northeast-1.compute.internal    <none>           <none>
$ k exec -it pod4 -- bash
root@pod4:/# ls -l /data/
total 4
-rw-r--r-- 1 root root 6 Nov 24 11:37 test.txt
root@pod4:/# cat /data/test.txt
hello
root@pod4:/# echo hello2 >> /data/test.txt
root@pod4:/# cat /data/test.txt
hello
hello2
root@pod4:/# exit
exit

Check from pod3 once more. Writes from both Pods are visible.

$ k exec -it pod3 -- bash
root@pod3:/# cat /data/test.txt
hello
hello2
root@pod3:/# exit
exit

Granting access to a Cloud9 environment

A memo on how to give another person access to a Cloud9 environment.

ENV_ID=a46350721c354db29469c50050d95219
USER_ARN=arn:aws:sts::123456789012:assumed-role/Cloud9User/sotosugi-Isengard
aws cloud9 create-environment-membership --environment-id ${ENV_ID} --user-arn ${USER_ARN} --permissions read-write
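
Memberships can be listed and revoked later with the corresponding subcommands:

aws cloud9 describe-environment-memberships --environment-id ${ENV_ID}
aws cloud9 delete-environment-membership --environment-id ${ENV_ID} --user-arn ${USER_ARN}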