Following the documentation below, let's try out Portworx on EKS.
## Creating the IAM policy
Create an IAM policy for Portworx, to be attached to the nodes later.
```sh
cat << EOF > portworx-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:ModifyVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeTags",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumesModifications",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "autoscaling:DescribeAutoScalingGroups"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
```
```sh
aws iam create-policy --policy-name portworx-policy --policy-document file://portworx-policy.json
```
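A malformed policy document only surfaces as an error from `create-policy`, so it can be worth validating the JSON locally first. A minimal sketch (the trimmed-down policy here is illustrative, not the full policy above):

```shell
# Write an illustrative (trimmed) policy document and check that it parses as JSON.
cat << 'EOF' > /tmp/policy-sample.json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Sid": "", "Effect": "Allow", "Action": ["ec2:DescribeVolumes"], "Resource": ["*"] }
  ]
}
EOF
# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/policy-sample.json > /dev/null && echo "valid JSON"
```

The same check can be run against the real `portworx-policy.json` before calling `aws iam create-policy`.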
## Creating the cluster
```sh
CLUSTER_NAME="portworx"
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.26"
vpc:
  cidr: "10.0.0.0/16"
availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
iam:
  withOIDC: true
EOF
```
```sh
eksctl create cluster -f cluster.yaml
```
Create the nodes.
```sh
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
managedNodeGroups:
  - name: m1
    minSize: 3
    maxSize: 3
    desiredCapacity: 3
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
        - arn:aws:iam::${AWS_ACCOUNT_ID}:policy/portworx-policy
EOF
```
```sh
eksctl create nodegroup -f m1.yaml
```
Grant cluster access to the Admin role as well.
```sh
CLUSTER_NAME="portworx"
USER_NAME="Admin:{{SessionName}}"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin"
eksctl create iamidentitymapping --cluster ${CLUSTER_NAME} \
  --arn ${ROLE_ARN} \
  --username ${USER_NAME} \
  --group system:masters
```
Check the nodes.
```
$ k get node
NAME                                              STATUS   ROLES    AGE     VERSION
ip-10-0-104-77.ap-northeast-1.compute.internal    Ready    <none>   2m21s   v1.26.10-eks-e71965b
ip-10-0-110-111.ap-northeast-1.compute.internal   Ready    <none>   2m20s   v1.26.10-eks-e71965b
ip-10-0-76-90.ap-northeast-1.compute.internal     Ready    <none>   2m22s   v1.26.10-eks-e71965b
```
Check the Pods.
```
$ k get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6c862             1/1     Running   0          2m29s
kube-system   aws-node-rj6vf             1/1     Running   0          2m31s
kube-system   aws-node-tqxzf             1/1     Running   0          2m30s
kube-system   coredns-6cbf959cdb-pmwcc   1/1     Running   0          15m
kube-system   coredns-6cbf959cdb-xsm45   1/1     Running   0          15m
kube-system   kube-proxy-88gxp           1/1     Running   0          2m30s
kube-system   kube-proxy-khpmr           1/1     Running   0          2m29s
kube-system   kube-proxy-rmw4j           1/1     Running   0          2m31s
```
## Installing Portworx
Create an account on Portworx Central and generate the manifests.
Create the Namespace and apply the generated manifests.
```
$ kubectl create ns portworx
namespace/portworx created

$ kubectl apply -f 'https://install.portworx.com/3.0?comp=pxoperator&kbver=1.26.10&ns=portworx'
serviceaccount/portworx-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created

$ kubectl apply -f 'https://install.portworx.com/3.0?operator=true&mc=false&kbver=1.26.10&ns=portworx&oem=esse&user=270feccd-5901-405a-88e1-dba1f4b46359&b=true&iop=6&s=%22type%3Dgp3%2Csize%3D150%22&c=px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c&eks=true&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true'
storagecluster.core.libopenstorage.org/px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c created
secret/px-essential created
```
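The long install URLs are simply the Portworx spec generator parameterized via the query string. The breakdown below is an assumption inferred from the parameter names (e.g. `kbver` for the Kubernetes version, `ns` for the namespace), sketched for the operator URL above:

```shell
# Rebuild the operator install URL from its parts (illustrative).
PX_VER="3.0"       # Portworx release line
KB_VER="1.26.10"   # Kubernetes server version
NS="portworx"      # namespace the operator is installed into
URL="https://install.portworx.com/${PX_VER}?comp=pxoperator&kbver=${KB_VER}&ns=${NS}"
echo "${URL}"
```

The second, longer URL carries additional parameters the spec generator encoded from the choices made in Portworx Central (gp3 drives of 150 GiB, EKS mode, Stork, CSI, monitoring, and so on).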
It takes a fair amount of time for all of the Pods to start.
```
$ k -n portworx get po
NAME                                                    READY   STATUS    RESTARTS        AGE
autopilot-c9fb4b9b6-mcn9n                               1/1     Running   0               6m5s
portworx-api-d6pws                                      1/1     Running   0               6m1s
portworx-api-k9hwl                                      1/1     Running   0               6m1s
portworx-api-tk82c                                      1/1     Running   0               6m1s
portworx-kvdb-jjwdd                                     1/1     Running   0               2m58s
portworx-kvdb-wp4v9                                     1/1     Running   0               2m48s
portworx-operator-579774cc76-tpwfw                      1/1     Running   0               8m30s
portworx-pvc-controller-7cb6d6c596-gvfcs                1/1     Running   0               6m
portworx-pvc-controller-7cb6d6c596-pgb9g                1/1     Running   0               6m
portworx-pvc-controller-7cb6d6c596-smv5j                1/1     Running   0               6m
prometheus-px-prometheus-0                              2/2     Running   0               5m53s
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn   2/2     Running   3 (3m34s ago)   6m
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-bxw86   2/2     Running   3 (3m32s ago)   6m
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-v5rm9   2/2     Running   3 (3m28s ago)   6m
px-csi-ext-795877cd5-5lljv                              4/4     Running   0               6m5s
px-csi-ext-795877cd5-qhvdp                              4/4     Running   0               6m5s
px-csi-ext-795877cd5-x7c5j                              4/4     Running   0               6m5s
px-prometheus-operator-6c4db44757-wbw62                 1/1     Running   0               6m2s
px-telemetry-phonehome-5qzk9                            2/2     Running   0               2m39s
px-telemetry-phonehome-6szb5                            2/2     Running   0               2m39s
px-telemetry-phonehome-db5vj                            2/2     Running   0               2m39s
px-telemetry-registration-79b94c999b-f56kq              2/2     Running   0               2m39s
stork-559cd7f58b-6zhns                                  1/1     Running   0               6m10s
stork-559cd7f58b-bw958                                  1/1     Running   0               6m10s
stork-559cd7f58b-l799w                                  1/1     Running   0               6m10s
stork-scheduler-5947f85df5-m84c7                        1/1     Running   0               6m10s
stork-scheduler-5947f85df5-mfv5l                        1/1     Running   0               6m10s
stork-scheduler-5947f85df5-xkdvr                        1/1     Running   0               6m10s
```
## Monitoring the storage nodes
Confirm that the StorageNodes are Online.
```
$ kubectl -n portworx get storagenode
NAME                                              ID                                     STATUS   VERSION           AGE
ip-10-0-104-77.ap-northeast-1.compute.internal    5783748c-f126-4ff3-9c9c-9f811567b91f   Online   3.0.4.0-1396ef3   6m16s
ip-10-0-110-111.ap-northeast-1.compute.internal   8760a747-977d-4447-b685-c89d2ae7fa28   Online   3.0.4.0-1396ef3   6m16s
ip-10-0-76-90.ap-northeast-1.compute.internal     a46b8dcc-cf64-4ffa-ae68-6cafba903387   Online   3.0.4.0-1396ef3   6m16s
```
Try describing them.
```
$ kubectl -n portworx describe storagenode
Name:         ip-10-0-104-77.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5932
  UID:                     58a52e81-de86-4236-ab76-f45aa481aad0
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:30Z
    Message:               node is kvdb member listening on 10.0.104.77
    Status:                Online
    Type:                  NodeKVDB
    Last Transition Time:  2023-11-24T07:28:14Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.104.77
    Mgmt IP:  10.0.104.77
  Node Attributes:
    Kvdb:            true
    Storage:         true
  Node UID:          5783748c-f126-4ff3-9c9c-9f811567b91f
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  150Gi
    Used Size:   8063801099
Events:          <none>

Name:         ip-10-0-110-111.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5595
  UID:                     3322e854-134b-4152-b156-611b87e1be42
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:04Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.110.111
    Mgmt IP:  10.0.110.111
  Node Attributes:
    Kvdb:            false
    Storage:         false
  Node UID:          8760a747-977d-4447-b685-c89d2ae7fa28
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  0
    Used Size:   0
Events:          <none>

Name:         ip-10-0-76-90.ap-northeast-1.compute.internal
Namespace:    portworx
Labels:       controller-revision-hash=6c4cfcbc8c
              name=portworx
Annotations:  <none>
API Version:  core.libopenstorage.org/v1
Kind:         StorageNode
Metadata:
  Creation Timestamp:  2023-11-24T07:25:02Z
  Generation:          2
  Owner References:
    API Version:           core.libopenstorage.org/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
    UID:                   8be97a19-2a12-4a28-a54b-4de47f928620
  Resource Version:        5597
  UID:                     252c6501-e2e9-47a8-8845-0b2a1d7f4cc4
Spec:
  Cloud Storage:
  Version:  3.0.4.0-1396ef3
Status:
  Conditions:
    Last Transition Time:  2023-11-24T07:28:04Z
    Message:               node is kvdb leader listening on 10.0.76.90
    Status:                Online
    Type:                  NodeKVDB
    Last Transition Time:  2023-11-24T07:28:04Z
    Status:                Online
    Type:                  NodeState
  Kernel Version:          5.10.198-187.748.amzn2.x86_64
  Network:
    Data IP:  10.0.76.90
    Mgmt IP:  10.0.76.90
  Node Attributes:
    Kvdb:            true
    Storage:         true
  Node UID:          a46b8dcc-cf64-4ffa-ae68-6cafba903387
  Operating System:  Amazon Linux 2
  Phase:             Online
  Storage:
    Total Size:  150Gi
    Used Size:   8063801099
Events:          <none>
```
## Checking the status
Check the status of the StorageCluster.
```
$ kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Healthy
Metering: Healthy
License: PX-Essential (lease renewal in 23h, 56m)
Node ID: a46b8dcc-cf64-4ffa-ae68-6cafba903387
	IP: 10.0.76.90
 	Local Storage Pool: 1 pool
	POOL	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE	REGION
	0	HIGH		raid0		150 GiB	7.5 GiB	Online	ap-northeast-1a	ap-northeast-1
	Local Storage Devices: 1 device
	Device	Path		Media Type		Size		Last-Scan
	0:1	/dev/nvme1n1	STORAGE_MEDIUM_NVME	150 GiB		24 Nov 23 07:27 UTC
	total	-		150 GiB
	Cache Devices:
	 * No cache devices
	Kvdb Device:
	Device Path	Size
	/dev/nvme2n1	32 GiB
	 * Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
	Cluster ID: px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c
	Cluster UUID: 9ae75a0d-c91f-41b6-91cb-161ce9ac3a76
	Scheduler: kubernetes
	Total Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
	IP	ID	SchedulerNodeName	Auth	StorageNode	Used	Capacity	Status	StorageStatus	Version	Kernel	OS
	10.0.76.90	a46b8dcc-cf64-4ffa-ae68-6cafba903387	ip-10-0-76-90.ap-northeast-1.compute.internal	Disabled	Yes	7.5 GiB	150 GiB	Online	Up (This node)	3.0.4.0-1396ef3	5.10.198-187.748.amzn2.x86_64	Amazon Linux 2
	10.0.104.77	5783748c-f126-4ff3-9c9c-9f811567b91f	ip-10-0-104-77.ap-northeast-1.compute.internal	Disabled	Yes	7.5 GiB	150 GiB	Online	Up	3.0.4.0-1396ef3	5.10.198-187.748.amzn2.x86_64	Amazon Linux 2
	10.0.110.111	8760a747-977d-4447-b685-c89d2ae7fa28	ip-10-0-110-111.ap-northeast-1.compute.internal	Disabled	No	0 B	0 B	Online	No Storage	3.0.4.0-1396ef3	5.10.198-187.748.amzn2.x86_64	Amazon Linux 2
Global Storage Pool
	Total Used    	:  15 GiB
	Total Capacity	:  300 GiB
```
Apart from the nodes' own volumes, four additional EBS volumes have been created.
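That count lines up with the layout `pxctl status` reported: each of the two storage nodes appears to have received one 150 GiB gp3 data drive plus one 32 GiB dedicated kvdb drive (an inference from the `/dev/nvme1n1` and `/dev/nvme2n1` devices shown above):

```shell
# Expected number of extra EBS volumes: one data drive and one kvdb drive per storage node.
storage_nodes=2    # nodes reporting "Storage: true"
drives_per_node=2  # 150 GiB data drive + 32 GiB kvdb drive
echo "expected extra EBS volumes: $((storage_nodes * drives_per_node))"
```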
```
$ kubectl -n portworx get storagecluster
NAME                                              CLUSTER UUID                           STATUS    VERSION   AGE
px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c   9ae75a0d-c91f-41b6-91cb-161ce9ac3a76   Running   3.0.4     10m

$ kubectl -n portworx get storagenode
NAME                                              ID                                     STATUS   VERSION           AGE
ip-10-0-104-77.ap-northeast-1.compute.internal    5783748c-f126-4ff3-9c9c-9f811567b91f   Online   3.0.4.0-1396ef3   8m54s
ip-10-0-110-111.ap-northeast-1.compute.internal   8760a747-977d-4447-b685-c89d2ae7fa28   Online   3.0.4.0-1396ef3   8m54s
ip-10-0-76-90.ap-northeast-1.compute.internal     a46b8dcc-cf64-4ffa-ae68-6cafba903387   Online   3.0.4.0-1396ef3   8m54s
```
```
$ kubectl -n portworx exec px-cluster-1bac3950-3d44-4359-bb9a-dd4d4f1ae82c-6zwdn -- /opt/pwx/bin/pxctl cluster provision-status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
NODE ID					IP		HOSTNAME					NODE STATUS	POOL						POOL STATUS	IO_PRIORITY	SIZE	AVAILABLE	USED	PROVISIONED	ZONE		REGION		RACK
5783748c-f126-4ff3-9c9c-9f811567b91f	10.0.104.77	ip-10-0-104-77.ap-northeast-1.compute.internal	Up		0 ( 9485edb5-756e-49c3-8e68-64ae78e94f4b )	Online	HIGH		150 GiB	142 GiB		7.5 GiB	0 B		ap-northeast-1c	ap-northeast-1	default
a46b8dcc-cf64-4ffa-ae68-6cafba903387	10.0.76.90	ip-10-0-76-90.ap-northeast-1.compute.internal	Up		0 ( 5568ee00-9629-440a-8f50-79aef47fda0d )	Online	HIGH		150 GiB	142 GiB		7.5 GiB	0 B		ap-northeast-1a	ap-northeast-1	default
```
## Verification
### ReadWriteOnce
Check the StorageClasses.
```
$ kubectl get storageclass
NAME                                 PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                        kubernetes.io/aws-ebs           Delete          WaitForFirstConsumer   false                  25m
px-csi-db                            pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-cloud-snapshot             pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-cloud-snapshot-encrypted   pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-encrypted                  pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-local-snapshot             pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-db-local-snapshot-encrypted   pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-replicated                    pxd.portworx.com                Delete          Immediate              true                   10m
px-csi-replicated-encrypted          pxd.portworx.com                Delete          Immediate              true                   10m
px-db                                kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-cloud-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-cloud-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-encrypted                      kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-local-snapshot                 kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-db-local-snapshot-encrypted       kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-replicated                        kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
px-replicated-encrypted              kubernetes.io/portworx-volume   Delete          Immediate              true                   10m
stork-snapshot-sc                    stork-snapshot                  Delete          Immediate              true                   10m
```
Create a PVC.
```sh
cat << EOF > px-check-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-check-pvc
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
```
```
$ kubectl apply -f px-check-pvc.yaml
persistentvolumeclaim/px-check-pvc created
```
Checking the PVC shows that it is stuck in Pending.
```
$ k get pvc -A
NAMESPACE   NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     px-check-pvc   Pending                                      px-csi-db      3m6s

$ k describe pvc px-check-pvc
Name:          px-check-pvc
Namespace:     default
StorageClass:  px-csi-db
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: pxd.portworx.com
               volume.kubernetes.io/storage-provisioner: pxd.portworx.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                  From                                                                               Message
  ----     ------                ----                 ----                                                                               -------
  Normal   Provisioning          75s (x8 over 3m23s)  pxd.portworx.com_px-csi-ext-795877cd5-qhvdp_8f2aa72a-b9db-4d3e-b7c1-d8b8e034ba08   External provisioner is provisioning volume for claim "default/px-check-pvc"
  Warning  ProvisioningFailed    75s (x8 over 3m22s)  pxd.portworx.com_px-csi-ext-795877cd5-qhvdp_8f2aa72a-b9db-4d3e-b7c1-d8b8e034ba08   failed to provision volume with StorageClass "px-csi-db": rpc error: code = Internal desc = Failed to create volume: could not find enough nodes to provision volume
  Normal   ExternalProvisioning  7s (x14 over 3m22s)  persistentvolume-controller                                                        waiting for a volume to be created, either by external provisioner "pxd.portworx.com" or manually created by system administrator
  Normal   ExternalProvisioning  5s (x16 over 3m23s)  persistentvolume-controller                                                        waiting for a volume to be created, either by external provisioner "pxd.portworx.com" or manually created by system administrator
```
This StorageClass has `repl: "3"`, so there may not be enough storage nodes: as `pxctl status` showed, only two of the three nodes contribute storage.
```
$ k get sc px-csi-db -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    params/aggregation_level: Specifies the number of replication sets the volume can be aggregated from
    params/block_size: Block size
    params/docs: https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
    params/fs: 'Filesystem to be laid out: none|xfs|ext4'
    params/io_profile: 'IO Profile can be used to override the I/O algorithm Portworx uses for the volumes: db|sequential|random|cms'
    params/journal: Flag to indicate if you want to use journal device for the volume's metadata. This will use the journal device that you used when installing Portworx. It is recommended to use a journal device to absorb PX metadata writes
    params/priority_io: 'IO Priority: low|medium|high'
    params/repl: 'Replication factor for the volume: 1|2|3'
    params/secure: 'Flag to create an encrypted volume: true|false'
    params/shared: 'Flag to create a globally shared namespace volume which can be used by multiple pods: true|false'
    params/sticky: Flag to create sticky volumes that cannot be deleted until the flag is disabled
  creationTimestamp: "2023-11-24T07:25:01Z"
  name: px-csi-db
  resourceVersion: "3999"
  uid: 06e9100b-e9b7-499b-ae5e-9a84999813da
parameters:
  io_profile: db_remote
  repl: "3"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
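The failure is consistent with the replica-placement rule that a volume with `repl: N` needs at least N nodes contributing storage. A trivial sketch of that check, with the `Storage:` attributes hard-coded from the `describe storagenode` output above:

```shell
# Count nodes whose StorageNode reports Storage: true (values copied from the describe output).
storage_flags="true false true"
count=0
for f in $storage_flags; do
  if [ "$f" = "true" ]; then
    count=$((count + 1))
  fi
done
echo "nodes with storage: ${count}"
```

With only two storage nodes, a replication factor of 3 cannot be satisfied, which matches the "could not find enough nodes to provision volume" error.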
Delete it for now.
```
$ k delete pvc px-check-pvc
persistentvolumeclaim "px-check-pvc" deleted
```
Try a different StorageClass that has `repl: "2"`.
```
$ k get sc px-csi-replicated -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2023-11-24T07:25:01Z"
  name: px-csi-replicated
  resourceVersion: "4003"
  uid: 0851edd9-bf1e-47f7-ba0e-32b81fa0b2e5
parameters:
  repl: "2"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
```sh
cat << EOF > px-check-pvc2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-check-pvc2
spec:
  storageClassName: px-csi-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
```
```
$ kubectl apply -f px-check-pvc2.yaml
persistentvolumeclaim/px-check-pvc2 created
```
This time it became Bound.
```
$ k get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
px-check-pvc2   Bound    pvc-58c03f77-9c0b-4a60-a540-8c3896557f22   2Gi        RWO            px-csi-replicated   43s
```
```
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS        REASON   AGE
pvc-58c03f77-9c0b-4a60-a540-8c3896557f22   2Gi        RWO            Delete           Bound    default/px-check-pvc2   px-csi-replicated            55s
```
The number of EBS volumes has not increased, so the volume was presumably carved out of the storage pool already created on the existing drives.
Run a Pod that mounts this PVC.
```sh
cat << EOF > pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-check-pvc2
EOF
```
```
$ kubectl apply -f pod1.yaml
pod/pod1 created
```
Check it.
```
$ k get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          19s

$ k exec -it pod1 -- bash
root@pod1:/# ls -l /data
total 0
root@pod1:/# touch /data/test
root@pod1:/# ls -l /data
total 0
-rw-r--r-- 1 root root 0 Nov 24 11:22 test
root@pod1:/# exit
exit
```
Check whether a Pod mounting the same PVC can run on a different node.
```
$ k get node
NAME                                              STATUS   ROLES    AGE    VERSION
ip-10-0-104-77.ap-northeast-1.compute.internal    Ready    <none>   4h5m   v1.26.10-eks-e71965b
ip-10-0-110-111.ap-northeast-1.compute.internal   Ready    <none>   4h5m   v1.26.10-eks-e71965b
ip-10-0-76-90.ap-northeast-1.compute.internal     Ready    <none>   4h5m   v1.26.10-eks-e71965b

$ k get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          98s   10.0.84.13   ip-10-0-76-90.ap-northeast-1.compute.internal   <none>           <none>
```
Start the Pod with a node name specified.
```sh
cat << EOF > pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod2
  name: pod2
spec:
  nodeName: ip-10-0-104-77.ap-northeast-1.compute.internal
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-check-pvc2
EOF
```
```
$ k apply -f pod2.yaml
pod/pod2 created
```
We can confirm that the Pod gets stuck.
```
$ k get pods
NAME   READY   STATUS              RESTARTS   AGE
pod1   1/1     Running             0          4m2s
pod2   0/1     ContainerCreating   0          48s

$ k describe pod pod2
...
Events:
  Type     Reason       Age               From     Message
  ----     ------       ----              ----     -------
  Warning  FailedMount  2s (x8 over 69s)  kubelet  MountVolume.SetUp failed for volume "pvc-58c03f77-9c0b-4a60-a540-8c3896557f22" : rpc error: code = Unavailable desc = failed to attach volume: Non-shared volume is already attached on another node. Non-shared volumes can only be attached on one node at a time.
```
Delete the Pods.
```
$ k delete pod pod1 pod2
pod "pod1" deleted
pod "pod2" deleted
```
Delete the PV/PVC as well.
```
$ k delete pvc px-check-pvc2
persistentvolumeclaim "px-check-pvc2" deleted
$ k get pvc
No resources found in default namespace.
$ k get pv
No resources found
```
### ReadWriteMany
Create a StorageClass with `sharedv4: "true"`.
```sh
cat << EOF > portworx-rwx-rep2.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-rwx-rep2
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
reclaimPolicy: Retain
allowVolumeExpansion: true
EOF
```
```
$ k apply -f portworx-rwx-rep2.yaml
storageclass.storage.k8s.io/portworx-rwx-rep2 created
```
Create a PVC.
```sh
cat << EOF > px-sharedv4-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-sharedv4-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-rwx-rep2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```
```
$ k apply -f px-sharedv4-pvc.yaml
persistentvolumeclaim/px-sharedv4-pvc created
```
Check the PV/PVC.
```
$ k get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
px-sharedv4-pvc   Bound    pvc-43be38c0-a9fb-4c03-810a-e567929b10a9   1Gi        RWX            portworx-rwx-rep2   9s

$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS        REASON   AGE
pvc-43be38c0-a9fb-4c03-810a-e567929b10a9   1Gi        RWX            Retain           Bound    default/px-sharedv4-pvc   portworx-rwx-rep2            12s
```
Create a Pod.
```sh
cat << EOF > pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod3
  name: pod3
spec:
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-sharedv4-pvc
EOF
```
```
$ k apply -f pod3.yaml
pod/pod3 created
```
Check the Pod, and create a file while we are at it.
```
$ k get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE                                              NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          20s   10.0.99.200   ip-10-0-110-111.ap-northeast-1.compute.internal   <none>           <none>

$ k exec -it pod3 -- bash
root@pod3:/# echo hello > /data/test.txt
root@pod3:/# ls -l /data/
total 4
-rw-r--r-- 1 root root 6 Nov 24 11:37 test.txt
root@pod3:/# cat /data/test.txt
hello
root@pod3:/# exit
exit
```
Start a second Pod with a node name specified.
```sh
cat << EOF > pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod4
  name: pod4
spec:
  nodeName: ip-10-0-104-77.ap-northeast-1.compute.internal
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-sharedv4-pvc
EOF
```
```
$ k apply -f pod4.yaml
pod/pod4 created
```
Check it. The Pod started without any problem, and the file written from pod3 earlier is also visible.
```
$ k get po -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                                              NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          3m18s   10.0.99.200   ip-10-0-110-111.ap-northeast-1.compute.internal   <none>           <none>
pod4   1/1     Running   0          5s      10.0.97.13    ip-10-0-104-77.ap-northeast-1.compute.internal    <none>           <none>

$ k exec -it pod4 -- bash
root@pod4:/# ls -l /data/
total 4
-rw-r--r-- 1 root root 6 Nov 24 11:37 test.txt
root@pod4:/# cat /data/test.txt
hello
root@pod4:/# echo hello2 >> /data/test.txt
root@pod4:/# cat /data/test.txt
hello
root@pod4:/# exit
exit
```
Check from pod3 once more. We can confirm that the writes from both Pods have been applied.
```
$ k exec -it pod3 -- bash
root@pod3:/# cat /data/test.txt
hello
hello2
root@pod3:/# exit
exit
```