Versions: rook-1.7.5, k8s 1.22
Key processes
- ceph-mon: Cluster monitor. Tracks active/failed nodes and maintains the master copy of the Ceph storage cluster map.
- ceph-mds: Metadata server. Stores metadata for inodes and directories (directory and file names in the filesystem, and their mapping to objects stored in the RADOS cluster).
- ceph-osd: Object storage devices. Store the actual file contents, and also monitor OSD health and report it back to the monitors.
- ceph-rgw: RESTful gateways. The interface that exposes the object storage layer externally.
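In a Rook deployment these daemons run as pods, so a quick way to eyeball them is by label (a sketch assuming the default rook-ceph namespace and Rook's standard app labels):
$ kubectl -n rook-ceph get pods -l app=rook-ceph-mon
$ kubectl -n rook-ceph get pods -l app=rook-ceph-osd
$ kubectl -n rook-ceph get pods -l app=rook-ceph-mds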
Creating the StorageClass
1. Problem: PV/PVC stuck in Pending because the StorageClass had not been created
2. Fix: create the cephfs and ceph-block StorageClasses
Problem: PV/PVC stuck in Pending because the StorageClass had not been created
# example from airflow
$ cat airflowlog-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-airflow-0
  namespace: airflow #-webinar
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: rook-ceph-block
$ k describe pvc log-airflow-0 -n airflow
Name: log-airflow-0
Namespace: airflow
StorageClass: rook-ceph-block
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 3m53s (x342 over 89m) persistentvolume-controller storageclass.storage
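The event message is cut off above, but it points at the StorageClass lookup failing. You can confirm the class is actually missing with a direct get (the exact NotFound wording may vary by kubectl version):
$ k get sc rook-ceph-block
Error from server (NotFound): storageclasses.storage.k8s.io "rook-ceph-block" not found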
# creating the storage class fixes this
Fix: create the cephfs and ceph-block StorageClasses
- kubectl create -f rbd/storageclass.yaml (run from cluster/examples/kubernetes/ceph/csi; newer Rook releases ship the same file under deploy/examples/csi/)
$ cd ~/deploy/rook-ceph/rook-1.7.5/cluster/examples/kubernetes/ceph/csi
$ ll
total 8
drwxrwxr-x 2 bigdata bigdata 4096 Jan 24 14:52 cephfs
drwxrwxr-x 2 bigdata bigdata 4096 Jan 24 14:52 rbd
drwxrwxr-x 4 bigdata bigdata 31 Jan 24 14:52 template
$ kubectl create -f rbd/storageclass.yaml
$ kubectl create -f cephfs/storageclass.yaml
$ k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 6m57s
rook-ceph-delete-bucket rook-ceph.ceph.rook.io/bucket Delete Immediate false 8d
rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 4m57s
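Optionally, if you want PVCs that omit storageClassName to land on rook-ceph-block, mark it as the default class (this is the standard Kubernetes annotation, not anything Rook-specific):
$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'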
Creating the CephFilesystem
1. Problem: with cephfs, PVCs are not provisioned and stay Pending
2. Fix: this happens because rook-ceph-mds was never created
Problem: with cephfs, PVCs are not provisioned and stay Pending
$ k get pvc -n sts-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-data-disk Bound pvc-4973968b-dfa5-498b-8db8-df90d4dd02f8 10Gi RWO rook-ceph-block 21h
spark-driver-localdir-pvc Pending rook-cephfs 20h
spark-driver-pvc Pending rook-cephfs 20h
spark-exec-localdir-pvc Pending rook-cephfs 20h
spark-exec-pvc Pending rook-cephfs 20h
$ k describe pvc spark-driver-pvc -n sts-ns
Name: spark-driver-pvc
Namespace: sts-ns
StorageClass: rook-cephfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 57s (x345 over 21h) rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-689686b44-8swtg_1bc39aa6-b4b2-4e97-b905-f259e677a1e4 External provisioner is provisioning volume for claim "sts-ns/spark-driver-pvc"
Normal ExternalProvisioning 7s (x5044 over 21h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator
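When a PVC sits in this "waiting for a volume to be created" state, the next place to look is the CSI provisioner and the MDS pods on the Rook side. A couple of checks along these lines (deployment and container names follow the stock Rook manifests):
$ kubectl -n rook-ceph get pods | grep -e csi-cephfsplugin-provisioner -e mds
$ kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner --tail=20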
Fix: this happens because rook-ceph-mds was never created
Create the CephFilesystem
$ k get cephfilesystem -n rook-ceph
No resources found in rook-ceph namespace.
$ k create -f filesystem.yaml
cephfilesystem.ceph.rook.io/myfs created
$ k get cephfilesystem -n rook-ceph
NAME ACTIVEMDS AGE PHASE
myfs 1 20s Ready
$ k get all -n rook-ceph | grep myfs
rook-ceph-mds-myfs-a-7f85df9757-6xd2k 1/1 Running 0 5m34s
rook-ceph-mds-myfs-b-676d6bc468-xq8xw 1/1 Running 0 5m33s
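For reference, the filesystem.yaml used here is the stock example that ships with Rook; a minimal version looks roughly like this (pool sizes and activeStandby are the upstream example defaults, tune them for your cluster). activeCount: 1 plus activeStandby: true is why two MDS pods (myfs-a and myfs-b) show up above.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true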
Delete the previously Pending PVCs and recreate them
$ k get pvc -n sts-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-data-disk Bound pvc-4973968b-dfa5-498b-8db8-df90d4dd02f8 10Gi RWO rook-ceph-block 21h
spark-driver-localdir-pvc Pending rook-cephfs 21h
spark-driver-pvc Pending rook-cephfs 21h
spark-exec-localdir-pvc Pending rook-cephfs 21h
spark-exec-pvc Pending rook-cephfs 21h
$ k delete pvc spark-driver-localdir-pvc spark-driver-pvc spark-exec-localdir-pvc spark-exec-pvc -n sts-ns
persistentvolumeclaim "spark-driver-localdir-pvc" deleted
persistentvolumeclaim "spark-driver-pvc" deleted
persistentvolumeclaim "spark-exec-localdir-pvc" deleted
persistentvolumeclaim "spark-exec-pvc" deleted
$ k get pvc -n sts-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-data-disk Bound pvc-4973968b-dfa5-498b-8db8-df90d4dd02f8 10Gi RWO rook-ceph-block 21h
spark-driver-localdir-pvc Bound pvc-b04572c9-ac92-41e5-a1d2-579ae0d64f30 12Gi RWX rook-cephfs 2s
spark-driver-pvc Bound pvc-d788ccb2-9604-4044-bafd-21836ab3e28c 12Gi RWX rook-cephfs 2s
spark-exec-localdir-pvc Bound pvc-ad893d61-76dc-49a5-8e81-e247f74042ed 15Gi RWX rook-cephfs 2s
spark-exec-pvc Bound pvc-825c2d90-c288-457a-bec0-ded637e67879 15Gi RWX rook-cephfs 2s
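As a final sanity check, you can mount one of the freshly bound claims from a throwaway pod (a minimal sketch; the pod name and image are arbitrary):
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-mount-test   # hypothetical name, anything works
  namespace: sts-ns
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "df -h /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: spark-driver-pvc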