Shows overall cluster health and which daemons are active vs. standby (e.g. the mgr and the hot-standby MDS).
sh-4.4$ ceph status || ceph -w
  cluster:
    id:     9cc2dec6-5cbf-49c3-abdf-1eaa15ec54e2
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 3h)
    mgr: a(active, since 13m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 5 osds: 5 up (since 3h), 5 in (since 6w)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 177 pgs
    objects: 16.62k objects, 1.2 GiB
    usage:   4.0 GiB used, 5.0 TiB / 5.0 TiB avail
    pgs:     177 active+clean

  io:
    client:   2.6 KiB/s rd, 10 KiB/s wr, 3 op/s rd, 2 op/s wr
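Not part of the run above, but when the status is anything other than HEALTH_OK these read-only follow-ups (stock Ceph CLI) show the reason:

$ ceph health detail          # lists each warning/error individually
$ ceph status -f json-pretty  # same summary in machine-readable form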
2. Ceph df
disk free: cluster-wide and per-pool usage; ceph osd df below breaks the same numbers down per OSD.
sh-4.4$ ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    5.0 TiB  5.0 TiB  4.0 GiB   4.0 GiB       0.08
TOTAL  5.0 TiB  5.0 TiB  4.0 GiB   4.0 GiB       0.08

--- POOLS ---
POOL                         ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics         1    1      0 B        0      0 B      0    1.6 TiB
my-store.rgw.buckets.index    2    8  5.7 MiB       33   17 MiB      0    1.6 TiB
my-store.rgw.control          3    8      0 B        8      0 B      0    1.6 TiB
my-store.rgw.buckets.non-ec   4    8      0 B        0      0 B      0    1.6 TiB
my-store.rgw.meta             5    8  5.9 KiB       22  232 KiB      0    1.6 TiB
my-store.rgw.log              6    8  1.3 MiB      341  5.7 MiB      0    1.6 TiB
.rgw.root                     7    8  3.9 KiB       16  180 KiB      0    1.6 TiB
my-store.rgw.buckets.data     8   32   14 MiB   15.75k  210 MiB      0    1.6 TiB
replicapool                   9   32  999 MiB      402  2.9 GiB   0.06    1.6 TiB
myfs-metadata                10   32  2.2 MiB       28  6.7 MiB      0    1.6 TiB
myfs-data0                   11   32   40 MiB       14  119 MiB      0    1.6 TiB

$ ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE      RAW USE   DATA     OMAP     META     AVAIL     %USE  VAR   PGS  STATUS
 3  ssd    1.00000   1.00000  1024 GiB  1011 MiB  848 MiB  3.7 MiB  159 MiB  1023 GiB  0.10  1.23  118      up
 4  ssd    1.00000   1.00000  1024 GiB   771 MiB  646 MiB  5.5 MiB  119 MiB  1023 GiB  0.07  0.94  111      up
 0  ssd    1.00000   1.00000  1024 GiB   729 MiB  561 MiB  3.7 MiB  164 MiB  1023 GiB  0.07  0.89  101      up
 1  ssd    1.00000   1.00000  1024 GiB   800 MiB  662 MiB  5.0 MiB  133 MiB  1023 GiB  0.08  0.98   91      up
 2  ssd    1.00000   1.00000  1024 GiB   784 MiB  643 MiB  3.4 MiB  137 MiB  1023 GiB  0.07  0.96  110      up
                       TOTAL   5.0 TiB   4.0 GiB  3.3 GiB   21 MiB  713 MiB   5.0 TiB  0.08
MIN/MAX VAR: 0.89/1.23  STDDEV: 0.01
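A couple of read-only variants (stock Ceph CLI, not run above) that add detail to the same numbers:

$ ceph df detail      # per-pool view with extra columns (quotas, compression, ...)
$ ceph osd df tree    # per-OSD usage laid out under the CRUSH hierarchy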
3. ceph pg dump
Check placement group stats
sh-4.4$ ceph pg dump
version 553
stamp 2022-03-14T06:04:09.014640+0000
last_osdmap_epoch 0
last_pg_scan 0
PG_STAT  OBJECTS  MISSING_ON_PRIMARY  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  DISK_LOG  STATE         STATE_STAMP                      VERSION  REPORTED  UP       UP_PRIMARY  ACTING   ACTING_PRIMARY  LAST_SCRUB  SCRUB_STAMP                      LAST_DEEP_SCRUB  DEEP_SCRUB_STAMP                 SNAPTRIMQ_LEN
11.1f    0        0                   0         0          0        0      0            0           15   15        active+clean  2022-03-14T02:07:43.503877+0000  226'15   348:357   [3,0,1]  3           [3,0,1]  3               226'15      2022-03-13T10:01:06.343555+0000  226'15           2022-03-12T00:20:02.028276+0000  0
(omitted)
10.1d    2        0                   0         0          0        0      84           2           15   15        active+clean  2022-03-14T02:07:34.049634+0000  226'15   348:385   [4,2,1]  4           [4,2,1]  4               226'15      2022-03-13T09:19:12.249732+0000  226'15           2022-03-12T07:49:51.157528+0000  0

OSD_STAT  USED      AVAIL     USED_RAW  TOTAL     HB_PEERS   PG_SUM  PRIMARY_PG_SUM
4          764 MiB  1023 GiB   764 MiB  1024 GiB  [0,1,2,3]     111              42
3         1020 MiB  1023 GiB  1020 MiB  1024 GiB  [0,1,2,4]     118              37
2          779 MiB  1023 GiB   779 MiB  1024 GiB  [0,1,3,4]     110              36
0          722 MiB  1023 GiB   722 MiB  1024 GiB  [1,2,3,4]     101              34
1          796 MiB  1023 GiB   796 MiB  1024 GiB  [0,2,3,4]      91              28
sum        4.0 GiB   5.0 TiB   4.0 GiB   5.0 TiB

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
dumped all
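The full dump is huge; a couple of narrower read-only forms (stock Ceph CLI, not run here) are often easier to work with:

$ ceph pg dump pgs_brief           # just PG id, state, and the up/acting OSD sets
$ ceph pg ls-by-pool replicapool   # only the PGs belonging to a single pool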
4. ceph osd tree
View the CRUSH map
Shows the state of each OSD together with the CRUSH-related settings (hierarchy and weights); honestly, I don't fully understand what it all means yet.
$ ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME             STATUS  REWEIGHT  PRI-AFF
 -1         5.00000  root default
 -9         1.00000      host pretedkim02
  3    ssd  1.00000          osd.3             up   1.00000  1.00000
-11         1.00000      host pretedkim03
  4    ssd  1.00000          osd.4             up   1.00000  1.00000
 -3         1.00000      host pretedkim04
  0    ssd  1.00000          osd.0             up   1.00000  1.00000
 -7         1.00000      host pretedkim05
  1    ssd  1.00000          osd.1             up   1.00000  1.00000
 -5         1.00000      host pretedkim06
  2    ssd  1.00000          osd.2             up   1.00000  1.00000
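The tree encodes where CRUSH is allowed to place replicas. Two related read-only commands (stock Ceph CLI, not run in this post) that show the other half of the picture:

$ ceph osd crush tree        # same hierarchy, including device classes
$ ceph osd crush rule dump   # the rules that decide how replicas are spread across hosts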
5. ceph osd create || ceph osd rm
Creates and removes OSDs -> too risky, so I didn't run these; a syntax sketch follows below.
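For reference only, a sketch of what removal would look like on a plain Ceph cluster (not executed here; under Rook the operator and its OSD purge job are the usual path, and the OSD id 3 is just an example):

# not executed -- removal sketch for a plain Ceph cluster
$ ceph osd out 3                            # stop mapping data to osd.3 and let the cluster rebalance
$ ceph osd purge 3 --yes-i-really-mean-it   # remove it from the CRUSH map, auth, and the OSD map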
6. ceph osd pool create || ceph osd pool delete
storage pool: create and delete -> also too risky to run here; the syntax is sketched below.
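Again only as a sketch (the pool name testpool is made up, and nothing here was executed):

# not executed -- pool create/delete syntax sketch
$ ceph osd pool create testpool 32 32   # new pool with 32 PGs / 32 PGPs
$ ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
# deletion additionally requires mon_allow_pool_delete=true on the monitors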
7. ceph osd repair
sh-4.4$ ceph osd status
ID  HOST          USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  pretedkim04   729M  1023G       0     1638       3      105   exists,up
 1  pretedkim05   800M  1023G       0        0       0        0   exists,up
 2  pretedkim06   784M  1023G       0        0       0        0   exists,up
 3  pretedkim02  1011M  1023G       1    10.9k       0        0   exists,up
 4  pretedkim03   771M  1023G       0        0       0        0   exists,up
sh-4.4$ ceph osd repair 0
instructed osd(s) 0 to repair
sh-4.4$ ceph osd repair 01
instructed osd(s) 1 to repair
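repair is essentially a scrub that additionally tries to fix any inconsistencies it finds. The related scrub commands (stock Ceph CLI, not run in this post):

$ ceph osd scrub 0        # light scrub: verifies object metadata and sizes
$ ceph osd deep-scrub 0   # deep scrub: also reads the data and verifies checksums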
8. ceph tell osd.* bench
Benchmarks an OSD. By default the test writes 1 GB in total, in 4 MB increments.
$ ceph tell osd.* bench
osd.0: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 13.690375413,
    "bytes_per_sec": 78430414.916190296,
    "iops": 18.699268082664084
}
osd.1: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 24.235326259000001,
    "bytes_per_sec": 44304822.329398453,
    "iops": 10.563092787122358
}
(omitted)
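The totals can be overridden; a hedged example, with made-up values (the first argument is total bytes, the second is bytes per write):

$ ceph tell osd.0 bench 268435456 4194304   # 256 MiB total, written in 4 MiB blocks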
9. ceph osd crush reweight
Changes an OSD's CRUSH weight -> too scary to try yet; the syntax is sketched below.
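For reference only, a syntax sketch (osd.2 and 0.8 are made-up example values, nothing here was run):

# not run -- example values only
$ ceph osd crush reweight osd.2 0.8   # lower osd.2's CRUSH weight so it receives less data
$ ceph osd tree                       # the WEIGHT column should now show 0.80000 for osd.2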
10. ceph auth list
List cluster keys
$ ceph auth list
mds.myfs-a
        key: AQAnDQJi/80YJxAAzEkUdaV4E4WCThtqhuogZA==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow *
mds.myfs-b
        key: AQAoDQJiDGnWDhAAtEFRF4rAGbDtgI8PtmdvfQ==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow *
osd.0
        key: AQBrxvBhjQcBBBAAX7a3TFP+feCLa7fJXvIN0Q==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQBsxvBhmn24EBAADmwsXfwhrmnxg0Y8QIfvYA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQBsxvBhv2LIKBAA/oEUxXlicZ/MgRhR6jDLVA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.3
        key: AQC5xvBhfQsrHhAAiOXETcWlj6vtXEz5oez3Og==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.4
        key: AQD+xvBhtBe1CxAAyT6HeOXglm47PXi82bJbHQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCoxfBhM6mWIRAAHEkhVoxHmLbg1e2oQl8JNg==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQAGxvBh6r+jCxAAosZvt5QUuUv8zI/zsRzrOg==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQAGxvBh0cmjCxAAJ/5EK5JBXsE5S+La0FyerQ==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQAGxvBhK9OjCxAA+MUG0R5gOdUmPyKYtur/VA==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQAGxvBhj9yjCxAAOyVaODaRudmv9okcx3Tapw==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
        key: AQAGxvBh9eWjCxAALcE97ODrWzc4AVnk/LvaoQ==
        caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
        key: AQAGxvBhRe+jCxAAgupnkdlZSf+8jU45Uvt/Qg==
        caps: [mon] allow profile bootstrap-rgw
client.crash
        key: AQBlxvBhVLm7CBAAEXaTmjFdEZRk6zMD9i35Og==
        caps: [mgr] allow rw
        caps: [mon] allow profile crash
client.csi-cephfs-node
        key: AQBkxvBhbamyNxAATZrGiKWdickkge5kLFBP3Q==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQBkxvBhW/thLBAASBeVwD5xCHqpBC1c7G2iVw==
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQBkxvBhW03nIBAADKMnRuwyn53uF1bT/SUIbg==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQBkxvBhEudzFhAAalvHNc0HS0rKk9OcImbLoQ==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.rbd-mirror-peer
        key: AQBlxvBh6gRhKRAAaX+0w4dTgvi1qJDFbDgcWw==
        caps: [mon] profile rbd-mirror-peer
        caps: [osd] profile rbd
client.rgw.my.store.a
        key: AQAWzvBhS+2cKhAAgRDyRz+o1A3nySr/ywcR9A==
        caps: [mon] allow rw
        caps: [osd] allow rwx
mgr.a
        key: AQBlxvBhLNIxNBAA/seCXG2xcDbWS38z731gSQ==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.b
        key: AQBFyS5iYuH+KhAAeK/8+CDKk1svWJ8P63CsRA==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
installed auth entries:
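When you only need one entity instead of the whole list, these are handier (stock Ceph CLI, not run above):

$ ceph auth get client.admin        # key and caps for a single entity
$ ceph auth print-key client.admin  # just the key, convenient for piping into a Kubernetes secret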
Reference
https://www.redhat.com/en/blog/10-commands-every-ceph-administrator-should-know