Resolving Ceph Block Storage Errors

Both of the following messages are warnings about pool configuration in a Ceph storage cluster:

  1. POOL_PG_NUM_NOT_POWER_OF_TWO: 1 pool(s) have non-power-of-two pg_num
    This warning means at least one pool has a PG (Placement Group) count that is not a power of two. For performance reasons, Ceph recommends setting pg_num to a power of two (8, 16, 32, 64, and so on), because its internal data structures and algorithms behave better when the PG count is a power of two. To clear the warning, adjust the affected pool's pg_num to a suitable power of two (the commands sketched after this list show how to find the affected pool).
  2. POOL_APP_NOT_ENABLED: application not enabled on 1 pool(s)
    This warning means at least one pool has no application tag enabled. In Ceph, enabling an application on a pool declares what the pool is used for, for example a CephFS file system or a RadosGW gateway. If you want to use a pool for a particular application, enable the corresponding application tag on that pool.
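
Which pool is behind each warning can be read straight from the cluster. A minimal sketch using only standard Ceph CLI commands, with <pool_name> as a placeholder:

    ceph health detail                          # names the pool(s) behind each warning
    ceph osd lspools                            # list all pools
    ceph osd pool get <pool_name> pg_num        # check whether pg_num is a power of two
    ceph osd pool application get <pool_name>   # show which application tags, if any, are enabled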

How to fix:

  • For the first warning, use ceph osd pool set <pool_name> pg_num <new_pg_num> to change the pool's PG count to a power of two.
  • For the second warning, decide which application the pool serves and then enable it. For example, to enable CephFS on a pool:

    ceph osd pool application enable <pool_name> cephfs

    The exact command depends on the application you want to enable and its configuration requirements.
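
Since this post deals with block storage, the application tag to enable on an RBD pool is rbd, which is exactly what the transcript below does:

    ceph osd pool application enable <pool_name> rbd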

Back up your data before making either change and make sure you understand how it affects the existing deployment. Consult the official Ceph documentation for authoritative guidance.

[ceph-admin@ceph-node1 cluster]$ ceph osd lspools
1 cephfs_data
2 cephfs_metadata
[ceph-admin@ceph-node1 cluster]$ ceph osd pool create block_data 60
Error ERANGE:  pg_num 60 size 3 would mean 756 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
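
This rejection comes from the monitor's PG budget. A rough worked calculation, using the pg_num values that the ceph osd pool get calls further down confirm:

    # PG budget:      mon_max_pg_per_osd (250) * num_in_osds (3)        = 750
    # already in use: (128 + 64) PGs * size 3 (cephfs_data + metadata)  = 576
    # requested:      60 PGs * size 3 = 180  -> 576 + 180 = 756 > 750, rejected
    # retried below:  30 PGs * size 3 =  90  -> 576 +  90 = 666 <= 750, accepted
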
[ceph-admin@ceph-node1 cluster]$ ceph osd pool create block_data 30
pool 'block_data' created
[ceph-admin@ceph-node1 cluster]$ ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 block_data
[ceph-admin@ceph-node1 cluster]$ rbd create myimage -s 10G --image-feature layering -p block_data
[ceph-admin@ceph-node1 cluster]$ rbd ls -p block_data
myimage
[ceph-admin@ceph-node1 cluster]$ rbd info myimage -p block_data
rbd image 'myimage':
    size 10 GiB in 2560 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 10b6d0776517
    block_name_prefix: rbd_data.10b6d0776517
    format: 2
    features: layering
    op_features: 
    flags: 
    create_timestamp: Thu Mar 28 09:29:01 2024
    access_timestamp: Thu Mar 28 09:29:01 2024
    modify_timestamp: Thu Mar 28 09:29:01 2024
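
The image was deliberately created with only the layering feature, which rbd info confirms above. The kernel RBD client used by rbd map typically does not support the newer image features (exclusive-lock, object-map, fast-diff, deep-flatten), so restricting the feature set avoids a mapping failure. If an image created with the defaults refuses to map, the extra features can usually be disabled afterwards; which ones need disabling depends on the kernel version, for example:

    rbd feature disable block_data/myimage object-map fast-diff deep-flatten
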
[ceph-admin@ceph-node1 cluster]$ sudo rbd map myimage -p block_data
/dev/rbd0
[ceph-admin@ceph-node1 cluster]$ rbd showmapped
id pool       namespace image   snap device    
0  block_data           myimage -    /dev/rbd0 
[ceph-admin@ceph-node1 cluster]$ sudo  mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph-admin@ceph-node1 cluster]$ mount /dev/rbd0 /mnt
mount: only root can do that
[ceph-admin@ceph-node1 cluster]$ sudo mount /dev/rbd0 /mnt
[ceph-admin@ceph-node1 cluster]$ df -TH
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  494M     0  494M   0% /dev
tmpfs          tmpfs     510M   60M  451M  12% /dev/shm
tmpfs          tmpfs     510M   41M  469M   9% /run
tmpfs          tmpfs     510M     0  510M   0% /sys/fs/cgroup
/dev/sda3      xfs        20G  5.3G   14G  28% /
/dev/sda1      xfs       312M  160M  152M  52% /boot
tmpfs          tmpfs     102M   50k  102M   1% /run/user/0
/dev/rbd0      xfs        11G   35M   11G   1% /mnt
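
When the device is no longer needed, tear it down in reverse order (a minimal sketch; making the mapping and mount persistent across reboots is normally done via /etc/ceph/rbdmap and fstab rather than by hand):

    sudo umount /mnt
    sudo rbd unmap /dev/rbd0        # or: sudo rbd unmap block_data/myimage
    rbd showmapped                  # the image should no longer be listed
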
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get rbd pg_num
Error ENOENT: unrecognized pool 'rbd'
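
Older Ceph releases shipped with a default pool named rbd, but recent releases no longer create it, so the pg_num checks have to target the pools that actually exist, for example:

    ceph osd pool get block_data pg_num
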
[ceph-admin@ceph-node1 cluster]$ ceph -s
  cluster:
    id:     de9f4b5e-7b68-448d-b1d7-db1f233a1249
    health: HEALTH_WARN
            1 pool(s) have non-power-of-two pg_num
            application not enabled on 1 pool(s)
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 19h)
    mgr: ceph-node1(active, since 19h)
    mds: cephfs:1 {0=ceph-node1=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 19h), 3 in (since 19h)
 
  data:
    pools:   3 pools, 222 pgs
    objects: 46 objects, 14 MiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     222 active+clean
 
[ceph-admin@ceph-node1 cluster]$ ceph osd pool application enable block_data rbd
enabled application 'rbd' on pool 'block_data'
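
The tag can be verified afterwards with:

    ceph osd pool application get block_data
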
[ceph-admin@ceph-node1 cluster]$ ceph -s
  cluster:
    id:     de9f4b5e-7b68-448d-b1d7-db1f233a1249
    health: HEALTH_WARN
            1 pool(s) have non-power-of-two pg_num
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 19h)
    mgr: ceph-node1(active, since 19h)
    mds: cephfs:1 {0=ceph-node1=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 19h), 3 in (since 19h)
 
  data:
    pools:   3 pools, 222 pgs
    objects: 46 objects, 14 MiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     222 active+clean
 
[ceph-admin@ceph-node1 cluster]$ ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 block_data
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get cephfs_data pg_num
pg_num: 128
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get cephfs_metadata pg_num
pg_num: 64
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get block_data pg_num
pg_num: 30
[ceph-admin@ceph-node1 cluster]$ ceph osd pool set block_data pg_num 64
Error ERANGE: pool id 3 pg_num 64 size 3 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
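
Same budget as before: the CephFS pools already account for 576 PGs, and 64 * 3 = 192 more would give 768 > 750, while 32 * 3 = 96 gives 672, so 32 is the largest power of two this three-OSD cluster can accept for block_data. Depending on the release, pgp_num may also need to be raised to match (ceph osd pool set block_data pgp_num 32); on recent releases it follows pg_num automatically.
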
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get cephfs_data pg_num
pg_num: 128
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get cephfs_metadata pg_num
pg_num: 64
[ceph-admin@ceph-node1 cluster]$ ceph osd pool get block_data pg_num
pg_num: 30
[ceph-admin@ceph-node1 cluster]$ ceph osd pool set block_data pg_num 32
set pool 3 pg_num to 32
[ceph-admin@ceph-node1 cluster]$ ceph -s
  cluster:
    id:     de9f4b5e-7b68-448d-b1d7-db1f233a1249
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 19h)
    mgr: ceph-node1(active, since 19h)
    mds: cephfs:1 {0=ceph-node1=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 19h), 3 in (since 19h)
 
  data:
    pools:   3 pools, 224 pgs
    objects: 46 objects, 12 MiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     224 active+clean
 
[ceph-admin@ceph-node1 cluster]$ 
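
With the rbd application tag set on block_data and its pg_num rounded up to 32, both warnings clear and the cluster returns to HEALTH_OK; the PG total reported by ceph -s moves from 222 (128 + 64 + 30) to 224 (128 + 64 + 32) accordingly.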