How to use 赞存 block storage in kernel mode (rbd mount)
To use 赞存 block storage in kernel mode, the client server must be on the same network segment as the 赞存 Public network, and the two sides must be able to ping each other.
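As a quick sanity check, the client can ping a MON node over the business (Public) network before continuing. The address below is only a placeholder; substitute the actual Public-network IP of one of your MON nodes.
# Verify the client can reach the cluster's Public network (placeholder MON address)
ping -c 3 <MON-public-IP>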
- A newly built server is used as the client, with management-network IP address 192.168.40.105, business-network IP address 172.16.6.105, and hostname nko105.
[root@nko105 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 0c:c4:7a:08:9c:8e brd ff:ff:ff:ff:ff:ff
inet 192.168.40.105/24 brd 192.168.40.255 scope global noprefixroute eno1
valid_lft forever preferred_lft forever
inet6 fe80::ccda:1cb1:2a7e:461/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 0c:c4:7a:08:9c:8f brd ff:ff:ff:ff:ff:ff
inet 172.16.6.105/24 brd 172.16.6.255 scope global noprefixroute eno2
valid_lft forever preferred_lft forever
inet6 fe80::ec4:7aff:fe08:9c8f/64 scope link
valid_lft forever preferred_lft forever
[root@nko105 ~]#
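The kernel client also needs the rbd kernel module. As an optional check on nko105, it can be loaded and verified up front (rbd map normally loads it automatically):
# Optional: confirm the rbd kernel module is available on the client
modprobe rbd
lsmod | grep rbd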
- On a 赞存 MON node, set up passwordless SSH login to nko105 and add hostname resolution.
[root@nkocs01 ~]# ssh-copy-id root@172.16.6.105 # set up passwordless login
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@172.16.6.105's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@172.16.6.105'"
and check to make sure that only the key(s) you wanted were added.
[root@nkocs01 ~]# ssh 'root@172.16.6.105' # test the login
Last login: Mon Dec 20 15:13:06 2021 from 172.17.3.125
[root@nko105 ~]# exit
[root@nkocs01 ceph]# echo -e "172.16.6.105\tnko105" >> /etc/hosts # add hostname resolution
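A quick way to confirm both the passwordless login and the new hosts entry is to resolve the hostname and log in with it:
# Confirm the hosts entry resolves and passwordless login works by hostname
getent hosts nko105
ssh nko105 hostname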
- Install ceph on nko105 and copy the configuration file to it. Because ceph-deploy automatically calls yum to install the packages, nko105 must be able to connect to the Internet.
[root@nkocs01 ~]# cd /etc/ceph/
[root@nkocs01 ceph]# ceph-deploy install nko105 # install ceph on nko105
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy install nko105
.
.
.
[nko105][DEBUG ] Complete!
[nko105][INFO ] Running command: ceph --version
[nko105][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[root@nkocs01 ceph]# ceph-deploy config push nko105 # push the configuration file
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy config push nko105
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['nko105']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to nko105
[nko105][DEBUG ] connected to host: nko105
[nko105][DEBUG ] detect platform information from remote host
[nko105][DEBUG ] detect machine type
[nko105][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@nkocs01 ceph]#
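If ceph-deploy is not available, the configuration file can also be copied by hand; a minimal sketch, assuming /etc/ceph already exists on nko105:
# Manual alternative to "ceph-deploy config push"
scp /etc/ceph/ceph.conf root@172.16.6.105:/etc/ceph/ceph.conf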
- In 赞存, create a ceph user client.rbd with permission to access the RBD pool (blockdata here), then distribute the client.rbd keyring to nko105.
[root@nkocs01 ceph]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=blockdata'
[client.rbd]
key = AQBtLsBhUwf+NxAAIuwQXyfoPYhgg4OVlFba1g==
[root@nkocs01 ceph]# ceph auth get-or-create client.rbd | ssh root@172.16.6.105 tee /etc/ceph/ceph.client.rbd.keyring
[client.rbd]
key = AQBtLsBhUwf+NxAAIuwQXyfoPYhgg4OVlFba1g==
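The key and capabilities of the new user can be reviewed at any time from a MON node:
# Show the key and capabilities granted to client.rbd
ceph auth get client.rbd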
- At this point nko105 should be ready to act as a ceph client. Check the status of the ceph cluster from nko105 by supplying the user name and key:
[root@nko105 ~]# cd /etc/ceph
[root@nko105 ceph]# cat ceph.client.rbd.keyring >> keyring
### Since there is no default client.admin user here, a user name must be supplied to connect to the ceph cluster
[root@nko105 ceph]# ceph -s --name client.rbd
cluster:
id: ff7d7a9e-dd92-459b-9276-ef62767d98f9
health: HEALTH_WARN
noout,nobackfill,noscrub,nodeep-scrub flag(s) set
295 large omap objects
services:
mon: 3 daemons, quorum nkocs01,nkocs02,nkocs03
mgr: nkocs01(active), standbys: nkocs03, nkocs02
mds: fs-2/2/2 up {0=nkocs01-0=up:active,1=nkocs01-1=up:active}, 2 up:standby-replay, 2 up:standby
osd: 24 osds: 24 up, 24 in
flags noout,nobackfill,noscrub,nodeep-scrub
rgw: 2 daemons active
tcmu-runner: 4 daemons active
data:
pools: 10 pools, 1480 pgs
objects: 2.52M objects, 1.52TiB
usage: 4.80TiB used, 142TiB / 147TiB avail
pgs: 1480 active+clean
io:
client: 11.7KiB/s rd, 4.71KiB/s wr, 10op/s rd, 3op/s wr
[root@nko105 ceph]#
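Appending the key to /etc/ceph/keyring (one of the default keyring locations) works; alternatively, the keyring file can be passed explicitly on the command line:
# Alternative: point the ceph CLI at the keyring file explicitly
ceph -s --name client.rbd --keyring /etc/ceph/ceph.client.rbd.keyring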
- In the 赞存 management UI, create a kernel-mode block storage volume (rbd image) named kernel1.
Then check the rbd image on nko105.
[root@nko105 ceph]# rbd -p blockdata ls
rbd-blktest_dev
rbd-block
rbd-kernel1 # the newly created kernel-mode block storage
rbd-vm1
rbd-vm2
rbd-vm3
[root@nko105 ceph]#
### Check the details of the rbd image
[root@nko105 ceph]# rbd --image rbd-kernel1 -p blockdata info
rbd image 'rbd-kernel1':
size 500 GB in 128000 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.0c1aa36fc16cc5
format: 2
features: layering, exclusive-lock
flags:
[root@nko105 ceph]#
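The kernel client supports only a subset of rbd image features. The layering and exclusive-lock features shown here map without problems on this client; if rbd map on your client fails with an "image uses unsupported features" error, the offending features can be disabled on the image first, for example:
# Only needed if "rbd map" reports unsupported features on this client kernel
rbd feature disable blockdata/rbd-kernel1 exclusive-lock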
- On nko105, map the kernel-mode block storage (rbd image) to a local device, then format and mount it.
[root@nko105 ceph]# rbd map rbd-kernel1 -p blockdata
/dev/rbd0
[root@nko105 ceph]# rbd showmapped
id pool image snap device
0 blockdata rbd-kernel1 - /dev/rbd0
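### If the ceph udev rules are installed, the mapped device is also exposed under a stable pool/image path
ls -l /dev/rbd/blockdata/rbd-kernel1 # expected to point at /dev/rbd0 in this example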
[root@nko105 ceph]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 1.8T 0 part
├─centos-root 253:0 0 1.4T 0 lvm /
├─centos-swap 253:1 0 15.8G 0 lvm [SWAP]
└─centos-home 253:2 0 400G 0 lvm /home
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 4M 0 part
├─sdb2 8:18 0 4G 0 part
├─sdb3 8:19 0 1.8T 0 part
├─sdb5 8:21 0 250M 0 part
├─sdb6 8:22 0 250M 0 part
├─sdb7 8:23 0 110M 0 part
├─sdb8 8:24 0 286M 0 part
└─sdb9 8:25 0 2.5G 0 part
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part
sdd 8:48 0 1.8T 0 disk
├─sdd1 8:49 0 4M 0 part
├─sdd2 8:50 0 4G 0 part
├─sdd3 8:51 0 1.8T 0 part
├─sdd5 8:53 0 250M 0 part
├─sdd6 8:54 0 250M 0 part
├─sdd7 8:55 0 110M 0 part
├─sdd8 8:56 0 286M 0 part
└─sdd9 8:57 0 2.5G 0 part
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.8T 0 part
sdf 8:80 0 1.8T 0 disk
└─sdf1 8:81 0 1.8T 0 part
rbd0 252:0 0 500G 0 disk
[root@nko105 ceph]# mkfs.ext4 -m0 /dev/rbd0 # format the device (-m0: reserve no blocks for root)
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
32768000 inodes, 131072000 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2279604224
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@nko105 ceph]# mkdir /mnt/rbd0
[root@nko105 ceph]#
[root@nko105 ceph]# mount /dev/rbd0 /mnt/rbd0/ # mount the rbd device
[root@nko105 ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/centos-root 1.5T 2.2G 1.5T 1% /
/dev/sda1 1014M 154M 861M 16% /boot
/dev/mapper/centos-home 400G 33M 400G 1% /home
tmpfs 3.2G 0 3.2G 0% /run/user/0
/dev/rbd0 493G 73M 492G 1% /mnt/rbd0
[root@nko105 ceph]# dd if=/dev/zero of=/mnt/rbd0/dd.dat bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 85.1725 s, 126 MB/s # limited by the network, only 126 MB/s
[root@nko105 ceph]#
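When the device is no longer needed, unmount the filesystem and unmap the image. To have the mapping and mount restored automatically after a reboot, the rbdmap service shipped with ceph-common can be used; the entries below are only a sketch based on the paths used in this walkthrough (check the rbdmap man page for the options supported by your version):
# Tear down when finished
umount /mnt/rbd0
rbd unmap /dev/rbd0
# Sketch of a persistent mapping via the rbdmap service
echo "blockdata/rbd-kernel1 id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring" >> /etc/ceph/rbdmap
echo "/dev/rbd/blockdata/rbd-kernel1 /mnt/rbd0 ext4 noauto,_netdev 0 0" >> /etc/fstab
systemctl enable rbdmap.service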