Ceph Study Notes 2: Using Ceph as the Backend Storage for Kolla-Ansible
Environment
- For using Kolla-Ansible, see "Deploying OpenStack Pike on a Single CentOS 7 Node with Kolla-Ansible";
- For deploying the Ceph service, see "Ceph Study Notes 1: Multi-Node Deployment of the Mimic Release".
Configuring Ceph
- Log in as the osdev user:
$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/
Creating Pools
Creating the Image Pool
- Used to store Glance images:
$ ceph osd pool create images 32 32
pool 'images' created
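- A note on the two 32 arguments: they are pg_num and pgp_num. By the usual rule of thumb, total PGs across all pools ≈ (OSD count × 100) / replica size, rounded to a power of two; with this cluster's 3 OSDs and, assuming the default 3-way replication, that gives a budget of roughly 128 PGs, so 32 per pool for the four pools created here is a reasonable fit for a small test cluster.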
Creating the Volume Pools
- Used to store Cinder volumes:
$ ceph osd pool create volumes 32 32
pool 'volumes' created
- Used to store Cinder volume backups:
$ ceph osd pool create backups 32 32
pool 'backups' created
Creating the VM Pool
- Used to store virtual machine system disks:
$ ceph osd pool create vms 32 32
pool 'vms' created
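- Since Luminous, Ceph also expects every pool to be tagged with the application that uses it, otherwise ceph status raises a health warning. A minimal sketch tagging the four new pools for RBD use:
$ ceph osd pool application enable images rbd
$ ceph osd pool application enable volumes rbd
$ ceph osd pool application enable backups rbd
$ ceph osd pool application enable vms rbd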
Listing Pools
$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
Creating Users
Listing Users
- List all users:
$ ceph auth list
installed auth entries:

mds.osdev01
    key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.osdev02
    key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.osdev03
    key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQD9JH5bbPi6IRAA7DbwaCh5JBaa6RfWPoe9VQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
    key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
    caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
    key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
    caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
    key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
    caps: [mon] allow rw
    caps: [osd] allow rwx
client.rgw.osdev02
    key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
    caps: [mon] allow rw
    caps: [osd] allow rwx
client.rgw.osdev03
    key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
    caps: [mon] allow rw
    caps: [osd] allow rwx
mgr.osdev01
    key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.osdev02
    key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.osdev03
    key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
- Show a specific user:
$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
    key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
Creating the Glance User
- Create the glance user and grant it access to the images pool:
$ ceph auth get-or-create client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
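- Creating a user and setting its caps can also be collapsed into a single get-or-create call that writes the keyring in one step; a sketch of the equivalent shortcut (the same pattern applies to the cinder and nova users below):
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' -o /opt/ceph/deploy/ceph.client.glance.keyring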
- Inspect and save the glance user's keyring file:
$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
Creating the Cinder Users
- Create the cinder-volume user and grant it access to the volumes pool:
$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
- Inspect and save the cinder-volume user's keyring file:
$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
- Create the cinder-backup user and grant it access to both the volumes and backups pools (the backup service reads source volumes from volumes while writing backups to backups):
$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
- Inspect and save the cinder-backup user's keyring file:
$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup
Creating the Nova User
- Create the nova user and grant it access to the vms pool:
$ ceph auth get-or-create client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
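- Note that with caps on the vms pool only, nova-compute cannot copy-on-write clone Glance images out of the images pool and will fall back to building instances from a full copy of the image. If fast cloning is wanted, read access to images can be granted as well; an optional tweak, not part of this walkthrough:
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms, allow rx pool=images'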
- Inspect and save the nova user's keyring file:
$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
    caps mon = "allow r"
    caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
Configuring Kolla-Ansible
- Log in to the osdev01 deployment node as root and set up the environment variables:
$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig
Global Configuration
- Edit globals.yml and disable Kolla's own Ceph deployment (the external cluster configured above will be used instead):
enable_ceph: "no"
- Enable the Cinder service, and enable the Ceph backends for Glance, Cinder, and Nova:
enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
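- A trivial sanity check of the toggles before moving on:
$ grep -E 'enable_ceph|enable_cinder|backend_ceph' globals.yml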
Configuring Glance
- Configure Glance to use the Ceph images pool via the glance user:
$ mkdir -pv config/glance
mkdir: created directory "config/glance"
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
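- If you later want Cinder or Nova to clone images via copy-on-write instead of copying them, Glance must also expose image locations; a commonly used optional setting, not part of the original configuration:
[DEFAULT]
show_image_direct_url = True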
- Add Glance's Ceph client configuration and the glance user's keyring file:
$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"
Configuring Cinder
- Configure the Cinder volume service to access the volumes pool as the Ceph user cinder-volume, and the Cinder backup service to access the backups pool as cinder-backup:
$ mkdir -pv config/cinder/
mkdir: created directory "config/cinder/"
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
- Add the Ceph client configuration and keyring files for the Cinder volume and backup services:
$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory "config/cinder/cinder-backup/"
mkdir: created directory "config/cinder/cinder-volume/"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder-volume.keyring"
Configuring Nova
- Configure Nova to use the vms pool via the nova user:
$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
- Add Nova's Ceph client configuration and the nova user's keyring file; the cinder-volume keyring is also installed under the name ceph.client.cinder.keyring, which is what Kolla's Nova role expects for attaching Cinder volumes:
$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"
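- Before deploying, the whole override tree can be sanity-checked; going by the steps above, the Ceph-related files should list roughly as follows:
$ find config -name 'ceph.*'
config/glance/ceph.conf
config/glance/ceph.client.glance.keyring
config/cinder/ceph.conf
config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
config/nova/ceph.conf
config/nova/ceph.client.nova.keyring
config/nova/ceph.client.cinder.keyring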
Deployment and Testing
Running the Deployment
- Edit the deployment script osdev.sh:
#!/bin/bash
set -uexv

usage() {
    echo -e "usage : \n$0 <action>"
    echo -e " \$1 action"
}

if [ $# -lt 1 ]; then
    usage
    exit 1
fi

${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1
- Make it executable:
$ chmod a+x osdev.sh
- Deploy the OpenStack cluster (the final, commented-out destroy line is for tearing the deployment down again):
$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh "destroy --yes-i-really-really-mean-it"
- Check the deployed services:
$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron     | network        |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat        | orchestration  |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn    | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2    | volumev2       |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi     | metric         |
| 7ae6f98018fb4d509e862e45ebf10145 | glance      | image          |
| a0ec333149284c09ac0e157753205fd6 | nova        | compute        |
| b15e90c382864723945b15c37d3317a6 | placement   | placement      |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3    | volumev3       |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone    | identity       |
| edf5c8b894a74a69b65bb49d8e014fff | cinder      | volume         |
+----------------------------------+-------------+----------------+
$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02           | nova | enabled | up    | 2018-08-27T11:33:27.000000 |
| cinder-volume    | rbd:volumes@rbd-1 | nova | enabled | up    | 2018-08-27T11:33:18.000000 |
| cinder-backup    | osdev02           | nova | enabled | up    | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+
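- To confirm that the overrides and keyrings actually landed inside the containers, they can be inspected directly; a sketch assuming Kolla's default container names cinder_volume and nova_compute:
$ sudo docker exec cinder_volume grep -A 8 '\[rbd-1\]' /etc/cinder/cinder.conf
$ sudo docker exec nova_compute ls /etc/ceph/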
Initializing the Environment
- Check the initial state of the RBD pools; all of them are empty:
$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
- Set the environment variables and initialize the OpenStack environment:
$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
- Inspect the newly added image:
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at       | 2018-08-27T11:25:29Z |
| disk_format      | qcow2 |
| file             | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file |
| id               | 293b25bb-30be-4839-b4e2-1dba3c43a56a |
| min_disk         | 0 |
| min_ram          | 0 |
| name             | cirros |
| owner            | 68ada1726a864e2081a56be0a2dca3a0 |
| properties       | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected        | False |
| schema           | /v2/schemas/image |
| size             | 12716032 |
| status           | active |
| tags             | |
| updated_at       | 2018-08-27T11:25:30Z |
| virtual_size     | None |
| visibility       | public |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
- Check the RBD pools again: the image is stored in the images pool, and it carries one snapshot:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
    size 12 MiB in 2 objects
    order 23 (8 MiB objects)
    id: 178f4008d95
    block_name_prefix: rbd_data.178f4008d95
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME   SIZE TIMESTAMP
     6 snap 12 MiB Mon Aug 27 19:25:30 2018
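- Glance keeps the image as an RBD image with a protected snapshot named snap, which is what copy-on-write clones would hang off. Clones can be listed with rbd children; empty output is expected at this point:
$ rbd children images/293b25bb-30be-4839-b4e2-1dba3c43a56a@snap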
Creating a Virtual Machine
- Create a virtual machine:
$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | 65cVBJ7S6yaD                                  |
| config_drive                        |                                               |
| created                             | 2018-08-27T11:29:03Z                          |
| flavor                              | m1.tiny (1)                                   |
| hostId                              |                                               |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308          |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-08-27T11:29:03Z                          |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | osdev03                                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03                                                  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-08-27T11:29:16.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.11                                       |
| config_drive                        |                                                          |
| created                             | 2018-08-27T11:29:03Z                                     |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308                     |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-08-27T11:29:16Z                                     |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
- The virtual machine's system disk has been created as a volume in the vms pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
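- The new disk can be inspected the same way as the image earlier. Since the nova user here has no read access to the images pool, the disk should be a flat copy rather than a clone, i.e. rbd info should show no parent line; a quick check:
$ rbd -p vms info 309f1364-4d58-413d-a865-dfc37ff04308_disk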
- Log in to the node hosting the VM. The VM's system disk is the volume created in the vms pool, and the process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:
$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running

$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
...
$ ps -aux | grep qemu
42436 2678909 4.6 0.0 1341144 171404 ? Sl 19:29 0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
    librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
    libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)
Creating a Volume
- Create a volume:
$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-08-27T11:33:52.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | c7111728fbbd4fd79bdd2b60e7d7cb42     |
+---------------------+--------------------------------------+
- Check the pool state: the newly created volume is placed in the volumes pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
Creating a Backup
- Create a volume backup; as shown below, it is created in the backups pool:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | f2321578-88d5-4337-b93c-798855b817ce |
| name  | None                                 |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2018-08-27T11:39:40.000000           |
| data_timestamp        | 2018-08-27T11:39:40.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental        | False                                |
| name                  | None                                 |
| object_count          | 0                                    |
| size                  | 1                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-27T11:39:46.000000           |
| volume_id             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
- Create another backup: the backups pool itself does not change; a new snapshot is simply added to the existing backup base image:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 07132063-9bdb-4391-addd-a791dae2cfea |
| name  | None                                 |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME                                                            SIZE TIMESTAMP
     4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018
     5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
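- This works because the Ceph backup driver stores each backup as an RBD snapshot diff against a shared base image, so a second full backup consumes almost no extra space. rbd du makes this visible by comparing provisioned and actually used sizes; a quick check:
$ rbd du -p backups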
Attaching the Volume
- Attach the new volume to the VM created earlier:
$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                          | Value |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments                    | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone              | nova |
| bootable                       | false |
| consistencygroup_id            | None |
| created_at                     | 2018-08-27T11:33:52.000000 |
| description                    | None |
| encrypted                      | False |
| id                             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status               | None |
| multiattach                    | False |
| name                           | volume1 |
| os-vol-host-attr:host          | rbd:volumes@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id   | 68ada1726a864e2081a56be0a2dca3a0 |
| properties                     | attached_mode='rw' |
| replication_status             | None |
| size                           | 1 |
| snapshot_id                    | None |
| source_volid                   | None |
| status                         | in-use |
| type                           | None |
| updated_at                     | 2018-08-27T11:44:52.000000 |
| user_id                        | c7111728fbbd4fd79bdd2b60e7d7cb42 |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- On the node hosting the VM, check the libvirt domain definition again: a new RBD disk has been added:
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <auth username='cinder-volume'>
        <secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
...
- Create a floating IP for the VM and log in to it over SSH:
$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value                                                                               |
+-------+-------------------------------------------------------------------------------------+
| type  | novnc                                                                               |
| url   | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-08-27T11:49:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.52                       |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id                  | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name                | 192.168.162.52                       |
| port_id             | None                                 |
| project_id          | 68ada1726a864e2081a56be0a2dca3a0     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2018-08-27T11:49:02Z                 |
+---------------------+--------------------------------------+
$ openstack server add floating ip demo1 192.168.162.52
$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                           | Image  | Flavor  |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9
(username "cirros", password "gocubsgo")
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11
$ sudo passwd root
Changing password for root
New password:
Bad password: too weak
Retype password:
Password for root changed by root
$ su -
Password:
- Create a filesystem on the new disk, write a test file, and finally unmount it:
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    1G  0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                975.9M      1.3M    907.4M   0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
Detaching the Volume
- Detach the volume and watch the change inside the VM:
$ openstack server remove volume demo1 volume1
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
- Map and mount the RBD volume on a host, then inspect the file created earlier inside the VM; its contents are identical:
$ rbd showmapped
id pool image    snap device
0  rbd  rbd_test -    /dev/rbd0
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test  lost+found/
$ cat /mnt/volume1/ceph_rbd_test
hello openstack, volume test.
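- When finished, unmount and unmap the device so the volume is again managed only by Cinder; minimal cleanup:
$ umount /mnt/volume1
$ rbd unmap /dev/rbd1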