OpenStack and Ceph (OSD deployment)

Once the monitors are running, you need to add OSDs. The cluster can only reach the active + clean state when it has enough OSDs to replicate objects;
for example, with osd pool size = 2, at least 2 OSDs are required.
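
You can watch for this from any node that holds the admin keyring; a minimal check using the standard ceph CLI:

ceph health          # reports HEALTH_OK / HEALTH_WARN / HEALTH_ERR
ceph osd stat        # shows how many OSDs exist and how many are up and in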

After the monitors start, the cluster has a default CRUSH map, but that CRUSH map does not yet map any Ceph OSD daemons to Ceph nodes.

Ceph ships the ceph-disk utility, which can prepare a disk, partition, or directory for use by Ceph. ceph-disk assigns OSD IDs by incrementing an index,
and it also adds the new OSD to the CRUSH map.
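
For reference, the ceph-disk flow would look roughly like the sketch below (the device name /dev/sdb is an assumption); this article does not use it and instead builds each OSD by hand in the steps that follow:

# Prepare a whole disk (partition, format, tag for Ceph), then activate it as an OSD
ceph-disk prepare --fs-type xfs /dev/sdb
ceph-disk activate /dev/sdb1
ceph-disk list          # show how ceph-disk classifies the attached devices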

Goal

Create one OSD for every independent disk in every machine; the data replica count for this deployment is 3.

1. System initialization

Install the CentOS 7.1 operating system
Define the hostnames and IP addresses of every host in the cluster in /etc/hosts
Make sure time is synchronized across the nodes
Make sure iptables and SELinux are both disabled (see the sketch after this list)
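
A minimal sketch of these initialization steps on CentOS 7.1 (the NTP server name is an assumption, replace it with your own):

# Disable SELinux immediately and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Stop and disable firewall services if present
systemctl stop firewalld iptables 2>/dev/null
systemctl disable firewalld iptables 2>/dev/null

# One-shot time sync against an assumed internal NTP server
yum install -y ntpdate
ntpdate ntp.example.com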

2. Prepare the disks

Partition the disks that will be used for Ceph storage and format them. Refer to the script below and run it on every node; a quick verification follows the script.

#!/bin/bash
LANG=en_US
# Collect every data disk except the system disk sda (e.g. /dev/sdb /dev/sdc ...)
disk=`fdisk -l | grep ^Disk | grep sectors | grep sd | grep -v sda | awk -F'[: ]' '{print $2}' | sort`
yum install -y hdparm
for partition in $disk
do
  # Wipe the first 100 MB, create a GPT label and one XFS partition spanning the disk
  dd if=/dev/zero of=$partition bs=1M count=100
  parted -s $partition mklabel gpt
  parted $partition mkpart primary xfs 1 100%
  # Re-read the partition table, then format the new first partition (e.g. /dev/sdb1)
  hdparm -z "$partition"1
  mkfs.xfs -f -i size=512  "$partition"1
done
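
A quick check on any node after the script finishes: every data disk should now carry a single xfs-formatted partition.

lsblk -f | grep xfs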

3. Create the OSDs

Each independent physical disk has now been formatted.
We add an entry to /etc/fstab for every disk so that it is mounted at its corresponding OSD directory and becomes part of the Ceph cluster storage;
one dedicated OSD is created for each independent disk.

#!/bin/bash
LANG=en_US
num=0
for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        # List the GPT data partitions on each node (the system disk sda is skipped)
        diskpart=`ssh $ip "fdisk -l  | grep GPT | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                # Allocate the next OSD id; ids are assumed to come back sequentially,
                # which is why the local counter $num is kept in step with them
                ssh $ip "ceph osd create"
                ssh $ip "mkdir /var/lib/ceph/osd/ceph-$num"
                ssh $ip "echo $partition  /var/lib/ceph/osd/ceph-$num   xfs defaults 0 0 >> /etc/fstab"
                let num++
        done
        ssh $ip "mount -a"
done

Reboot and verify that all mounts come up correctly.
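
A minimal check after the reboot, run from the admin node over ssh (same node list as the script above):

for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        # Every OSD data directory should be backed by an xfs mount
        ssh $ip "hostname; mount | grep /var/lib/ceph/osd"
done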

4. Ceph configuration file

[global]
fsid = dc4f91c1-8792-4948-b68f-2fcea75f53b9
mon initial members = hh-yun-ceph-cinder015-128055, hh-yun-ceph-cinder017-128057, hh-yun-ceph-cinder024-128074
mon host = 10.199.128.55, 10.199.128.57, 10.199.128.74
public network = 192.168.209.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
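
This [global] section lives in /etc/ceph/ceph.conf and has to be identical on every node; a minimal sketch for distributing it, assuming it was first written on the admin node:

for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        scp /etc/ceph/ceph.conf $ip:/etc/ceph/ceph.conf
done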

5. Initialize the OSD data directories



#!/bin/bash
LANG=en_US
num=0

for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        diskpart=`ssh $ip "fdisk -l  | grep GPT | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                # Create the OSD on-disk layout and its key inside the mounted data directory;
                # the --osd-uuid must match the cluster fsid from ceph.conf
                ssh $ip "ceph-osd -i $num --mkfs --mkkey --osd-uuid dc4f91c1-8792-4948-b68f-2fcea75f53b9"
                let num++
        done
done

Verification result

[root@hh-yun-ceph-cinder015-128055 tmp]# ls /var/lib/ceph/osd/ceph*
/var/lib/ceph/osd/ceph-0:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-1:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-2:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-3:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-4:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-5:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-6:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-7:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-8:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
/var/lib/ceph/osd/ceph-9:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

6. Register the OSD authentication keys

#!/bin/bash
LANG=en_US
num=0
for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        diskpart=`ssh $ip "fdisk -l  | grep GPT | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                # Register the key generated by --mkkey so the OSD can authenticate with the monitors
                ssh $ip "ceph auth add osd.$num osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-$num/keyring"
                let num++
        done
done
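
To confirm the registration you can read the keys back from any admin node:

ceph auth get osd.0     # show a single OSD's key and capabilities
ceph auth list          # dump every registered entity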

Sample output

[root@hh-yun-ceph-cinder015-128055 tmp]# ./authosd.sh
added key for osd.0
added key for osd.1
added key for osd.2
added key for osd.3
added key for osd.4
added key for osd.5
added key for osd.6
added key for osd.7
added key for osd.8
added key for osd.9
added key for osd.10
added key for osd.11
added key for osd.12
added key for osd.13
added key for osd.14
added key for osd.15
added key for osd.16
added key for osd.17
added key for osd.18
added key for osd.19
added key for osd.20
added key for osd.21
added key for osd.22
added key for osd.23
added key for osd.24
added key for osd.25
added key for osd.26
added key for osd.27
added key for osd.28
added key for osd.29
added key for osd.30
added key for osd.31
added key for osd.32
added key for osd.33
added key for osd.34
added key for osd.35
added key for osd.36
added key for osd.37
added key for osd.38
added key for osd.39
added key for osd.40
added key for osd.41
added key for osd.42
added key for osd.43
added key for osd.44
added key for osd.45
added key for osd.46
added key for osd.47
added key for osd.48
added key for osd.49
added key for osd.50
added key for osd.51
added key for osd.52
added key for osd.53
added key for osd.54
added key for osd.55
added key for osd.56
added key for osd.57
added key for osd.58
added key for osd.59
added key for osd.60
added key for osd.61
added key for osd.62
added key for osd.63
added key for osd.64
added key for osd.65
added key for osd.66
added key for osd.67
added key for osd.68
added key for osd.69

7. Add the Ceph nodes to the CRUSH map and place them under the root bucket

#!/bin/bash
for host in hh-yun-ceph-cinder015-128055 hh-yun-ceph-cinder016-128056 hh-yun-ceph-cinder017-128057 hh-yun-ceph-cinder023-128073 hh-yun-ceph-cinder024-128074 hh-yun-ceph-cinder025-128075 hh-yun-ceph-cinder026-128076
do
  # Create a host bucket for each node and place it under the default root
  ceph osd crush add-bucket $host host
  ceph osd crush move $host root=default
done

Sample output

[root@hh-yun-ceph-cinder015-128055 tmp]# ./hostmap.sh
added bucket hh-yun-ceph-cinder015-128055 type host to crush map
moved item id -2 name 'hh-yun-ceph-cinder015-128055' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder016-128056 type host to crush map
moved item id -3 name 'hh-yun-ceph-cinder016-128056' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder017-128057 type host to crush map
moved item id -4 name 'hh-yun-ceph-cinder017-128057' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder023-128073 type host to crush map
moved item id -5 name 'hh-yun-ceph-cinder023-128073' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder024-128074 type host to crush map
moved item id -6 name 'hh-yun-ceph-cinder024-128074' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder025-128075 type host to crush map
moved item id -7 name 'hh-yun-ceph-cinder025-128075' to location {root=default} in crush map
added bucket hh-yun-ceph-cinder026-128076 type host to crush map
moved item id -8 name 'hh-yun-ceph-cinder026-128076' to location {root=default} in crush map

8. Manage the CRUSH map

Once the OSDs are added to the CRUSH map they can start receiving data. Alternatively, you can edit the CRUSH map offline: decompile it, add the OSDs as devices, add the hosts,

assign weights to the devices, then recompile it and load it back (the offline route is sketched after the script below).

#!/bin/bash
LANG=en_US
num=0
for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        diskpart=`ssh $ip "fdisk -l  | grep GPT | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                # Add each OSD under its host bucket with a weight of 1.0
                hostname=`ssh $ip hostname -s`
                ceph osd crush add osd.$num 1.0 root=default host=$hostname
                let num++
        done
done
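
The offline route mentioned above, a minimal sketch using the stock CRUSH tooling (the file names are arbitrary):

# Export, decompile, edit, recompile and re-inject the CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
vi crushmap.txt                           # add devices, hosts and weights by hand
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new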

9. Start the OSDs

#!/bin/bash
LANG=en_US
num=0
for ip in 240.30.128.55 240.30.128.56 240.30.128.57 240.30.128.73 240.30.128.74 240.30.128.75 240.30.128.76
do
        diskpart=`ssh $ip "fdisk -l  | grep GPT | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                # The empty sysvinit marker tells the init script that it manages this OSD
                ssh $ip "touch /var/lib/ceph/osd/ceph-$num/sysvinit"
                ssh $ip "/etc/init.d/ceph start osd.$num"
                let num++
        done
done

10. Verify the cluster state

[root@hh-yun-ceph-cinder017-128057 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      70      root default
-2      10              host hh-yun-ceph-cinder015-128055
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
9       1                       osd.9   up      1
-3      10              host hh-yun-ceph-cinder016-128056
10      1                       osd.10  up      1
11      1                       osd.11  up      1
12      1                       osd.12  up      1
13      1                       osd.13  up      1
14      1                       osd.14  up      1
15      1                       osd.15  up      1
16      1                       osd.16  up      1
17      1                       osd.17  up      1
18      1                       osd.18  up      1
19      1                       osd.19  up      1
-4      10              host hh-yun-ceph-cinder017-128057
20      1                       osd.20  up      1
21      1                       osd.21  up      1
22      1                       osd.22  up      1
23      1                       osd.23  up      1
24      1                       osd.24  up      1
25      1                       osd.25  up      1
26      1                       osd.26  up      1
27      1                       osd.27  up      1
28      1                       osd.28  up      1
29      1                       osd.29  up      1
-5      10              host hh-yun-ceph-cinder023-128073
30      1                       osd.30  up      1
31      1                       osd.31  up      1
32      1                       osd.32  up      1
33      1                       osd.33  up      1
34      1                       osd.34  up      1
35      1                       osd.35  up      1
36      1                       osd.36  up      1
37      1                       osd.37  up      1
38      1                       osd.38  up      1
39      1                       osd.39  up      1
-6      10              host hh-yun-ceph-cinder024-128074
40      1                       osd.40  up      1
41      1                       osd.41  up      1
42      1                       osd.42  up      1
43      1                       osd.43  up      1
44      1                       osd.44  up      1
45      1                       osd.45  up      1
46      1                       osd.46  up      1
47      1                       osd.47  up      1
48      1                       osd.48  up      1
49      1                       osd.49  up      1
-7      10              host hh-yun-ceph-cinder025-128075
50      1                       osd.50  up      1
51      1                       osd.51  up      1
52      1                       osd.52  up      1
53      1                       osd.53  up      1
54      1                       osd.54  up      1
55      1                       osd.55  up      1
56      1                       osd.56  up      1
57      1                       osd.57  up      1
58      1                       osd.58  up      1
59      1                       osd.59  up      1
-8      10              host hh-yun-ceph-cinder026-128076
60      1                       osd.60  up      1
61      1                       osd.61  up      1
62      1                       osd.62  up      1
63      1                       osd.63  up      1
64      1                       osd.64  up      1
65      1                       osd.65  up      1
66      1                       osd.66  up      1
67      1                       osd.67  up      1
68      1                       osd.68  up      1
69      1                       osd.69  up      1
[root@hh-yun-ceph-cinder015-128055 tmp]# ceph -s
    cluster dc4f91c1-8792-4948-b68f-2fcea75f53b9
     health HEALTH_WARN too few pgs per osd (2 < min 20)
     monmap e1: 3 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0}, election epoch 8, quorum 0,1,2 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074
     osdmap e226: 70 osds: 70 up, 70 in
      pgmap v265: 192 pgs, 3 pools, 0 bytes data, 0 objects
            74632 MB used, 254 TB / 254 TB avail
                 192 active+clean
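
The HEALTH_WARN above only means the default pools have too few placement groups for 70 OSDs. A minimal sketch of the usual fix (the pool name rbd and the pg count 2048 are assumptions, size them to your pools and OSD count):

# Raise pg_num first, then pgp_num to match, and re-check health
ceph osd pool set rbd pg_num 2048
ceph osd pool set rbd pgp_num 2048
ceph -s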