
Rook Quick Start: Three-in-One Ceph Storage

Date: 2020-01-08

Quick Start

Official site: https://rook.io/

Project repository: https://github.com/rook/rook

Installing the cluster

Prepare the OSD storage media

Device  Size   Role
sdb     50GB   OSD Data
sdc     50GB   OSD Data
sdd     50GB   OSD Data
sde     50GB   OSD Metadata

> Before installing, use the commands lvm lvs, lvm vgs, and lvm pvs to check whether the disks above are already in use. If they are, remove the existing LVM objects, and make sure the disks carry no partitions or filesystems.

Make sure the kernel rbd module is loaded and install lvm2:

```shell
modprobe rbd
yum install -y lvm2
```

Install the operator

```shell
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
```

Install the Ceph cluster

```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: true
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
    nodes:
    - name: "minikube"
      devices:
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config:
        storeType: bluestore
        metadataDevice: "sde"
        databaseSizeMB: "1024"
        journalSizeMB: "1024"
        osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
```

Install the command-line toolbox

```shell
kubectl create -f toolbox.yaml
```

Inside the toolbox, run ceph -s to check the cluster status.

> When reinstalling the Ceph cluster, clean out the Rook data directory first (default: /var/lib/rook)

Add an Ingress route for the ceph-dashboard service

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
  - hosts:
    - rook-ceph.minikube.local
    secretName: rook-ceph.minikube.local
  rules:
  - host: rook-ceph.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: https-dashboard
```

Retrieve the admin account password needed to access the dashboard:

```shell
kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}' | base64 -d
```
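The command above extracts the base64-encoded password field from the Secret and decodes it. As a minimal illustration of that decoding step (the encoded value below is fabricated, not a real dashboard password):

```python
import base64

# Kubernetes Secret values are stored base64-encoded; `kubectl -o jsonpath`
# extracts the encoded string and `base64 -d` decodes it. The same step in
# Python, using a made-up example value:
encoded = "c2VjcmV0LXBhc3N3b3Jk"  # hypothetical .data.password content
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # -> secret-password
```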

Add the domain rook-ceph.minikube.local to /etc/hosts, then visit it in a browser:

https://rook-ceph.minikube.local/

Using RBD storage

Create an RBD storage pool

```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 3
```

> With only one node and three OSDs, osd is used as the failure domain
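Given the pool spec above (replicated size 3 across three 50GB OSDs), every object is written once per OSD, so usable capacity is the raw capacity divided by the replica count. A rough sketch, assuming the disk sizes from the table earlier:

```python
# Rough usable-capacity estimate for a replicated pool: each object is
# written `replica_size` times, one copy per failure domain (here, per OSD).
osd_count = 3      # sdb, sdc, sdd
osd_size_gb = 50   # per the storage media table
replica_size = 3   # `replicated.size` in the CephBlockPool spec

raw_gb = osd_count * osd_size_gb
usable_gb = raw_gb / replica_size
print(raw_gb, usable_gb)  # -> 150 50.0
```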

After creation, run ceph osd pool ls in rook-ceph-tools to see the newly created pool:

  • replicapool

Create a StorageClass backed by RBD

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```

Test mounting RBD storage through the StorageClass with a StatefulSet

```yaml
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: storageclass-rbd-test
  namespace: default
  labels:
    app: storageclass-rbd-test
spec:
  serviceName: storageclass-rbd-test  # required field in apps/v1 StatefulSet
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-rbd-test
  template:
    metadata:
      labels:
        app: storageclass-rbd-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-rbd-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-ceph-block
```

Using CephFS storage

Create the MDS service and a CephFS filesystem

```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPools:
  - failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    annotations:
    resources:
```

After creation, run ceph osd pool ls in rook-ceph-tools to see the newly created pools:

  • myfs-metadata
  • myfs-data0

Create a StorageClass backed by CephFS

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
```

Test mounting CephFS shared storage through the StorageClass with a Deployment

```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-cephfs-test
  template:
    metadata:
      labels:
        app: storageclass-cephfs-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-cephfs-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-storageclass-cephfs-test
```

Using S3 storage

Create the object storage gateway

```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPool:
    failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
```

After creation, run ceph osd pool ls in rook-ceph-tools to see the newly created pools:

  • .rgw.root
  • my-store.rgw.buckets.data
  • my-store.rgw.buckets.index
  • my-store.rgw.buckets.non-ec
  • my-store.rgw.control
  • my-store.rgw.log
  • my-store.rgw.meta

Add an Ingress route for the ceph-rgw service

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-rgw
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - rook-ceph-rgw.minikube.local
    secretName: rook-ceph-rgw.minikube.local
  rules:
  - host: rook-ceph-rgw.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-rgw-my-store
          servicePort: http
```

Add the domain rook-ceph-rgw.minikube.local to /etc/hosts, then visit it in a browser:

https://rook-ceph-rgw.minikube.local/

Using S3 users

Add an object storage user

```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"
```

Creating an object storage user also generates a Secret named after the pattern {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}, which holds that S3 user's AccessKey and SecretKey.
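The naming rule can be sketched as a small helper; for the CephObjectStoreUser defined above it yields exactly the Secret name used in the following kubectl commands:

```python
# Builds the Secret name Rook generates for a CephObjectStoreUser:
# {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}
def object_user_secret_name(namespace: str, store: str, user: str) -> str:
    return f"{namespace}-object-user-{store}-{user}"

# For the my-user / my-store example above:
print(object_user_secret_name("rook-ceph", "my-store", "my-user"))
# -> rook-ceph-object-user-my-store-my-user
```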

Retrieve the AccessKey:

```shell
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}' | base64 -d
```

Retrieve the SecretKey:

```shell
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}' | base64 -d
```

With the credentials obtained above, connect with any S3 client to start using this S3 user.
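For example, with s3cmd a minimal configuration might look like the sketch below; the key values are placeholders to be replaced with the decoded AccessKey and SecretKey, and the endpoint assumes the rook-ceph-rgw.minikube.local Ingress configured earlier:

```ini
; ~/.s3cfg (sketch) -- placeholder credentials, substitute the decoded values
access_key = REPLACE_WITH_ACCESSKEY
secret_key = REPLACE_WITH_SECRETKEY
host_base = rook-ceph-rgw.minikube.local
host_bucket = rook-ceph-rgw.minikube.local/%(bucket)
use_https = True
```

With that in place, `s3cmd ls` should list the user's buckets through the gateway.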

Using S3 buckets

Create a StorageClass backed by S3

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: default
```

> Creating PVCs from S3 storage is not currently supported; this StorageClass can only be used to create buckets

Create the corresponding ObjectBucketClaim for the StorageClass

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
```

Once the bucket is created, a Secret with the same name as the ObjectBucketClaim is generated, holding the AccessKey and SecretKey used to connect to that bucket.

Retrieve the AccessKey:

```shell
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
```

Retrieve the SecretKey:

```shell
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
```

> The S3 user obtained this way is quota-limited to a single bucket

This concludes the quick three-in-one (RBD, CephFS, S3) tour of Rook Ceph. Compared with ceph-deploy and ceph-ansible it is considerably simpler and more convenient, making it a good way for newcomers to try out Ceph; its stability remains to be proven over time, so it is not yet recommended for production use.

Original article: https://my.oschina.net/u/3390908/blog/3154960