## Download the rook project
git clone https://github.com/rook/rook.git
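The following steps are run from inside the cloned repository; optionally check out a release branch first (the branch name release-1.2 is an assumption here, chosen to match the ceph v14.2.7 image used later):
cd rook
# optional: pin to a release branch instead of master (branch name assumed)
git checkout release-1.2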
## Install the rook operator
cd cluster/examples/kubernetes/ceph
kubectl apply -f common.yaml
kubectl apply -f operator.yaml
# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod
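Instead of polling by hand, kubectl wait can block until the operator pod reports Ready; this assumes the stock operator.yaml, which labels the operator pod with app=rook-ceph-operator:
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s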
To make sure the Ceph daemons only run on nodes labeled ceph-osd=enabled / ceph-mon=enabled, label the target nodes:
kubectl label node txz-data0 ceph-osd=enabled ceph-mon=enabled
kubectl label node txz-data1 ceph-osd=enabled ceph-mon=enabled
kubectl label node txz-store ceph-osd=enabled ceph-mon=enabled
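To confirm the labels were applied, list the nodes matching each label:
kubectl get nodes -l ceph-mon=enabled
kubectl get nodes -l ceph-osd=enabled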
Configure cluster.yaml as follows:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v14.2.7
  dataDirHostPath: /var/lib/rook # host path where Ceph stores its data
  mon: # Ceph monitor daemons
    count: 3
  dashboard:
    enabled: true # enable the mgr dashboard
  storage: # host-based storage: devices/directories on the host are consumed by Ceph
    useAllNodes: false # whether to deploy on every node in the cluster
    useAllDevices: false
  placement: # only allow the daemons to run on the labeled nodes
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
Finally, run kubectl apply -f cluster.yaml.
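Once applied, the operator brings up the mon, mgr, and osd pods over the next few minutes. (Note that with useAllNodes and useAllDevices both set to false, the nodes or devices to consume would normally also be listed under storage; they are omitted in the config above.) Progress and overall cluster health can be watched with:
kubectl -n rook-ceph get pod -w
kubectl -n rook-ceph get cephcluster rook-ceph
Since the dashboard is enabled, its admin password can be read from the secret created by the operator and the dashboard service can be port-forwarded locally; the secret name rook-ceph-dashboard-password, the service name rook-ceph-mgr-dashboard, and the SSL port 8443 are the Rook defaults for this version:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443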
Ceph occasionally reports that a placement group (PG) needs repair; this can be done from the toolbox pod:
$ ceph health detail
HEALTH_ERR 1 pgs inconsistent; 2 scrub errors
pg 0.6 is active+clean+inconsistent, acting [0,1,2]
2 scrub errors
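The toolbox is deployed from the same examples directory; the commands below assume the stock toolbox.yaml, which labels the tools pod with app=rook-ceph-tools. Inside the toolbox, ceph pg repair on the reported PG id asks Ceph to repair the inconsistent replica:
kubectl apply -f toolbox.yaml
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- bash
# inside the toolbox pod, repair the PG reported by ceph health detail
ceph pg repair 0.6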