# Installing a Ceph Cluster with Helm to Provide Backend Storage

This article is a translation of the Ceph [official documentation](http://docs.ceph.com/docs/master/start/kube-helm/); text in parentheses is the translator's commentary.

## Installation

The [ceph-helm](https://github.com/ceph/ceph-helm/) project lets you deploy Ceph in a Kubernetes environment in a managed way. This document assumes a working Kubernetes environment is already available.
## Current limitations

* The public network and the cluster network must be the same network.
* If the storage class user ID is not `admin`, you must manually create the user in the Ceph cluster and create its secret in Kubernetes (a minimal sketch follows this list).
* `ceph-mgr` can only run with 1 replica.
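For the second limitation, the manual steps might look like the sketch below, assuming a user named `k8s`, a pool named `rbd`, and a secret named `pvc-ceph-client-key` (these names match the overrides file used later in this document, but the commands here are illustrative, not part of the original text):

```bash
# Create the Ceph user and print its key
# (run from a node or pod that has a working ceph CLI and admin keyring)
ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx pool=rbd'

# Store that key in a Kubernetes secret of type kubernetes.io/rbd
# (replace <key> with the output of the previous command)
kubectl -n ceph create secret generic pvc-ceph-client-key \
  --type=kubernetes.io/rbd --from-literal=key=<key>
```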

## Installing and using Helm

Helm can be installed by following these [instructions](https://github.com/kubernetes/helm/blob/master/docs/install.md).

Helm finds the Kubernetes cluster by reading the local Kubernetes config file; make sure the file has been downloaded and is accessible to the helm client.

The Kubernetes cluster must have a Tiller server configured and running, and it must be network-reachable from the local Helm client. See the Helm documentation for [init](https://github.com/kubernetes/helm/blob/master/docs/helm/helm_init.md) for help. To run Tiller locally and connect Helm to it, run the following command (this deploys a Tiller instance into the Kubernetes cluster):

```bash
$ helm init
```
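To confirm that the client can actually reach Tiller before going further (a quick sanity check, not part of the original walkthrough), you can ask both sides for their version; the output should include both a Client and a Server line:

```bash
$ helm version
```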
The ceph-helm project uses a local Helm repo by default to store its charts. To start a local Helm repo server, run:

```bash
$ helm serve &
$ helm repo add local http://localhost:8879/charts
```

## Adding the Ceph-Helm charts to the local repo

```bash
$ git clone https://github.com/ceph/ceph-helm
$ cd ceph-helm/ceph
$ make
```

## Configuring the Ceph cluster

Create a ceph-overrides.yaml file that contains the Ceph configuration. The file can live anywhere; this document assumes it is in the user's home directory:

```bash
$ cat ~/ceph-overrides.yaml
```

```yaml
network:
  public:   172.21.0.0/20
  cluster:  172.21.0.0/20

osd_devices:
  - name: dev-sdd
    device: /dev/sdd
    zap: "1"
  - name: dev-sde
    device: /dev/sde
    zap: "1"

storageclass:
  name: ceph-rbd
  pool: rbd
  user_id: k8s
```
**Note:** If no journal device is set, the journal will be co-located on the same device as the OSD data. Also, the ceph-helm/ceph/ceph/values.yaml file contains all of the configurable options.
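To browse those options without digging through the chart source, you can also ask Helm for the chart's default values (this assumes the local repo configured above; it is a convenience, not part of the original text):

```bash
$ helm inspect values local/ceph
```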

## Creating the Ceph cluster namespace

By default, the ceph-helm components run in Kubernetes' ceph namespace. If you want a different one, customize the namespace name; for the default, run:

```bash
$ kubectl create namespace ceph
```

## Configuring RBAC permissions

Kubernetes >= v1.6 makes RBAC the default admission controller. ceph-helm provides RBAC roles and permissions for each component:

```bash
$ kubectl create -f ~/ceph-helm/ceph/rbac.yaml
```

The rbac.yaml file assumes that the Ceph cluster will be deployed in the ceph namespace.
## Labeling the kubelet nodes

The following labels need to be set before the Ceph cluster can be deployed:

```
ceph-mon=enabled
ceph-mgr=enabled
ceph-osd=enabled
ceph-osd-device-<name>=enabled
```

The ceph-osd-device-<name> labels are built from the osd_devices name values defined in our ceph-overrides.yaml. From our example above, we get the following two labels: ceph-osd-device-dev-sdd and ceph-osd-device-dev-sde.

For each Ceph Monitor node:

```bash
$ kubectl label node <nodename> ceph-mon=enabled ceph-mgr=enabled
```

For each OSD node:

```bash
$ kubectl label node <nodename> ceph-osd=enabled ceph-osd-device-dev-sdd=enabled ceph-osd-device-dev-sde=enabled
```
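Before deploying, it can be worth double-checking that every node carries the labels you expect (a verification step added in this translation, not in the original); `-L` prints each label as a column:

```bash
$ kubectl get nodes -L ceph-mon,ceph-mgr,ceph-osd,ceph-osd-device-dev-sdd,ceph-osd-device-dev-sde
```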

## Deploying Ceph

Run the helm install command to deploy Ceph:

```bash
$ helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
NAME:   ceph
LAST DEPLOYED: Wed Oct 18 22:25:06 2017
NAMESPACE: ceph
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
ceph-keystone-user-rgw  Opaque  7     1s

==> v1/ConfigMap
NAME              DATA  AGE
ceph-bin-clients  2     1s
ceph-bin          24    1s
ceph-etc          1     1s
ceph-templates    5     1s

==> v1/Service
NAME      CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
ceph-mon  None            <none>       6789/TCP  1s
ceph-rgw  10.101.219.239  <none>       8088/TCP  1s

==> v1beta1/DaemonSet
NAME              DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE-SELECTOR                                     AGE
ceph-mon          3        3        0      3           0          ceph-mon=enabled                                  1s
ceph-osd-dev-sde  3        3        0      3           0          ceph-osd-device-dev-sde=enabled,ceph-osd=enabled  1s
ceph-osd-dev-sdd  3        3        0      3           0          ceph-osd-device-dev-sdd=enabled,ceph-osd=enabled  1s

==> v1beta1/Deployment
NAME                  DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ceph-mds              1        1        1           0          1s
ceph-mgr              1        1        1           0          1s
ceph-mon-check        1        1        1           0          1s
ceph-rbd-provisioner  2        2        2           0          1s
ceph-rgw              1        1        1           0          1s

==> v1/Job
NAME                                 DESIRED  SUCCESSFUL  AGE
ceph-mgr-keyring-generator           1        0           1s
ceph-mds-keyring-generator           1        0           1s
ceph-osd-keyring-generator           1        0           1s
ceph-rgw-keyring-generator           1        0           1s
ceph-mon-keyring-generator           1        0           1s
ceph-namespace-client-key-generator  1        0           1s
ceph-storage-keys-generator          1        0           1s

==> v1/StorageClass
NAME      TYPE
ceph-rbd  ceph.com/rbd
```
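If you need this resource summary again later, Helm can re-print it for the release (a convenience command, not part of the original walkthrough):

```bash
$ helm status ceph
```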

The output of helm install shows the different kinds of resources that will be deployed.

A StorageClass named ceph-rbd of type ceph.com/rbd will be created by the ceph-rbd-provisioner Pod. This allows an RBD to be provisioned automatically when a PVC is created. The RBD device is formatted on first mount. All RBD devices use the ext4 filesystem; ceph.com/rbd does not support the fsType option. By default, RBDs use image format 2 and the layering image feature. You can override the following storageclass defaults in your values file:

```yaml
storageclass:
  name: ceph-rbd
  pool: rbd
  user_id: k8s
  user_secret_name: pvc-ceph-client-key
  image_format: "2"
  image_features: layering
```
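Once the release is deployed, you can confirm the StorageClass exists (another sanity check added in this translation):

```bash
$ kubectl get storageclass ceph-rbd
```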

Use the command below to check that all Pods are up and running. This may take a few minutes:

```bash
$ kubectl -n ceph get pods
NAME                                    READY     STATUS    RESTARTS   AGE
ceph-mds-3804776627-976z9               0/1       Pending   0          1m
ceph-mgr-3367933990-b368c               1/1       Running   0          1m
ceph-mon-check-1818208419-0vkb7         1/1       Running   0          1m
ceph-mon-cppdk                          3/3       Running   0          1m
ceph-mon-t4stn                          3/3       Running   0          1m
ceph-mon-vqzl0                          3/3       Running   0          1m
ceph-osd-dev-sdd-6dphp                  1/1       Running   0          1m
ceph-osd-dev-sdd-6w7ng                  1/1       Running   0          1m
ceph-osd-dev-sdd-l80vv                  1/1       Running   0          1m
ceph-osd-dev-sde-6dq6w                  1/1       Running   0          1m
ceph-osd-dev-sde-kqt0r                  1/1       Running   0          1m
ceph-osd-dev-sde-lp2pf                  1/1       Running   0          1m
ceph-rbd-provisioner-2099367036-4prvt   1/1       Running   0          1m
ceph-rbd-provisioner-2099367036-h9kw7   1/1       Running   0          1m
ceph-rgw-3375847861-4wr74               0/1       Pending   0          1m
```

**Note:** The MDS and RGW Pods are pending because we did not label any nodes with ceph-rgw=enabled or ceph-mds=enabled (ceph-rgw is required for the Ceph object storage feature, and ceph-mds for CephFS). Once all the other Pods are running, check the Ceph cluster status from one of the MON nodes:

```bash
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph -s
  cluster:
    id:     e8f9da03-c2d2-4ad3-b807-2a13d0775504
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mira115,mira110,mira109
    mgr: mira109(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   644 MB used, 5555 GB / 5556 GB avail
    pgs:
```
## Configuring a Pod to claim a persistent volume from Ceph

Create a keyring for the k8s user defined in ~/ceph-overrides.yaml and convert it to base64:

```bash
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- bash
# ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx pool=rbd' | base64
QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
# exit
```
Edit the user secret that already exists in the ceph namespace:

```bash
$ kubectl -n ceph edit secrets/pvc-ceph-client-key
```

Copy the base64 value into the value of the key field and save:

```yaml
apiVersion: v1
data:
  key: QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
kind: Secret
metadata:
  creationTimestamp: 2017-10-19T17:34:04Z
  name: pvc-ceph-client-key
  namespace: ceph
  resourceVersion: "8665522"
  selfLink: /api/v1/namespaces/ceph/secrets/pvc-ceph-client-key
  uid: b4085944-b4f3-11e7-add7-002590347682
type: kubernetes.io/rbd
```

We are going to create a Pod in the default namespace that uses the RBD, so copy the user secret from the ceph namespace to default:

```bash
$ kubectl -n ceph get secrets/pvc-ceph-client-key -o json | jq '.metadata.namespace = "default"' | kubectl create -f -
secret "pvc-ceph-client-key" created
$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-r43wl   kubernetes.io/service-account-token   3         61d
pvc-ceph-client-key   kubernetes.io/rbd                     1         20s
```
Create and initialize the RBD pool:

```bash
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd 256
pool 'rbd' created
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd pool init rbd
```

**Important:** Kubernetes uses the RBD kernel module to map RBDs to hosts. Luminous requires CRUSH_TUNABLES 5 (Jewel), and the minimum kernel version for these tunables is 4.5. If your kernel does not support them, run `ceph osd crush tunables hammer`.
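For example, that fallback could be run from the same MON Pod used throughout this walkthrough (the Pod name is the one from the listing above; yours will differ):

```bash
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd crush tunables hammer
```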
**Important:** Because RBDs are mapped on the host system, the host must be able to resolve the ceph-mon.ceph.svc.cluster.local name managed by the kube-dns service. To get the IP address of the kube-dns service, run `kubectl -n kube-system get svc/kube-dns`.
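A minimal sketch of wiring that up, assuming the host's /etc/resolv.conf is hand-managed (how DNS is configured varies by host, so treat the commented line as illustrative only):

```bash
# Find the cluster IP of the kube-dns service
$ kubectl -n kube-system get svc/kube-dns
# Hypothetical: forward the host's lookups to kube-dns by adding its
# CLUSTER-IP as a nameserver, e.g.:
#   echo "nameserver <CLUSTER-IP>" | sudo tee -a /etc/resolv.conf
```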

Create a PVC:

```bash
$ cat pvc-rbd.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ceph-rbd
```

```bash
$ kubectl create -f pvc-rbd.yaml
persistentvolumeclaim "ceph-pvc" created
$ kubectl get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
ceph-pvc   Bound     pvc-1c2ada50-b456-11e7-add7-002590347682   20Gi       RWO           ceph-rbd       3s
```

Check that the RBD has been created on the cluster:

```bash
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd ls
kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd info kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
rbd image 'kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915':
	size 20480 MB in 5120 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.10762ae8944a
	format: 2
	features: layering
	flags:
	create_timestamp: Wed Oct 18 22:45:59 2017
```
Create a Pod that uses this PVC:

```bash
$ cat pod-with-rbd.yaml
```

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - mountPath: "/mnt/rbd"
          name: vol1
  volumes:
    - name: vol1
      persistentVolumeClaim:
        claimName: ceph-pvc
```

```bash
$ kubectl create -f pod-with-rbd.yaml
pod "mypod" created
```

Check the Pod:

```bash
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
mypod     1/1       Running   0          17s
$ kubectl exec mypod -- mount | grep rbd
/dev/rbd0 on /mnt/rbd type ext4 (rw,relatime,stripe=1024,data=ordered)
```
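As a final end-to-end check (added in this translation, not in the original), you can write a file through the mount and read it back:

```bash
$ kubectl exec mypod -- sh -c 'echo hello > /mnt/rbd/test && cat /mnt/rbd/test'
hello
```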

## Logs

OSD and Monitor logs can be accessed with the kubectl logs [-f] command. Monitors have multiple logging streams, and each stream is accessible from a container inside the ceph-mon Pod.

Three containers run in each ceph-mon Pod: ceph-mon, equivalent to ceph-mon.hostname.log on a physical machine; cluster-audit-log-tailer, equivalent to ceph.audit.log on a physical machine; and cluster-log-tailer, equivalent to ceph.log or ceph -w on a physical machine. Each container can be selected with the --container or -c option. For example, to access the cluster-log-tailer stream, run:

```bash
$ kubectl -n ceph logs ceph-mon-cppdk -c cluster-log-tailer
```
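Similarly, an OSD's log can be followed directly from its Pod (the Pod name below is one from the earlier listing; substitute your own):

```bash
$ kubectl -n ceph logs -f ceph-osd-dev-sdd-6dphp
```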