Data management in Ceph begins with a write from the Ceph client. Because Ceph uses multiple replicas with a strong-consistency strategy to guarantee data safety and integrity, the data of a write request is first written to the primary OSD; the primary OSD then replicates it to the secondary and tertiary OSDs, waits for their completion notifications, and only then sends the final acknowledgement back to the client. This article focuses on Ceph data management and walks through a concrete example of how to find where a piece of data is stored in Ceph.

1. First, create a test file containing some data, create a Ceph pool, and set the pool's replica count to 3:

```shell
$ echo "Hello ceph, I'm learning the data management part." > /tmp/testfile
$ cat /tmp/testfile
Hello ceph, I'm learning the data management part.
$ ceph osd pool create helloceph 192 192
pool 'helloceph' created
$ ceph osd pool set helloceph size 3
set pool 3 size to 3
```

2. Write the file into the newly created pool:

```shell
$ rados -p helloceph put object1 /tmp/testfile
$ rados -p helloceph ls
object1
```

3. Look at the PG map of object1:

```shell
$ ceph osd map helloceph object1
osdmap e8 pool 'helloceph' (3) object 'object1' -> pg 3.bac5debc (3.bc) -> up ([0,1,2], p0) acting ([0,1,2], p0)
```

Here:

- `osdmap e8` — the OSD map epoch (version)
- `pool 'helloceph' (3)` — the pool's name and ID
- `object 'object1'` — the object's name
- `pg 3.bac5debc (3.bc)` — the PG number, i.e. 3.bc
- `up ([0,1,2], p0)` — the OSD up set; since we configured 3 replicas, each PG is stored on 3 OSDs
- `acting ([0,1,2], p0)` — the acting set, i.e. osd.0 (primary), osd.1 (secondary), and osd.2 (tertiary)

4. Look at the three OSDs, mainly to see which host each OSD lives on:

```shell
$ ceph osd tree
# id    weight  type name               up/down reweight
-1      3       root default
-3      3           rack unknownrack
-2      3               host server-185
0       1                   osd.0       up      1
1       1                   osd.1       up      1
2       1                   osd.2       up      1
```

Because my setup is an all-in-one environment, all of the OSDs are on the same machine.

5. Find the test file inside an OSD (using osd.1 as the example):

```shell
$ df -h | grep osd.1
/dev/mapper/ceph-osd.1   99G   37M   99G   1% /var/ceph/osd.1
$ ls -l | grep 3.bc
drwxr-xr-x 2 root root 38 2015-07-12 09:52 3.bc_head
$ ls -l
total 4
-rw-r--r-- 1 root root 51 2015-07-12 09:52 object1__head_BAC5DEBC__3
$ cat object1__head_BAC5DEBC__3
Hello ceph, I'm learning the data management part.
```

In the same way, object1 can also be found on osd.0 and osd.2.

**/var/lib/ceph/osd holds the mapping between disks and OSDs.**

6. Number of files in Ceph (via directory extended attributes):

```shell
getfattr -d -m ".*" /ceph
```
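As an aside, the PG id `3.bc` in step 3 can be reproduced from the object hash `bac5debc` shown in the `ceph osd map` output. Ceph hashes the object name (with `ceph_str_hash_rjenkins`) and then folds that hash into `pg_num` buckets with its `ceph_stable_mod` helper. The sketch below does not reimplement the name hash; it takes the hash value from the output above as given and shows only the stable-mod step:

```python
def ceph_stable_mod(x: int, b: int, bmask: int) -> int:
    """Ceph's stable_mod: fold hash x into b buckets, where
    bmask = next_power_of_two(b) - 1.

    Unlike a plain x % b, growing b toward the next power of two
    only remaps objects that fall into the newly added buckets."""
    if (x & bmask) < b:
        return x & bmask
    return x & (bmask >> 1)

# Values from the walkthrough: pool 'helloceph' was created with
# pg_num = 192, and `ceph osd map` reported object hash 0xbac5debc.
pg_num = 192
bmask = (1 << (pg_num - 1).bit_length()) - 1   # 255
pg = ceph_stable_mod(0xBAC5DEBC, pg_num, bmask)
print(f"pg 3.{pg:x}")   # pg 3.bc, matching the map output
```

Note that the hex hash also reappears in the on-disk object name `object1__head_BAC5DEBC__3` in step 5: object name, snapshot (`head`), hash, and pool ID.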