## Foreword: TokuDB's advantages "in the cloud"

To lower users' data storage costs, Aliyun RDS added TokuDB engine support (on MySQL 5.6) in April 2015, making it the first RDS to support TokuDB.

As we know, once an instance's data grows beyond the TB level, storage and operations costs become very high; instance migration and backup in particular take a very long time.

After smoothly migrating several large (TB-level) InnoDB instances to TokuDB, the data size dropped from 2TB+ to 400GB, about one fifth of the original space cost, with no loss in read/write performance (write performance actually improved quite a bit). Across these large production instances, TokuDB's compression ratio has consistently been above 5x: in the same amount of space, TokuDB can store five times as much data.

Another TokuDB characteristic is low IO consumption: for the same data volume, its IO cost is roughly 1/8 of InnoDB's, so under the same IOPS limit TokuDB delivers higher write throughput. Because IOPS consumption is low, RDS has deployed a TokuDB + cheap SATA disk cluster in production, combined with "方寸山" (a sharding proxy), to provide low-cost, high-performance PB-level log storage.

This cluster uses cheap SATA disks (about 120 IOPS each), yet a single physical machine can deliver roughly 30,000 write TPS (with tokudb_fsync_log_period=0 and sync_binlog=1; for log-style applications with relaxed durability requirements, setting tokudb_fsync_log_period=1000 and sync_binlog=1000 yields even higher performance). Thanks to TokuDB's large pages (4MB page size), log content typically compresses to 1/8 of its original size or less, so a single machine can provide 160TB+ of storage and the whole cluster easily scales to the PB level.

With TokuDB, store away to your heart's content!

This post examines an important internal mechanism of TokuDB: the checkpoint.

TokuDB has only one kind of checkpoint, the sharp checkpoint: when a checkpoint runs, every "dirty page" in memory is flushed back to disk. This post walks through the details of TokuDB's sharp checkpoint so that readers get a rough picture of how it works.

## Why checkpoint

In the TokuDB engine, data lives in three places:

1. Redo Log (disk)
2. Buffer Pool (memory)
3. Data File (disk)

For performance, page-level operations are first cached in the Buffer Pool; only when certain conditions are triggered are the "dirty pages" flushed to disk for persistence. This creates a problem: for TokuDB, the Fractal-Tree consists of two sets of pages, the pages in the Buffer Pool plus the pages already persisted in the Data File. If TokuDB crashes, the Buffer Pool pages are lost and only the Data File pages remain, leaving the Fractal-Tree in a "chaotic" (inconsistent) state.

To avoid this, TokuDB periodically (every 60s by default) performs a checkpoint, flushing the dirty pages in the Buffer Pool to the Data File on disk. After a crash, TokuDB only needs to replay the Redo Log from the last consistent point, so the amount of data to recover is roughly whatever accumulated within those 60 seconds. Fast, right? Yep.

## The TokuDB checkpoint mechanism

A TokuDB checkpoint has two phases, begin_checkpoint and end_checkpoint, and the logic is roughly:

begin_checkpoint:
~~~
C1, acquire the checkpoint lock
C2, take a read-lock on the buffer pool's page_list
C3, walk the page_list and set the checkpoint_pending flag on every page
C4, release the read-lock on the buffer pool's page_list
~~~
end_checkpoint:
~~~
C5, walk the page_list and, for each page that is checkpoint_pending and dirty, try to acquire its write-lock
C6, once the write-lock is held, clone the page, release the write-lock, then flush the cloned page to disk
~~~
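To make the C1-C6 flow concrete, here is a minimal, self-contained C sketch of a two-phase sharp checkpoint. It is only an illustration of the description above: the names (cp_page, page_list, flush_to_disk and so on) are invented for this sketch and are not TokuDB's actual data structures, which live in ft/cachetable.

~~~
/*
 * Illustrative sketch of a two-phase sharp checkpoint (C1-C6 above).
 * All names are made up for this example; they are not TokuDB code.
 * Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 4

typedef struct {
    pthread_rwlock_t lock;      /* per-page lock (used in C5/C6) */
    bool dirty;                 /* modified since the last flush */
    bool checkpoint_pending;    /* set during begin_checkpoint (C3) */
    char data[64];              /* stand-in for real page contents */
} cp_page;

static pthread_mutex_t checkpoint_lock = PTHREAD_MUTEX_INITIALIZER;  /* C1 */
static pthread_rwlock_t page_list_lock = PTHREAD_RWLOCK_INITIALIZER; /* C2/C4 */
static cp_page page_list[NPAGES];

static void begin_checkpoint(void) {
    pthread_mutex_lock(&checkpoint_lock);        /* C1: one checkpoint at a time */
    pthread_rwlock_rdlock(&page_list_lock);      /* C2: read-lock the page list */
    for (int i = 0; i < NPAGES; i++)             /* C3: mark every page */
        page_list[i].checkpoint_pending = true;
    pthread_rwlock_unlock(&page_list_lock);      /* C4: let writers proceed again */
}

static void flush_to_disk(const char *data) {
    /* stand-in for compressing and writing the cloned page */
    printf("flushed: %s\n", data);
}

static void end_checkpoint(void) {
    for (int i = 0; i < NPAGES; i++) {           /* C5: walk the page list */
        cp_page *p = &page_list[i];
        if (!p->checkpoint_pending || !p->dirty)
            continue;
        pthread_rwlock_wrlock(&p->lock);         /* C5: write-lock the page */
        char clone[64];
        memcpy(clone, p->data, sizeof clone);    /* C6: clone while locked */
        p->checkpoint_pending = false;
        p->dirty = false;
        pthread_rwlock_unlock(&p->lock);         /* writers can touch the page again */
        flush_to_disk(clone);                    /* C6: flush the clone, lock-free */
    }
    pthread_mutex_unlock(&checkpoint_lock);
}

int main(void) {
    for (int i = 0; i < NPAGES; i++) {
        pthread_rwlock_init(&page_list[i].lock, NULL);
        snprintf(page_list[i].data, sizeof page_list[i].data, "page-%d", i);
        page_list[i].dirty = (i % 2 == 0);       /* pretend half the pages are dirty */
    }
    begin_checkpoint();
    end_checkpoint();
    return 0;
}
~~~

The point of the clone in C6 is that the page's write-lock is held only long enough for an in-memory copy; the expensive compression and disk write then happen on the clone, outside the lock, so front-end writers are barely blocked.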
That is the checkpoint in outline. Of the whole process, only C6 does heavy work, and it contains several "big jobs":

1. When cloning a leaf page, the original page's data is re-split evenly (a leaf holds several evenly sized sub-partitions), which costs CPU
2. Heavy compression before the flush, which costs CPU
3. Multiple threads flushing pages to disk concurrently, which costs IO

These three points cause a slight performance wobble at checkpoint time. Here is a set of numbers:

~~~
[ 250s] threads: 32, tps: 5095.80, reads/s: 71330.94, writes/s: 20380.38, response time: 8.14ms (95%)
[ 255s] threads: 32, tps: 4461.80, reads/s: 62470.82, writes/s: 17848.80, response time: 10.03ms (95%)
[ 260s] threads: 32, tps: 4968.79, reads/s: 69562.25, writes/s: 19873.96, response time: 8.49ms (95%)
[ 265s] threads: 32, tps: 5123.61, reads/s: 71738.31, writes/s: 20494.03, response time: 8.06ms (95%)
[ 270s] threads: 32, tps: 5119.00, reads/s: 71666.02, writes/s: 20475.61, response time: 8.11ms (95%)
[ 275s] threads: 32, tps: 5117.00, reads/s: 71624.40, writes/s: 20469.00, response time: 8.07ms (95%)
[ 280s] threads: 32, tps: 5117.39, reads/s: 71640.26, writes/s: 20471.56, response time: 8.08ms (95%)
[ 285s] threads: 32, tps: 5103.21, reads/s: 71457.54, writes/s: 20414.24, response time: 8.11ms (95%)
[ 290s] threads: 32, tps: 5115.80, reads/s: 71608.46, writes/s: 20461.42, response time: 8.11ms (95%)
[ 295s] threads: 32, tps: 5121.98, reads/s: 71708.73, writes/s: 20484.72, response time: 8.09ms (95%)
[ 300s] threads: 32, tps: 5115.01, reads/s: 71617.00, writes/s: 20462.46, response time: 8.08ms (95%)
[ 305s] threads: 32, tps: 5115.00, reads/s: 71611.76, writes/s: 20461.79, response time: 8.11ms (95%)
[ 310s] threads: 32, tps: 5100.01, reads/s: 71396.90, writes/s: 20398.03, response time: 8.13ms (95%)
[ 315s] threads: 32, tps: 4479.20, reads/s: 62723.81, writes/s: 17913.40, response time: 10.02ms (95%)
[ 320s] threads: 32, tps: 4964.80, reads/s: 69496.00, writes/s: 19863.60, response time: 8.63ms (95%)
[ 325s] threads: 32, tps: 5112.19, reads/s: 71567.45, writes/s: 20447.56, response time: 8.12ms (95%)
~~~

The numbers come from a sysbench test (mixed read/write) and are for reference only; you can see the dips at [255s] and [315s], which is when the checkpoints run.

Meow, that raises another question: what if TokuDB crashes during end_checkpoint, in C6, with only some of the dirty pages written to disk? Wouldn't the database (the Fractal-Tree) be left inconsistent?

Here TokuDB relies on a copy-on-write model: when the current checkpoint's dirty pages are flushed, they never overwrite the file regions used by the previous checkpoint, so a crash at any point during the checkpoint cannot damage the state of the database.
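As a rough illustration of that copy-on-write idea (and only an illustration: block_table, write_block_cow, publish_block_table and the append-only "allocator" below are invented for this sketch and say nothing about TokuDB's real on-disk format), the pattern is that cloned pages go to fresh file offsets, and only after they are all durable does a small table switch the "current" checkpoint over.

~~~
/*
 * Copy-on-write sketch: a checkpoint writes cloned pages to fresh
 * offsets and only then publishes a new block table, so a crash in
 * the middle of the flush still leaves the previous checkpoint intact.
 * All names here are illustrative, not TokuDB code.
 */
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS 4

typedef struct {
    uint64_t offset[NBLOCKS];   /* where each logical block lives on disk */
} block_table;

static block_table committed;   /* what the last checkpoint published */
static block_table inprogress;  /* built while the new checkpoint runs */
static uint64_t next_free = 0;  /* naive allocator: always append */

/* Write a cloned page to a fresh offset; never touch committed offsets. */
static void write_block_cow(int blocknum, size_t len) {
    uint64_t off = next_free;
    next_free += len;
    /* a real implementation would pwrite() the clone at `off` here */
    printf("block %d -> new offset %llu (old offset %llu kept)\n",
           blocknum, (unsigned long long)off,
           (unsigned long long)committed.offset[blocknum]);
    inprogress.offset[blocknum] = off;
}

/* Publish the new table atomically; until this point crash recovery
 * still sees `committed`, i.e. the previous checkpoint. */
static void publish_block_table(void) {
    /* a real implementation would fsync the data, then rewrite a small header */
    committed = inprogress;
}

int main(void) {
    inprogress = committed;
    write_block_cow(1, 4096);   /* flush clones of the dirty blocks */
    write_block_cow(3, 4096);
    publish_block_table();      /* only now does the new checkpoint "exist" */
    return 0;
}
~~~

Until publish_block_table() runs, recovery would still read the offsets of the previous checkpoint, which is exactly why a crash in the middle of C6 is harmless.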
This post has only sketched TokuDB's checkpoint mechanism; there are many more details, and interested readers can study the [ft/cachetable](https://github.com/Tokutek/ft-index/tree/master/ft/cachetable) code.