## Background

In an RDS environment it is common for multiple tenants to share one host. There are several ways to isolate user resources, such as virtual machines or cgroups. A cgroup, for example, can cap a process's CPU usage, memory usage, and block-device IOPS or throughput. The benefit of capping resources is that shared hardware can deliver the performance promised to each user. The approach also has drawbacks: when a user needs to run a SQL statement that consumes a lot of resources, it cannot use the machine's idle capacity, because the cap is absolute. Some drawbacks can even destabilize RDS itself. This article examines one of them: how a resource limit can cause standby lag.

## Analysis

Create an instance with 32GB of memory (so the test is not killed by the OOM killer) and 1000GB of storage, and limit both read and write IOPS on its block devices to 200.

The limits can be changed from the console or directly on the host. On the host, for example:

Find the postmaster PID:

~~~
$ps -ewf|grep pg468619
pg468619 24097 1 0 Jul29 ? 00:00:17 /path/to/postgres -D /path/to/data
~~~

Check which cgroup the process belongs to:

~~~
$cat /proc/24097/cgroup
3:memory:/rule3038
2:cpu:/rule3038
1:blkio:/rule3038
~~~

Read and write IOPS are already limited to 200:

~~~
$cat /path/to/rule3038/blkio.throttle.read_iops_device
8:0 200
8:16 200
$cat /path/to/rds/rule3038/blkio.throttle.write_iops_device
8:0 200
8:16 200
~~~

If no limit is in place yet, you can write to these two files directly, or use cgset:

~~~
cgset -r blkio.throttle.write_iops_device='8:0 200' rule3038
cgset -r blkio.throttle.write_iops_device='8:16 200' rule3038
~~~

The `8:0` and `8:16` entries are the major:minor numbers of the corresponding block devices:

~~~
$ll /dev/sda
brw-rw---- 1 root disk 8, 0 May 18 13:59 /dev/sda
$ll /dev/sdb
brw-rw---- 1 root disk 8, 16 Jul 31 07:45 /dev/sdb
~~~
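Before building the test schema, it is worth confirming that the throttle actually bites. A minimal sanity check, assuming the cgroup `rule3038` from above; the output file path and sizes are placeholders:

~~~
# write with direct I/O from inside the throttled cgroup (placeholder path)
cgexec -g blkio:rule3038 dd if=/dev/zero of=/path/to/data/testfile bs=8k count=100000 oflag=direct

# in another terminal: w/s on the throttled device should plateau near 200
iostat -x 1
~~~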
Next, create the test user, the test table, and the test function.

Connect to the database:

~~~
$psql -p 3038 -U pg468619 postgres
~~~

Create the test user:

~~~
postgres=# create role digoal login encrypted password 'digoal';
postgres=# grant all on database postgres to digoal;
~~~

Create the test table. It records the LSN at which the primary inserts XLOG together with the sent/write/flush/replay LSNs reported by the standby, so we can compute both the rate at which the primary generates XLOG and the number of bytes the standby lags behind:

~~~
postgres=# create table tbl_xlog_insert(
  id int,
  sec numeric default EXTRACT(EPOCH from clock_timestamp()),
  xlog_lsn text default pg_current_xlog_insert_location(),
  sent_lsn pg_lsn,
  write_lsn pg_lsn,
  flush_lsn pg_lsn,
  replay_lsn pg_lsn);
~~~

Create the test function. It generates a large volume of XLOG quickly and records the XLOG positions in the test table:

~~~
postgres=# \c postgres postgres
postgres=# create or replace function f_test(int) returns void as $_$
declare
  v_tbl name := 'tbl_'||pg_backend_pid();
  ddl text := ' (id int8';
begin
  set synchronous_commit=off;
  execute 'DROP TABLE IF EXISTS '||v_tbl;
  -- build a test table with 201 columns
  for i in 1..200 loop
    ddl := ddl||',c'||i||' int8 default 0';
  end loop;
  ddl := ddl||') ';
  execute 'create table '||v_tbl||ddl||' with (autovacuum_enabled=off, toast.autovacuum_enabled=off)';
  execute 'insert into '||v_tbl||' select generate_series(1,'||$1||')';
  execute 'DROP TABLE IF EXISTS '||v_tbl;
  -- record the current XLOG insert position and the positions the standby has reached;
  -- standby feedback can lag slightly, so sleep 10 milliseconds first
  perform pg_sleep(0.01);
  insert into tbl_xlog_insert(id, sent_lsn,write_lsn,flush_lsn,replay_lsn)
    select pg_backend_pid(), sent_location,write_location,flush_location,replay_location
    from pg_stat_replication limit 1;
end;
$_$ language plpgsql strict security definer;
postgres=# GRANT execute on function f_test(int) to digoal;
~~~
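The function samples `pg_stat_replication` into the test table, but lag can also be spot-checked interactively at any time. A minimal query using the same 9.4-era view columns relied on above (later PostgreSQL releases renamed them):

~~~
postgres=# select pid, state,
  pg_xlog_location_diff(pg_current_xlog_insert_location(), sent_location)/1024.0   as sent_delay_kb,
  pg_xlog_location_diff(pg_current_xlog_insert_location(), replay_location)/1024.0 as replay_delay_kb
from pg_stat_replication;
~~~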
Create the test script on the test machine:

~~~
postgres@digoal-> cat test.sql
select f_test(5000);
~~~

A single connection is enough for this test. More connections would introduce a different problem: heartbeat checks can fail and trigger an unwanted primary/standby switchover. That is another issue caused by resource limits; how to handle it is left for later.

~~~
postgres@digoal-> pgbench -M prepared -n -r -f ./test.sql -c 1 -j 1 -t 100 -h digoal0001.pg.rds.aliyuncs.com -p 3999 -U digoal postgres
~~~

On the primary, query the rate at which it generates XLOG and how far the standby lags:

~~~
postgres=# select
  round(0.001*pg_xlog_location_diff((lead(xlog_lsn,1) over(order by sec))::pg_lsn, xlog_lsn::pg_lsn)
    / (case lead(sec,1) over(order by sec)-sec when 0 then 1 else lead(sec,1) over(order by sec)-sec end), 2) as gen_xlog_KBpsec,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, sent_lsn)/1024.0, 2)   AS sent_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, write_lsn)/1024.0, 2)  AS write_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, flush_lsn)/1024.0, 2)  AS flush_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, replay_lsn)/1024.0, 2) AS replay_xlog_delay_KB
from tbl_xlog_insert
where id=(select id from tbl_xlog_insert order by sec desc limit 1)
order by sec;

 gen_xlog_kbpsec | sent_xlog_delay_kb | write_xlog_delay_kb | flush_xlog_delay_kb | replay_xlog_delay_kb
-----------------+--------------------+---------------------+---------------------+----------------------
          814.87 |            8616.37 |             8616.37 |             8616.37 |              8616.54
          809.63 |           10752.44 |            13952.44 |            16896.44 |             16898.04
          802.75 |           12747.91 |            14795.91 |            17227.91 |             17227.92
          821.96 |           12173.92 |            16269.92 |            19981.92 |             23022.60
          820.87 |           13167.77 |            17515.32 |            19799.77 |             23168.63
          581.98 |           15177.93 |            19553.93 |            23009.93 |             26213.85
         1320.19 |           12735.36 |            17087.36 |            20415.36 |             24112.03
          818.03 |           17281.54 |            21761.54 |            25729.54 |             28560.27
          808.66 |           17093.73 |            21445.73 |            25413.73 |             26377.87
             ... |                ... |                 ... |                 ... |                  ...
          806.56 |           79064.04 |            83544.04 |            87128.04 |             90281.66
          804.10 |           82864.95 |            87472.95 |            87472.95 |             92209.77
          802.53 |           84598.83 |            88310.83 |            91766.83 |             95275.68
          598.90 |           82998.66 |            90038.66 |            90038.66 |             93495.39
         1304.52 |           84704.81 |            89184.81 |            92768.81 |             92768.95
          807.26 |           86847.66 |            91199.66 |            94527.66 |             97583.47
          823.86 |           89209.00 |            93561.00 |            97785.00 |             97785.32
          805.17 |           96967.23 |           101319.23 |           105287.23 |            108879.23
          829.41 |          101531.95 |           105883.95 |           109979.95 |            113468.85
          797.81 |          101999.23 |           106479.23 |           110447.23 |            110448.48
          609.47 |          102789.51 |           107269.51 |           111109.51 |            114741.74
                 |           99381.25 |           103861.25 |           107573.25 |            111083.62
(100 rows)
~~~

XLOG is generated at roughly 800KB/s (about 800KB/s over a 200-IOPS cap works out to roughly 4KB per I/O, which suggests the throttle itself is the ceiling), and because of the IOPS limit on the XLOG disk the standby lag keeps growing. To rule out network latency as the cause, we can use systemtap to trace the IOPS of the WAL sender and WAL writer processes and the network throughput of the WAL sender.

Kernel debuginfo must be installed first. Trace the WAL writer process of the target postgres instance to get its read and write IOPS:

~~~
# vi io.stp

global reads, writes, total_io
probe vfs.read.return {
  reads[execname()] += 1
}
probe vfs.write.return {
  writes[execname()] += 1
}
# print every 1 second
probe timer.s(1) {
  foreach (name in writes)
    total_io[name] += writes[name]
  foreach (name in reads)
    total_io[name] += reads[name]
  printf ("%16s\t%10s\t%10s\n", "Process", "SUM Read IO", "SUM Written IO")
  foreach (name in total_io-)
    printf("%16s\t%10d\t%10d\n", name, reads[name], writes[name])
  delete reads
  delete writes
  print("\n")
}
# print total_io on exit
probe end {
  foreach (name in total_io-)
    printf("%16s\t%10d\t%10d\n", name, reads[name], writes[name])
  delete total_io
  print("\n")
}

# stap -vp 5 -DMAXSKIPPED=9999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./io.stp -x $pid
~~~

Trace the WAL sender process to get its network transfer rate:

~~~
# vi net.stp

global ifxmit, ifrecv
global ifmerged
// three globals: transmit stats, receive stats, and a merged array;
// ifxmit/ifrecv hold the statistics, and ifmerged accumulates packet
// counts per (pid(), dev_name, execname(), uid()) on each interface.
probe netdev.transmit {
  ifxmit[pid(), dev_name, execname(), uid()] <<< length
}
// netdev.transmit fires when a network device transmits a buffer.
probe netdev.receive {
  ifrecv[pid(), dev_name, execname(), uid()] <<< length
}
// netdev.receive fires when data is received from a network device.
function print_activity() {
  printf("%5s %5s %-7s %7s %7s %7s %7s %-15s\n",
         "PID", "UID", "DEV", "XMIT_PK", "RECV_PK", "XMIT_KB", "RECV_KB", "COMMAND")
  foreach ([pid, dev, exec, uid] in ifrecv) {
    ifmerged[pid, dev, exec, uid] += @count(ifrecv[pid,dev,exec,uid]);
  }
  foreach ([pid, dev, exec, uid] in ifxmit) {
    ifmerged[pid, dev, exec, uid] += @count(ifxmit[pid,dev,exec,uid]);
  }
  // print in descending order of packet count
  foreach ([pid, dev, exec, uid] in ifmerged-) {
    n_xmit = @count(ifxmit[pid, dev, exec, uid])
    n_recv = @count(ifrecv[pid, dev, exec, uid])
    printf("%5d %5d %-7s %7d %7d %7d %7d %-15s\n",
           pid, uid, dev, n_xmit, n_recv,
           n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0,
           n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0,
           exec)
  }
  // columns: pid, uid, interface, packets sent/received, KB sent/received, command;
  // the ?: expressions convert the summed byte counts to KB.
  print("\n")
  delete ifxmit
  delete ifrecv
  delete ifmerged
}
// print_activity() clears the three globals at the end, so each call
// reports only the activity since the previous call.
probe timer.s(1), end, error {
  print_activity()
}
// call print_activity() once per second.
# stap -vp 5 -DMAXSKIPPED=9999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./net.stp -x $pid
~~~

The production hosts have no debuginfo packages, so the traces above could not be run there. There are still other ways to show that the network is not at fault: `sar -n DEV 1 100000` reports per-interface throughput, and iotop would show per-process I/O (unfortunately iotop is not available in production either).

So next, lift the cgroup limits and watch the network with `sar -n DEV 1 100000`.

Remove the postgres processes on both the primary and the standby from their cgroups:

~~~
#cat /proc/50511/cgroup
3:memory:/rule3008
2:cpu:/rule3008
1:blkio:/rule3008
#cgdelete memory:/rule3008
#cgdelete cpu:/rule3008
#cgdelete blkio:/rule3008
$cat /proc/119876/cgroup
3:memory:/rule3090
2:cpu:/rule3090
1:blkio:/rule3090
#cgdelete memory:/rule3090
#cgdelete cpu:/rule3090
#cgdelete blkio:/rule3090
~~~

The network link is 1Gbit/s:

~~~
#ethtool eth0|grep Speed
Speed: 1000Mb/s
~~~

Run the test again: the transfer rate now saturates the link at roughly 1Gbit/8 ≈ 120MB/s (rxkB/s ≈ 125,000 on eth0):

~~~
01:54:24 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:54:25 PM lo 132.29 132.29 29.42 29.42 0.00 0.00 0.00
01:54:25 PM eth0 85685.42 6495.83 125161.86 538.14 0.00 0.00 0.00
01:54:25 PM eth1 125.00 77.08 32.55 34.17 0.00 0.00 2.08
01:54:25 PM bond0 85808.33 6570.83 125192.78 572.14 0.00 0.00 2.08

01:54:25 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:54:26 PM lo 125.00 125.00 28.50 28.50 0.00 0.00 0.00
01:54:26 PM eth0 85412.50 6509.38 124869.50 519.73 0.00 0.00 0.00
01:54:26 PM eth1 129.17 97.92 11.04 57.40 0.00 0.00 0.00
01:54:26 PM bond0 85544.79 6611.46 124883.69 577.44 0.00 0.00 0.00
~~~

Of course, removing the IOPS limit entirely has its own risk. If the network cannot ship WAL as fast as the primary generates it, the WAL segments the standby still needs may be recycled on the primary. If the standby then cannot obtain the missing XLOG from anywhere, it is stranded. To avoid that state we should consider raising wal_keep_segments accordingly, or letting the standby recover from a WAL archive; both options are sketched below.
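A sketch of both options on the primary, with illustrative values only: at the ~300MB/s peak WAL rate measured below, retaining 10240 segments (10240 × 16MB = 160GB) would buy the standby roughly nine minutes of headroom; the archive path is a placeholder.

~~~
# postgresql.conf on the primary -- illustrative values, not recommendations
wal_keep_segments = 10240                      # 10240 * 16MB = 160GB of WAL kept in pg_xlog
archive_mode = on                              # takes effect after a restart
archive_command = 'cp %p /path/to/archive/%f'  # ship each completed segment to the archive
~~~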
Here is the problem that appears once the IOPS limit is fully removed: WAL is now written at about 300MB/s, which exceeds the ~120MB/s the network can carry, so the standby falls behind all the same:

~~~
postgres=# select
  round(0.001*pg_xlog_location_diff((lead(xlog_lsn,1) over(order by sec))::pg_lsn, xlog_lsn::pg_lsn)
    / (case lead(sec,1) over(order by sec)-sec when 0 then 1 else lead(sec,1) over(order by sec)-sec end), 2) as gen_xlog_KBpsec,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, sent_lsn)/1024.0, 2)   AS sent_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, write_lsn)/1024.0, 2)  AS write_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, flush_lsn)/1024.0, 2)  AS flush_xlog_delay_KB,
  round(pg_xlog_location_diff(xlog_lsn::pg_lsn, replay_lsn)/1024.0, 2) AS replay_xlog_delay_KB
from tbl_xlog_insert
where id=(select id from tbl_xlog_insert order by sec desc limit 1)
order by sec;

 gen_xlog_kbpsec | sent_xlog_delay_kb | write_xlog_delay_kb | flush_xlog_delay_kb | replay_xlog_delay_kb
-----------------+--------------------+---------------------+---------------------+----------------------
       306692.66 |         2763682.05 |          2769058.05 |          2769058.05 |           2773902.84
       310323.46 |         2821964.55 |          2826060.55 |          2826060.55 |           2830312.84
       286290.82 |         2878171.34 |          2882267.34 |          2882267.34 |           2888606.55
       308019.02 |         2933665.58 |          2937761.58 |          2937761.58 |           2941141.12
       347211.31 |         2992420.60 |          2996516.60 |          2996516.60 |           3000172.73
       339915.61 |         3049958.89 |          3054054.89 |          3054054.89 |           3060592.84
       345896.39 |         3110687.52 |          3115295.52 |          3115295.52 |           3123686.84
       326032.85 |         3169349.81 |          3174213.81 |          3174341.81 |           3176994.25
       309214.90 |         3223985.07 |          3231409.07 |          3231409.07 |           3234051.52
       335529.65 |         3273305.11 |          3277529.11 |          3277529.11 |           3286191.45
       329873.46 |         3330912.32 |          3335008.32 |          3335008.32 |           3343094.45
       293908.19 |         3386875.67 |          3392251.67 |          3392251.67 |           3398439.91
       329390.57 |         3439531.98 |          3443627.98 |          3443627.98 |           3448171.41
~~~

Once XLOG the standby still needs has been overwritten on the primary, the standby can no longer obtain the matching WAL. At that point the only options are to rebuild the standby or to fetch the missing XLOG from an archive:

~~~
91812 2015-07-31 05:55:40 UTC 00000 LOG: started streaming WAL from primary at B/7A000000 on timeline 1
91812 2015-07-31 05:55:40 UTC XX000 FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000B0000007A has already been removed
91833 2015-07-31 05:55:45 UTC 00000 LOG: started streaming WAL from primary at B/7A000000 on timeline 1
91833 2015-07-31 05:55:45 UTC XX000 FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000B0000007A has already been removed
~~~

## Possible solutions

The problems:

1. Limiting IOPS on the XLOG disk can cause standby lag whenever a user's workload sits near the limit for an extended time, and it can also cause heartbeat timeouts.
2. Without an IOPS limit on the XLOG disk, the primary can generate WAL faster than the NIC can ship it (from what we have seen, moving to 10Gbit networking would remove this bottleneck). The primary may then recycle XLOG the standby still needs, leaving the standby unable to stream it.

Possible fixes:

1. Do not limit IOPS on the XLOG disk, and use 10Gbit NICs.
2. Configure a restore_command on the standby so that, once WAL has been recycled on the primary, the standby recovers from the archive (a sketch follows).
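A minimal sketch of option 2 on a 9.x standby, assuming segments are shipped by an archive_command like the one sketched earlier; the connection string and paths are placeholders. When streaming fails because a segment has been removed, the standby falls back to restore_command, replays the archived WAL, and then retries streaming:

~~~
# recovery.conf on the standby -- illustrative values only
standby_mode = 'on'
primary_conninfo = 'host=digoal0001.pg.rds.aliyuncs.com port=3999 user=replica'  # hypothetical user
restore_command = 'cp /path/to/archive/%f %p'  # consulted when streaming cannot supply a segment
~~~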
## References

[Red Hat cgroup documentation](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/index.html)
[SystemTap documentation](https://sourceware.org/systemtap/)
