[TOC]

# Failover

With the single-node Flume NG setup complete, we now build a highly available Flume NG cluster with the architecture shown below:

![](https://box.kancloud.cn/3d741fd38c6abd270a133ff61abb99c3_934x688.png)

## Node allocation

The Flume agents and collectors are distributed as follows:

| Name | IP address | Host | Role |
| --- | --- | --- | --- |
| Agent1 | 192.168.200.101 | it1 | webserver |
| Collector1 | 192.168.200.102 | it2 | AgentMstr1 |
| Collector2 | 192.168.200.103 | it3 | AgentMstr2 |

Data from Agent1 flows to both Collector1 and Collector2. Flume NG itself provides a failover mechanism that switches over and recovers automatically: when one collector goes down, traffic fails over to the other. Below we configure the cluster.

## Configuration

The basic single-node setup is already in place; we only need to add two configuration files, flume-client.conf and flume-server.conf. Here it1, it2, and it3 are the hostnames of the different machines.

1. flume-client.conf on it1:

~~~
# agent1 name
agent1.channels = c1
agent1.sources = r1
# multiple sinks can be configured to send data to multiple destinations
agent1.sinks = k1 k2
# define sink group g1
agent1.sinkgroups = g1
# put k1 and k2 into group g1
agent1.sinkgroups.g1.sinks = k1 k2

# channel settings
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

# source settings
agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /root/log/test.log
agent1.sources.r1.interceptors = i1 i2
agent1.sources.r1.interceptors.i1.type = static
agent1.sources.r1.interceptors.i1.key = Type
agent1.sources.r1.interceptors.i1.value = LOGIN
agent1.sources.r1.interceptors.i2.type = timestamp

# sink k1
agent1.sinks.k1.channel = c1
# transport via avro
agent1.sinks.k1.type = avro
# destination host
agent1.sinks.k1.hostname = it2
# destination port
agent1.sinks.k1.port = 52020

# sink k2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = it3
agent1.sinks.k2.port = 52020

# enable failover
agent1.sinkgroups.g1.processor.type = failover
# priorities: prefer k1
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 5
# maximum backoff (ms) before a failed sink is retried;
# a failed sink is not removed immediately
agent1.sinkgroups.g1.processor.maxpenalty = 10000

# Declare a sinkgroup first, then add the two sinks k1 and k2 with
# priorities 10 and 5. The processor's maxpenalty is set to 10 seconds;
# the default is 30 seconds.
~~~

Start command:

~~~
flume-ng agent -n agent1 -c conf -f /root/flume/conf/flume-client.conf -Dflume.root.logger=DEBUG,console
~~~

2. flume-server.conf on it2 and it3:

~~~
# set agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# the source type is avro
a1.sources.r1.type = avro
# accept connections from any host
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 52020
a1.sources.r1.channels = c1
# interceptors
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i2.hostHeader = hostname

# sink writes to HDFS
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /data/flume/logs/%{hostname}
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = TEXT
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.channel = c1
~~~

Start command:

~~~
flume-ng agent -n a1 -c conf -f /root/flume/conf/flume-server.conf -Dflume.root.logger=DEBUG,console
~~~

## Testing failover

1. First start the servers on it2 and it3:

~~~
flume-ng agent -n a1 -c conf -f conf/flume-server.conf -Dflume.root.logger=DEBUG,console
~~~

2. Then start the client on it1:

~~~
flume-ng agent -n agent1 -c conf -f conf/flume-client.conf -Dflume.root.logger=DEBUG,console
~~~

3. Generate data with a shell loop:

~~~
while true; do date >> test.log; sleep 1s; done
~~~

# Load balancing

1. Node allocation: the same as for failover.

2. Configuration: a slight modification of the failover configuration.

flume-client-loadbalance.conf on it1:

~~~
# agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2
# set group
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /root/log/test.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = it2
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = it3
agent1.sinks.k2.port = 52020

# set sink group
agent1.sinkgroups.g1.sinks = k1 k2
# enable load balancing
agent1.sinkgroups.g1.processor.type = load_balance
# round_robin is the default; random is also available
agent1.sinkgroups.g1.processor.selector = round_robin
# with backoff enabled, the sink processor temporarily blacklists a
# failed sink instead of continuing to send to it
agent1.sinkgroups.g1.processor.backoff = true
~~~

flume-server-loadbalance.conf on it2 and it3:

~~~
# set agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# avro source, reachable from the other nodes
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 52020
a1.sources.r1.channels = c1
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i2.hostHeader = hostname
a1.sources.r1.interceptors.i2.useIP = false

# sink writes to HDFS
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /data/flume/loadbalance/%{hostname}
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = TEXT
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d
~~~

## Testing load balancing

1. First start the servers on it2 and it3:

~~~
bin/flume-ng agent -n a1 -c conf -f conf/flume-server-loadbalance.conf -Dflume.root.logger=DEBUG,console
~~~

2. Then start the client on it1:

~~~
bin/flume-ng agent -n agent1 -c conf -f conf/flume-client-loadbalance.conf -Dflume.root.logger=DEBUG,console
~~~

3. Generate data with a shell loop:

~~~
while true; do date >> test.log; sleep 1s; done
~~~

4. Watch the output directories on HDFS: thanks to round-robin selection, both collectors receive data.
5. After the agent on it2 is killed, it2 stops receiving data.
6. After the agent on it2 is restarted, both collectors receive data again.
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看
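The round-robin selection and backoff behavior observed in the load-balancing test (both collectors receive data in turn; a killed collector stops receiving until its agent is restarted) can likewise be sketched in Python. Again, this is an invented simplified model, not Flume's implementation.

```python
import itertools

class LoadBalancingProcessor:
    """Simplified sketch of a load_balance sink processor with a
    round_robin selector: rotate through the sinks, skipping any sink
    currently blacklisted by backoff."""

    def __init__(self, sinks):
        self.order = itertools.cycle(sinks)  # round-robin rotation
        self.down = set()                    # sinks blacklisted by backoff
        self.n = len(sinks)

    def select(self):
        for _ in range(self.n):
            sink = next(self.order)
            if sink not in self.down:
                return sink
        return None                          # every sink is backing off

    def mark_failed(self, sink):
        self.down.add(sink)                  # backoff = true: stop sending

    def mark_recovered(self, sink):
        self.down.discard(sink)


p = LoadBalancingProcessor(["k1", "k2"])
assert [p.select() for _ in range(4)] == ["k1", "k2", "k1", "k2"]
p.mark_failed("k1")       # e.g. the agent on it2 is killed
assert [p.select() for _ in range(2)] == ["k2", "k2"]
p.mark_recovered("k1")    # the agent is restarted
assert p.select() == "k1"
```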