[TOC]

# Overview

Goal: flume-1 monitors a file for changes and replicates every change to flume-2 and flume-3. flume-2 stores the data on HDFS; flume-3 writes it to the local filesystem.

![](https://box.kancloud.cn/dbf30f8bc3c2a7bec63c08567b2002df_1850x526.png)

# Implementation

1. Create flume-1.conf to monitor changes to hive.log. It declares two channels and two sinks, feeding flume-2 and flume-3 respectively.

**flume-1.conf**

~~~
a1.sources=r1
a1.sinks=k1 k2
a1.channels=c1 c2

# Replicate the data flow to multiple channels
a1.sources.r1.selector.type=replicating

# Define the source
a1.sources.r1.type=exec
a1.sources.r1.command=tail -F /root/data/hive.log
a1.sources.r1.shell=/bin/bash -c

# Define the sinks
a1.sinks.k1.type=avro
a1.sinks.k1.hostname=master
a1.sinks.k1.port=4141
a1.sinks.k2.type=avro
a1.sinks.k2.hostname=master
a1.sinks.k2.port=4142

# Define the channels
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
a1.channels.c2.type=memory
a1.channels.c2.capacity=1000
a1.channels.c2.transactionCapacity=100

# Bind
a1.sources.r1.channels=c1 c2
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c2
~~~

**flume-2.conf**

~~~
a2.sources=r1
a2.sinks=k1
a2.channels=c1

# Define the source
a2.sources.r1.type=avro
a2.sources.r1.bind=master
a2.sources.r1.port=4141

# Define the sink
a2.sinks.k1.type=hdfs
a2.sinks.k1.hdfs.path=hdfs://master:8020/flume2/%Y-%m-%d-%H
# Prefix for uploaded files
a2.sinks.k1.hdfs.filePrefix=flume2-
# Whether to roll directories by time
a2.sinks.k1.hdfs.round=true
# Number of time units per new directory
a2.sinks.k1.hdfs.roundValue=1
# The time unit used for rounding
a2.sinks.k1.hdfs.roundUnit=hour
# Use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp=true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize=100
# File type; compression is supported
a2.sinks.k1.hdfs.fileType=DataStream
# Interval (seconds) before rolling a new file
a2.sinks.k1.hdfs.rollInterval=600
# Roll size per file, roughly 128 MB
a2.sinks.k1.hdfs.rollSize=134217700
# Rolling is independent of the event count
a2.sinks.k1.hdfs.rollCount=0
# Minimum block replication
a2.sinks.k1.hdfs.minBlockReplicas=1

# Define the channel
a2.channels.c1.type=memory
a2.channels.c1.capacity=1000
a2.channels.c1.transactionCapacity=100

# Bind
a2.sources.r1.channels=c1
a2.sinks.k1.channel=c1
~~~

**flume-3.conf**

~~~
a3.sources=r1
a3.sinks=k1
a3.channels=c1

# Define the source
a3.sources.r1.type=avro
a3.sources.r1.bind=master
a3.sources.r1.port=4142

# Define the sink
a3.sinks.k1.type=file_roll
a3.sinks.k1.sink.directory=/root/flume3

# Define the channel
a3.channels.c1.type=memory
a3.channels.c1.capacity=1000
a3.channels.c1.transactionCapacity=100

# Bind
a3.sources.r1.channels=c1
a3.sinks.k1.channel=c1
~~~
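With the three configuration files in place, the agents can be launched with `flume-ng`. Since flume-2 and flume-3 host the Avro servers that flume-1's sinks connect to, they should be started first. A minimal sketch, assuming the commands are run from the Flume installation directory and each agent runs in its own terminal (or is backgrounded):

```shell
# Start the downstream agents first: they expose the Avro sources
# on ports 4141 and 4142 that flume-1's sinks will connect to.
bin/flume-ng agent --conf conf --conf-file flume-2.conf --name a2 \
    -Dflume.root.logger=INFO,console
bin/flume-ng agent --conf conf --conf-file flume-3.conf --name a3 \
    -Dflume.root.logger=INFO,console

# Then start flume-1, which tails hive.log and replicates every
# event to both channels, hence to both downstream agents.
bin/flume-ng agent --conf conf --conf-file flume-1.conf --name a1 \
    -Dflume.root.logger=INFO,console
```

After appending a line to /root/data/hive.log, the same content should appear both under /flume2/ on HDFS and in /root/flume3 on the local disk.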