[TOC]

*****

# **Installing a Single-Node Kafka 2.3.1 Environment on CentOS 7**

## **Prerequisites**

We assume the following are already in place; if not, please refer to *Appendix A*:

* A virtual machine with 2 cores and 4 GB of RAM, IP address 192.168.80.81
* CentOS 7.7 64-bit installed
* root privileges
* JDK 1.8.0_221 installed

## **Installation**

1. Download Kafka

Download the recommended binary package [kafka_2.12-2.3.1.tgz](https://www.apache.org/dyn/closer.cgi?path=/kafka/2.3.1/kafka_2.12-2.3.1.tgz) from the [Apache Kafka](http://kafka.apache.org) website, upload it to the /opt directory, and unpack it:

~~~
# cd /opt
# tar -xzf kafka_2.12-2.3.1.tgz
# chown -R lemon:oper /opt/kafka_2.12-2.3.1
# cd /opt/kafka_2.12-2.3.1
~~~

2. Start the servers

Kafka uses [ZooKeeper](https://zookeeper.apache.org/), so if you do not already have a ZooKeeper server, you need to start one first. You can use the convenience script packaged with Kafka to quickly and easily spin up a single-node ZooKeeper instance:

~~~
$ nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.log &
...
[2020-01-11 21:35:36,457] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
~~~

Wait a moment for ZooKeeper to finish starting, then start the Kafka server:

~~~
$ export JMX_PORT=9988
$ nohup bin/kafka-server-start.sh config/server.properties > kafka-server.log &
...
[2020-01-11 21:37:22,229] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
~~~
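Before moving on, it is worth confirming that both services are actually up and listening. A minimal sketch, assuming `nc` is installed and that ZooKeeper's four-letter-word commands such as `ruok` are not disabled (the default for the ZooKeeper 3.4.x release bundled with Kafka 2.3.1):

~~~
# A healthy ZooKeeper answers "imok" to the four-letter command "ruok".
$ echo ruok | nc localhost 2181
imok

# Both listeners should appear in the output: 2181 (ZooKeeper) and 9092 (Kafka).
$ ss -ltn | grep -E '2181|9092'
~~~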
## **Testing**

1. Create a topic

Let's create a topic named "test" with one partition and one replica:

~~~
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic test.
~~~

We can now run the list command to see this topic:

~~~
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
~~~

2. Send some messages

Kafka ships with a command-line client that takes input from a file or from standard input and sends it to the Kafka cluster as messages. By default, each line is sent as a separate message.

Run the producer in another terminal, then type a few messages into the console to send them to the server:

~~~
$ cd /opt/kafka_2.12-2.3.1/
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> Hello World!
> Hello China!
^C
~~~

3. Start a consumer

Kafka also has a command-line consumer that dumps messages to standard output:

~~~
$ cd /opt/kafka_2.12-2.3.1/
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Hello World!
Hello China!
^C
~~~

If you run the two commands above in separate terminals, you can type messages into the producer terminal and watch them appear in the consumer terminal.

All of the command-line tools have additional options; running a command with no arguments prints detailed usage information.

4. Producer throughput test

kafka-producer-perf-test is the script Kafka provides for testing producer performance; it conveniently reports the producer's throughput and average latency over a period of time:

~~~
$ bin/kafka-producer-perf-test.sh --topic test --num-records 500000 --record-size 200 --throughput -1 --producer-props bootstrap.servers=localhost:9092 acks=-1
221506 records sent, 44063.3 records/sec (8.40 MB/sec), 2382.8 ms avg latency, 3356.0 ms max latency.
500000 records sent, 64102.564103 records/sec (12.23 MB/sec), 2078.40 ms avg latency, 3356.00 ms max latency, 1841 ms 50th, 3250 ms 95th, 3343 ms 99th, 3354 ms 99.9th.
~~~

The output shows that a Kafka producer on this test machine sends an average of *64102* records per second, for an average throughput of *12.23 MB* per second (about 97.84 Mb/s of bandwidth), with an average latency of *2078* ms and a maximum latency of *3356* ms; 50% of the messages are sent within *1841* ms, 95% within *3250* ms, 99% within *3343* ms, and 99.9% within *3354* ms. (A quick sanity check of how these figures relate is sketched after this section.)

5. Consumer throughput test

Like kafka-producer-perf-test, Kafka also provides a convenient performance-test script for the consumer: kafka-consumer-perf-test. Let's first use it to measure the throughput of the new consumer in the Kafka environment we just set up:

~~~
$ bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --messages 500000 --topic test
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2020-01-13 12:52:48:112, 2020-01-13 12:52:49:295, 95.3675, 80.6149, 500002, 422655.9594, 25, 1158, 82.3553, 431780.6563
~~~
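The producer throughput figure follows directly from the record size and the send rate, so it is easy to sanity-check. A minimal sketch using only the numbers reported above (the small difference from the 97.84 Mb/s quoted earlier is rounding, since that figure multiplies the already-rounded 12.23):

~~~
$ awk 'BEGIN {
    rate = 64102.564103               # records/sec from the perf-test output
    size = 200                        # bytes per record (--record-size)
    mb = rate * size / (1024 * 1024)  # payload throughput in MB/sec
    printf "%.2f MB/sec, ~%.2f Mb/s of bandwidth\n", mb, mb * 8
}'
12.23 MB/sec, ~97.81 Mb/s of bandwidth
~~~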
## **Shutdown**

1. First stop the Kafka broker with the kafka-server-stop script:

~~~
$ bin/kafka-server-stop.sh
~~~

2. Then, after a short wait, stop ZooKeeper with the zookeeper-server-stop script:

~~~
$ bin/zookeeper-server-stop.sh
~~~

## **Tuning**

### **Operating System Tuning**

1. Raise the maximum number of file descriptors:

~~~
# ulimit -n 100000
~~~

> Kafka users sometimes run into "Too many open files" errors, which means the file-descriptor limit on the broker machine needs tuning. A rule-of-thumb formula: maximum number of partitions the broker may host \* (average data volume per partition / average log segment size + 3). In practice the value is usually just set very high, e.g. 1000000. (Note that `ulimit` only affects the current session; a sketch for making the limit persistent appears at the end of this subsection.)

2. Minimize swap usage:

~~~
# sysctl vm.swappiness=1
# vim /etc/sysctl.conf
vm.swappiness=1
~~~

> Reducing swap usage is a standard tuning step for disk-heavy applications. Setting vm.swappiness to a small value such as 1 drastically reduces the use of swap space, so that heavy swapping cannot crater performance. You can verify with the `free -m` command.

3. Optimize the /data partition:

Edit /etc/fstab (*vim /etc/fstab*) and append `,noatime,largeio` to the mount options after `defaults`:

~~~
/dev/mapper/centos-data /data xfs defaults,noatime,largeio 0 0
~~~

> Disable atime updates: because Kafka relies heavily on physical disk for message persistence, filesystem configuration is an important tuning step. On any Linux filesystem, Kafka recommends mounting with the noatime option, which turns off updates to a file's atime (last access time). Skipping atime updates removes a write on every inode access, greatly reducing the number of filesystem write operations and improving cluster performance. Kafka itself makes no use of atime, so disabling it is safe. You can verify with `ls -l --time=atime`.
> The largeio option affects the I/O size reported by stat calls; for high-volume disk writes it can give a modest performance boost.
> Remount the /data partition:

~~~
# mount -o remount /data
~~~

4. Re-run the producer throughput test:

~~~
$ cd /opt/kafka_2.12-2.3.1
$ bin/kafka-producer-perf-test.sh --topic test --num-records 500000 --record-size 200 --throughput -1 --producer-props bootstrap.servers=localhost:9092 acks=-1
442836 records sent, 88496.4 records/sec (16.88 MB/sec), 1311.3 ms avg latency, 1937.0 ms max latency.
500000 records sent, 91642.228739 records/sec (17.48 MB/sec), 1328.20 ms avg latency, 1937.00 ms max latency, 1368 ms 50th, 1886 ms 95th, 1930 ms 99th, 1935 ms 99.9th.
~~~

5. Re-run the consumer throughput test:

~~~
$ bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --messages 500000 --topic test
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2020-01-13 13:17:34:483, 2020-01-13 13:17:35:580, 95.4016, 86.9659, 500181, 455953.5096, 20, 1077, 88.5809, 464420.6128
~~~
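As noted above, `ulimit -n` only raises the limit for the current shell. One common way to make it persistent, sketched here under the assumption that the broker runs as the `lemon` user (the owner used during installation) and that PAM's pam_limits module is active, which is the CentOS 7 default for login sessions:

~~~
# /etc/security/limits.conf -- raise the open-file limit for the broker user
lemon    soft    nofile    100000
lemon    hard    nofile    100000
~~~

Log out and back in, then confirm with `ulimit -n`. If the broker were started by systemd instead of a login shell, the limit would come from the unit file instead; this sketch assumes the manual, shell-started setup used in this article.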
### **JVM Tuning**

1. Adjust the JVM startup parameters:

Edit your profile (`vim ~/.bash_profile`) and add the following:

~~~
export KAFKA_HEAP_OPTS="-Xmx1g -Xms1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=85"
~~~

Reload .bash_profile (the new options only apply to a broker started from a shell that has them set, so restart Kafka afterwards):

~~~
source ~/.bash_profile
~~~

2. Re-run the producer throughput test, this time with --print-metrics:

~~~
$ cd /opt/kafka_2.12-2.3.1
$ bin/kafka-producer-perf-test.sh --topic test --num-records 500000 --record-size 200 --throughput -1 --print-metrics --producer-props bootstrap.servers=localhost:9092 acks=-1
377763 records sent, 75552.6 records/sec (14.41 MB/sec), 1507.6 ms avg latency, 1979.0 ms max latency.
500000 records sent, 84160.915671 records/sec (16.05 MB/sec), 1466.62 ms avg latency, 1979.00 ms max latency, 1507 ms 50th, 1952 ms 95th, 1965 ms 99th, 1978 ms 99.9th.

Metric Name                                                                 Value
app-info:commit-id:{client-id=producer-1} : 18a913733fb71c01
app-info:start-time-ms:{client-id=producer-1} : 1579140833567
app-info:version:{client-id=producer-1} : 2.3.1
kafka-metrics-count:count:{client-id=producer-1} : 102.000
producer-metrics:batch-size-avg:{client-id=producer-1} : 16337.068
producer-metrics:batch-size-max:{client-id=producer-1} : 16377.000
producer-metrics:batch-split-rate:{client-id=producer-1} : 0.000
producer-metrics:batch-split-total:{client-id=producer-1} : 0.000
producer-metrics:buffer-available-bytes:{client-id=producer-1} : 33554432.000
producer-metrics:buffer-exhausted-rate:{client-id=producer-1} : 0.000
producer-metrics:buffer-exhausted-total:{client-id=producer-1} : 0.000
producer-metrics:buffer-total-bytes:{client-id=producer-1} : 33554432.000
producer-metrics:bufferpool-wait-ratio:{client-id=producer-1} : 0.081
producer-metrics:bufferpool-wait-time-total:{client-id=producer-1} : 2834023273.000
producer-metrics:compression-rate-avg:{client-id=producer-1} : 1.000
producer-metrics:connection-close-rate:{client-id=producer-1} : 0.000
producer-metrics:connection-close-total:{client-id=producer-1} : 0.000
producer-metrics:connection-count:{client-id=producer-1} : 2.000
producer-metrics:connection-creation-rate:{client-id=producer-1} : 0.056
producer-metrics:connection-creation-total:{client-id=producer-1} : 2.000
producer-metrics:failed-authentication-rate:{client-id=producer-1} : 0.000
producer-metrics:failed-authentication-total:{client-id=producer-1} : 0.000
producer-metrics:failed-reauthentication-rate:{client-id=producer-1} : 0.000
producer-metrics:failed-reauthentication-total:{client-id=producer-1} : 0.000
producer-metrics:incoming-byte-rate:{client-id=producer-1} : 10062.678
producer-metrics:incoming-byte-total:{client-id=producer-1} : 360586.000
producer-metrics:io-ratio:{client-id=producer-1} : 0.012
producer-metrics:io-time-ns-avg:{client-id=producer-1} : 23895.015
producer-metrics:io-wait-ratio:{client-id=producer-1} : 0.098
producer-metrics:io-wait-time-ns-avg:{client-id=producer-1} : 204193.407
producer-metrics:io-waittime-total:{client-id=producer-1} : 3537650773.000
producer-metrics:iotime-total:{client-id=producer-1} : 413981132.000
producer-metrics:metadata-age:{client-id=producer-1} : 5.829
producer-metrics:network-io-rate:{client-id=producer-1} : 358.811
producer-metrics:network-io-total:{client-id=producer-1} : 12858.000
producer-metrics:outgoing-byte-rate:{client-id=producer-1} : 2939361.696
producer-metrics:outgoing-byte-total:{client-id=producer-1} : 105329087.000
producer-metrics:produce-throttle-time-avg:{client-id=producer-1} : 0.000
producer-metrics:produce-throttle-time-max:{client-id=producer-1} : 0.000
producer-metrics:reauthentication-latency-avg:{client-id=producer-1} : NaN
producer-metrics:reauthentication-latency-max:{client-id=producer-1} : NaN
producer-metrics:record-error-rate:{client-id=producer-1} : 0.000
producer-metrics:record-error-total:{client-id=producer-1} : 0.000
producer-metrics:record-queue-time-avg:{client-id=producer-1} : 1458.931
producer-metrics:record-queue-time-max:{client-id=producer-1} : 1977.000
producer-metrics:record-retry-rate:{client-id=producer-1} : 0.000
producer-metrics:record-retry-total:{client-id=producer-1} : 0.000
producer-metrics:record-send-rate:{client-id=producer-1} : 13975.068
producer-metrics:record-send-total:{client-id=producer-1} : 500000.000
producer-metrics:record-size-avg:{client-id=producer-1} : 286.000
producer-metrics:record-size-max:{client-id=producer-1} : 286.000
producer-metrics:records-per-request-avg:{client-id=producer-1} : 77.809
producer-metrics:request-latency-avg:{client-id=producer-1} : 4.382
producer-metrics:request-latency-max:{client-id=producer-1} : 136.000
producer-metrics:request-rate:{client-id=producer-1} : 179.406
producer-metrics:request-size-avg:{client-id=producer-1} : 16383.432
producer-metrics:request-size-max:{client-id=producer-1} : 16431.000
producer-metrics:request-total:{client-id=producer-1} : 6429.000
producer-metrics:requests-in-flight:{client-id=producer-1} : 0.000
producer-metrics:response-rate:{client-id=producer-1} : 179.411
producer-metrics:response-total:{client-id=producer-1} : 6429.000
producer-metrics:select-rate:{client-id=producer-1} : 481.330
producer-metrics:select-total:{client-id=producer-1} : 17325.000
producer-metrics:successful-authentication-no-reauth-total:{client-id=producer-1} : 0.000
producer-metrics:successful-authentication-rate:{client-id=producer-1} : 0.000
producer-metrics:successful-authentication-total:{client-id=producer-1} : 0.000
producer-metrics:successful-reauthentication-rate:{client-id=producer-1} : 0.000
producer-metrics:successful-reauthentication-total:{client-id=producer-1} : 0.000
producer-metrics:waiting-threads:{client-id=producer-1} : 0.000
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node--1} : 12.335
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node-0} : 10064.949
producer-node-metrics:incoming-byte-total:{client-id=producer-1, node-id=node--1} : 442.000
producer-node-metrics:incoming-byte-total:{client-id=producer-1, node-id=node-0} : 360144.000
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node--1} : 1.702
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node-0} : 2943220.331
producer-node-metrics:outgoing-byte-total:{client-id=producer-1, node-id=node--1} : 61.000
producer-node-metrics:outgoing-byte-total:{client-id=producer-1, node-id=node-0} : 105329026.000
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node--1} : NaN
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node-0} : 4.382
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node--1} : NaN
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node-0} : 136.000
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node--1} : 0.056
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node-0} : 179.590
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node--1} : 30.500
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node-0} : 16388.521
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node--1} : 37.000
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node-0} : 16431.000
producer-node-metrics:request-total:{client-id=producer-1, node-id=node--1} : 2.000
producer-node-metrics:request-total:{client-id=producer-1, node-id=node-0} : 6427.000
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node--1} : 0.056
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node-0} : 179.615
producer-node-metrics:response-total:{client-id=producer-1, node-id=node--1} : 2.000
producer-node-metrics:response-total:{client-id=producer-1, node-id=node-0} : 6427.000
producer-topic-metrics:byte-rate:{client-id=producer-1, topic=test} : 2934343.237
producer-topic-metrics:byte-total:{client-id=producer-1, topic=test} : 104981998.000
producer-topic-metrics:compression-rate:{client-id=producer-1, topic=test} : 1.000
producer-topic-metrics:record-error-rate:{client-id=producer-1, topic=test} : 0.000
producer-topic-metrics:record-error-total:{client-id=producer-1, topic=test} : 0.000
producer-topic-metrics:record-retry-rate:{client-id=producer-1, topic=test} : 0.000
producer-topic-metrics:record-retry-total:{client-id=producer-1, topic=test} : 0.000
producer-topic-metrics:record-send-rate:{client-id=producer-1, topic=test} : 13975.459
producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test} : 500000.000
~~~

> throughput: rate-limit control. A value below 0 disables throttling; with a value above 0, the producer is blocked for a while whenever its send throughput exceeds the limit.
> print-metrics: when this flag is given, a large set of metrics is printed after the test finishes, which is useful reference material for many kinds of test runs. (A sketch for filtering this dump down to the interesting entries appears after this subsection.)

3. Re-run the consumer throughput test:

~~~
$ bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --messages 500000 --topic test
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2020-01-13 13:24:56:267, 2020-01-13 13:24:57:259, 95.4016, 96.1710, 500181, 504214.7177, 26, 966, 98.7594, 517785.7143
~~~
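The --print-metrics dump runs to a hundred-plus entries; when repeating a test you usually care about only a few of them. A small sketch that filters the dump down to the send rate, average request latency, and average batch size (metric names exactly as they appear in the dump above; the elided lines are the per-node and per-topic variants of the same metrics):

~~~
$ bin/kafka-producer-perf-test.sh --topic test --num-records 500000 --record-size 200 \
      --throughput -1 --print-metrics \
      --producer-props bootstrap.servers=localhost:9092 acks=-1 \
  | grep -E 'record-send-rate|request-latency-avg|batch-size-avg'
producer-metrics:batch-size-avg:{client-id=producer-1} : 16337.068
producer-metrics:record-send-rate:{client-id=producer-1} : 13975.068
producer-metrics:request-latency-avg:{client-id=producer-1} : 4.382
...
~~~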
# **References**

* [Apache Kafka QuickStart](http://kafka.apache.org/quickstart)