In Kafka, each topic has its own directory, stored under the `log.dirs` path configured in `{kafka_home}/config/server.properties`. Inside a topic, each partition gets a separate directory named `topicName-partitionNumber`, e.g. `topic1-0`. Each partition is in turn split into multiple segments. Since the data is already divided into partitions, why split it further into segments? Because: (1) if everything were stored in a single file, that file would grow without bound; (2) Kafka retains data for 7 days by default, deleting data older than 7 days each day, and with a single file the expired data would be hard to delete.

1. Generate test data

```shell
[root@hadoop101 kafka]# cd logs/test-0
# generate test data
[root@hadoop101 test-0]# kafka-producer-perf-test.sh --topic test --num-records 500000 --record-size 1000 \
> --producer-props bootstrap.servers=hadoop101:9092 --throughput 1000000000
448145 records sent, 89629.0 records/sec (85.48 MB/sec), 339.6 ms avg latency, 535.0 max latency.
500000 records sent, 89365.504915 records/sec (85.23 MB/sec), 340.32 ms avg latency, 535.00 ms max latency, 309 ms 50th, 505 ms 95th, 526 ms 99th, 534 ms 99.9th.
[root@hadoop101 test-0]# ls -lh
total 1.6G
# first segment
-rw-r--r--. 1 root root 518K Jan 19 22:20 00000000000000000000.index
-rw-r--r--. 1 root root 1.0G Jan 19 22:20 00000000000000000000.log
-rw-r--r--. 1 root root 141K Jan 19 22:20 00000000000000000000.timeindex
# second segment
-rw-r--r--. 1 root root  10M Jan 19 22:20 00000000000001060144.index
-rw-r--r--. 1 root root 425M Jan 19 22:20 00000000000001060144.log
-rw-r--r--. 1 root root   10 Jan 19 22:20 00000000000001060144.snapshot
-rw-r--r--. 1 root root  10M Jan 19 22:20 00000000000001060144.timeindex
```

2. View the index and log files

```shell
# view the log file
[root@hadoop101 test-0]# kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000000.log --print-data-log
baseOffset: 1060112 lastOffset: 1060127 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 1073694700 CreateTime: 1611066045108 isvalid: true size: 16205 magic: 2 compresscodec: NONE crc: 792895623
baseOffset: 1060128 lastOffset: 1060143 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 1073710905 CreateTime: 1611066045108 isvalid: true size: 16205 magic: 2 compresscodec: NONE crc: 792895623
# view the index file
[root@hadoop101 test-0]# kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000000.index --print-data-log
offset: 1060064 position: 1073646085
offset: 1060080 position: 1073662290
offset: 1060096 position: 1073678495
offset: 1060112 position: 1073694700
offset: 1060128 position: 1073710905
```
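The listings above show the two pieces a lookup by offset needs: segment files are named after their base offset (`00000000000000000000`, `00000000000001060144`), and each `.index` file sparsely maps offsets to byte positions in the matching `.log` file. The sketch below illustrates that lookup logic in Python; it is a simplified illustration, not Kafka's actual code (in particular, real `.index` entries store offsets relative to the segment's base offset, which is simplified to absolute offsets here), using the offsets and positions from the dump above as sample data.

```python
# Simplified sketch of Kafka's offset lookup (not the real implementation):
# 1) pick the segment whose base offset is the largest one <= the target,
# 2) binary-search the sparse index for the last entry <= the target,
# 3) the broker would then scan the .log forward from that byte position.
import bisect

# Segment file names from the `ls -lh` listing are their base offsets.
segment_base_offsets = [0, 1060144]

# (offset, byte position) pairs from the .index dump above,
# simplified to absolute offsets.
sparse_index = [
    (1060064, 1073646085),
    (1060080, 1073662290),
    (1060096, 1073678495),
    (1060112, 1073694700),
    (1060128, 1073710905),
]

def find_segment(target_offset: int) -> int:
    """Return the base offset of the segment containing target_offset."""
    i = bisect.bisect_right(segment_base_offsets, target_offset) - 1
    return segment_base_offsets[i]

def find_position(target_offset: int) -> int:
    """Return the byte position of the last index entry <= target_offset;
    the actual record is found by scanning the .log forward from here."""
    offsets = [o for o, _ in sparse_index]
    i = bisect.bisect_right(offsets, target_offset) - 1
    return sparse_index[i][1]
```

For example, offset 1060100 falls between the index entries 1060096 and 1060112, so the lookup lands on position 1073678495 and scans forward from there; this is why the index can stay small (sparse) while keeping reads close to O(log n).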