[TOC]

# Compiling the Hadoop source with snappy support

Package preparation (Hadoop source, JDK 8, Maven, protobuf):

1. hadoop-2.7.2-src.tar.gz
2. jdk-8u144-linux-x64.tar.gz
3. snappy-1.1.3.tar.gz
4. apache-maven-3.0.5-bin.tar.gz
5. protobuf-2.5.0.tar.gz

First install the JDK and configure its environment variables.

**Extract and configure Maven**

~~~
tar -zxvf apache-maven-3.0.5-bin.tar.gz
~~~

Configure the environment variables:

~~~
# maven
MAVEN_HOME=/root/tools/maven-3.0.5
export MAVEN_HOME
export PATH=$MAVEN_HOME/bin:$PATH
~~~

~~~
source /etc/profile
~~~

Customize the local repository location in settings.xml:

~~~
<localRepository>the directory where your local repository is stored</localRepository>
~~~

Aliyun Maven mirror:

~~~
<mirror>
    <id>nexus-aliyun</id>
    <mirrorOf>central</mirrorOf>
    <name>Nexus aliyun</name>
    <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
~~~

**Build and install snappy**

~~~
yum install -y svn autoconf automake libtool cmake ncurses-devel openssl-devel gcc*
tar -zxvf snappy-1.1.3.tar.gz
cd snappy-1.1.3
./configure
make
make install
~~~

Check the snappy library files:

~~~
ls -lh /usr/local/lib | grep snappy
~~~

**Build and install protobuf**

~~~
tar -zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0/
./configure
make
make install
~~~

Verify the installation:

~~~
protoc --version
~~~

**Compile the Hadoop native package** (this takes some time)

~~~
tar -zxvf hadoop-2.7.2-src.tar.gz
cd hadoop-2.7.2-src
mvn clean package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=/usr/local/lib -Dbundle.snappy
~~~

After a successful build, /path/hadoop-dist/target/hadoop-2.7.2.tar.gz is the newly generated binary package with snappy compression support.

# Enabling map-output compression

Enabling compression for the map output stage reduces the amount of data transferred between the map and reduce tasks of a job. Configure it as follows.

1. Enable Hive intermediate data compression:
~~~
hive> set hive.exec.compress.intermediate=true;
~~~
2. Enable MapReduce map-output compression:
~~~
hive> set mapreduce.map.output.compress=true;
~~~
3. Set the MapReduce map-output compression codec:
~~~
hive> set mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
~~~
4. Run a query:
~~~
hive> select count(ename) name from emp;
~~~

# Enabling reduce-output compression

When Hive writes output to a table, that output can also be compressed. The property hive.exec.compress.output controls this feature. You may want to keep its default value of false in the configuration file, so that default output remains uncompressed plain text, and set it to true in a query session or script whenever compressed output is needed.

1. Enable Hive final-output compression:
~~~
hive> set hive.exec.compress.output=true;
~~~
2. Enable MapReduce final-output compression:
~~~
hive> set mapreduce.output.fileoutputformat.compress=true;
~~~
3. Set the MapReduce final-output compression codec:
~~~
hive> set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
~~~
4. Set the final output to block compression:
~~~
hive> set mapreduce.output.fileoutputformat.compress.type=BLOCK;
~~~
5. Test whether the output is compressed:
~~~
hive> insert overwrite local directory '/root/distribute-result' select * from emp distribute by deptno sort by empno desc;
~~~
The resulting file contents are compressed.
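The `hive> set ...` commands in the two sections above only last for the current session. To make the settings permanent, the same properties can be placed in the cluster configuration; the following is a sketch of a hive-site.xml fragment using the property names from this page (whether some of the `mapreduce.*` entries belong in mapred-site.xml instead depends on your deployment):

```xml
<property>
  <name>hive.exec.compress.intermediate</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>hive.exec.compress.output</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>BLOCK</value>
</property>
```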
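Why map-output compression reduces shuffle traffic can be sketched with a small stand-alone example. Python's standard `zlib` is used here purely as a stand-in for snappy (the actual codec configured above is `org.apache.hadoop.io.compress.SnappyCodec`); the point is the lossless round trip and the size reduction, not the codec itself:

```python
import zlib

# Simulated map output: repetitive key-value lines, as shuffle data often is.
map_output = b"".join(b"deptno=10\tename=SMITH\n" for _ in range(1000))

# The compressed bytes are what would travel from map to reduce tasks.
compressed = zlib.compress(map_output)
ratio = len(compressed) / len(map_output)
print(f"raw={len(map_output)} compressed={len(compressed)} ratio={ratio:.3f}")

# Lossless: the reduce side recovers the exact original bytes.
assert zlib.decompress(compressed) == map_output
```

With highly repetitive records like these the compressed payload is a small fraction of the raw size, which is the saving the map/reduce data-transfer reduction above refers to.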
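The BLOCK setting in step 4 compresses many records together rather than one at a time, so redundancy across records compresses away and per-record codec overhead disappears. A minimal illustration of that effect, again using `zlib` as a stand-in for snappy:

```python
import zlib

# 1000 small, similar records, like rows of the emp table.
records = [f"7{i:03d},CLERK,20\n".encode() for i in range(1000)]

# RECORD-style: compress each record independently.
record_total = sum(len(zlib.compress(r)) for r in records)

# BLOCK-style: compress the records together as one stream.
block_total = len(zlib.compress(b"".join(records)))

print(f"per-record total={record_total} block={block_total}")

# Compressing records together is far smaller than one at a time.
assert block_total < record_total
```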