## Basic Usage

* Start Hive by running the `hive` command directly from the `bin` folder of the Hive installation directory.

  ```
  bin/hive
  ```

* Afterwards, a database is created inside the configured metastore database.

  ![](https://img.kancloud.cn/5d/b5/5db5d1e1a7a3834aa3da661bb0f16176_139x45.png)

* Create a new Hive database:

  ```
  hive> create database test_hive;
  ```

* Create a table. This table can be populated directly from a file — see below.

  ```
  create table players(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
  ```

* In Hive's data folder, create a file named `players` (columns separated by tabs):

  ```
  1	james
  2	zion
  3	davis
  4	george
  ```

* Then load the file into the `players` table:

  ```
  load data local inpath '/home/bizzbee/work/app/hive-1.1.0-cdh5.15.1/data/players' overwrite into table players;
  ```

* Running an aggregate query automatically generates a MapReduce job:

  ```
  hive> select count(1) from players;
  Query ID = bizzbee_20191105232020_fa9a96e2-3a68-4671-a4a5-df1e88145c50
  Total jobs = 1
  Launching Job 1 out of 1
  Number of reduce tasks determined at compile time: 1
  In order to change the average load for a reducer (in bytes):
    set hive.exec.reducers.bytes.per.reducer=<number>
  In order to limit the maximum number of reducers:
    set hive.exec.reducers.max=<number>
  In order to set a constant number of reducers:
    set mapreduce.job.reduces=<number>
  Starting Job = job_1572942693118_0001, Tracking URL = http://bizzbee:8088/proxy/application_1572942693118_0001/
  Kill Command = /home/bizzbee/work/app/hadoop-2.6.0-cdh5.15.1/bin/hadoop job -kill job_1572942693118_0001
  Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
  2019-11-05 23:21:13,111 Stage-1 map = 0%, reduce = 0%
  2019-11-05 23:21:25,470 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.96 sec
  2019-11-05 23:21:35,551 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.61 sec
  MapReduce Total cumulative CPU time: 11 seconds 610 msec
  Ended Job = job_1572942693118_0001
  MapReduce Jobs Launched:
  Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 11.61 sec HDFS Read: 7283 HDFS Write: 2 SUCCESS
  Total MapReduce CPU Time Spent: 11 seconds 610 msec
  OK
  4
  Time taken: 50.814 seconds, Fetched: 1 row(s)
  ```
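As a small sketch (the filename is the one used above; everything else is illustrative), the tab-delimited `players` file can be generated with `printf` rather than typed by hand. Since the table was declared with `FIELDS TERMINATED BY '\t'`, each line must contain a literal tab between the `id` and `name` columns:

```shell
# Hypothetical sketch: write the players data file in the format Hive expects.
# One record per line; a literal tab character separates the two columns.
printf '1\tjames\n2\tzion\n3\tdavis\n4\tgeorge\n' > players
```

A file written this way can be passed directly to the `load data local inpath` statement shown above.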