[TOC]

# Syntax

~~~
explain [extended | dependency | authorization] query
~~~

Adding `extended` shows more detailed information.

# Hive statement execution order

## MySQL statement execution order

The order the code is written in:

~~~
select ... from ... where ... group by ... having ... order by ...
~~~

or:

~~~
from ... select ...
~~~

The order the code is actually executed in:

~~~
from ... where ... group by ... having ... select ... order by ...
~~~

## Hive statement execution order

Roughly:

~~~
from ... where ... group by ... having ... select ... order by ...
from ... on ... join ... where ... group by ... having ... select ... distinct ... order by ... limit
~~~

# Viewing the execution plan with explain

## Example one

~~~
select count(1) from dw.fact_ord_arranged where dt = '20160101'
~~~

~~~
Explain
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:                 --------------- Map phase
          TableScan
            alias: fact_ord_arranged     --------------- table being scanned
            Statistics: Num rows: 0 Data size: 1379094784 Basic stats: PARTIAL Column stats: COMPLETE
            Select Operator
              Statistics: Num rows: 0 Data size: 1379094784 Basic stats: PARTIAL Column stats: COMPLETE
              Group By Operator
                aggregations: count(1)   --------------- aggregate function
                mode: hash
                outputColumnNames: _col0 --------------- temporary column
                Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
                Reduce Output Operator
                  sort order:
                  Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
                  value expressions: _col0 (type: bigint)
      Reduce Operator Tree:              --------------- Reduce phase
        Group By Operator
          aggregations: count(VALUE._col0)
          mode: mergepartial
          outputColumnNames: _col0
          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
          Select Operator
            expressions: _col0 (type: bigint)
            outputColumnNames: _col0
            Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
            File Output Operator
              compressed: false
              Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat --------------- output file format
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1                          --------------- the job has no limit, so nothing to do here
~~~

## Example two

~~~
explain
select city, ad_type, device, sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28' and type = 0 and media = 'sohu'
  and (deal_id = '' or deal_id = '-' or deal_id is NULL)
group by city, ad_type, device
~~~

~~~
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: tb_pmp_raw_log_basic_analysis
            Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (((deal_id = '') or (deal_id = '-')) or deal_id is null) (type: boolean)
              Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: city (type: string), ad_type (type: string), device (type: string), cnt (type: bigint)
                outputColumnNames: city, ad_type, device, cnt
                Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                Group By Operator
                  aggregations: sum(cnt)
                  keys: city (type: string), ad_type (type: string), device (type: string)
                  mode: hash
                  outputColumnNames: _col0, _col1, _col2, _col3
                  Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                  Reduce Output Operator
                    key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    sort order: +++
                    Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                    value expressions: _col3 (type: bigint)
      Reduce Operator Tree:
        Group By Operator
          aggregations: sum(VALUE._col0)
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3
          Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint)
            outputColumnNames: _col0, _col1, _col2, _col3
            Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: false
              Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
~~~

In detail:

**Map phase of Stage-1**

- TableScan: loads the table for `from`; the description includes the row count, data size, and so on.
- Filter Operator: applies the `where` conditions to filter the data; the description includes the concrete predicates plus row count and size.
- Select Operator: selects the columns; the description includes the column names and types, and the output types and sizes.
- Group By Operator: grouping; it describes the functions to compute after grouping, `keys` lists the grouping columns, and `outputColumnNames` gives the output column names. Note that columns default to fixed aliases such as `_col0`.
- Reduce Output Operator: the map-side local reduce; it aggregates locally, then routes rows by key column to the corresponding reducer.

**Reduce phase of Stage-1 (Reduce Operator Tree)**

- Group By Operator: the overall grouping and aggregation, merging the map-side results on the reduce side; the description is similar. `mode: mergepartial` means the partial map-side results are merged (the map side groups by hash).
- Select Operator: the final column selection for the output.
- File Output Operator: writes the result to a temporary file; the description covers the compression and output file formats.

Stage-0 has no second phase here; this is where an operation like `limit 100` would be applied.

Summary:

1. Each stage is an independent MR job. A complex HiveQL statement can produce multiple stages, and the plan's description shows what the concrete steps are.
2. The plan's row counts and sizes are estimates, not figures from an actual run, so they may be inaccurate.

# The MR generated by group by

It is usually best to write HiveQL with nested subqueries, loading data in stages so the data volume shrinks step by step; this may cost extra time, so it has to be designed with care. `group by` is itself a form of data filtering and can cut the data down dramatically, especially for deduplication. But the MR that `group by` generates is sometimes hard to control, and it is not obvious at which stage it works best; the map-side local reduce in particular goes a long way toward reducing data.

Above all, Hadoop MR "worries not about scarcity but about inequality": data skew is the biggest bottleneck of MR computation. In Hive you can use partitions, buckets, `distribute by`, and so on to control how data is assigned to reducers.

So, can the MR generated by `group by` be optimized? Compare the two pieces of code below.

Code 1:

~~~
explain
select advertiser_id, crt_id, ad_place_id, channel, ad_type, rtb_type, media, count(1) as cnt
from (
    select
        split(all,'\\\\|~\\\\|')[41] as advertiser_id,
        split(all,'\\\\|~\\\\|')[11] as crt_id,
        split(all,'\\\\|~\\\\|')[8]  as ad_place_id,
        split(all,'\\\\|~\\\\|')[34] as channel,
        split(all,'\\\\|~\\\\|')[42] as ad_type,
        split(all,'\\\\|~\\\\|')[43] as rtb_type,
        split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
    from tb_pmp_raw_log_bid_tmp tb
) a
group by advertiser_id, crt_id, ad_place_id, channel, ad_type, rtb_type, media;
~~~

Code 2:

~~~
explain
select
    split(all,'\\\\|~\\\\|')[41] as advertiser_id,
    split(all,'\\\\|~\\\\|')[11] as crt_id,
    split(all,'\\\\|~\\\\|')[8]  as ad_place_id,
    split(all,'\\\\|~\\\\|')[34] as channel,
    split(all,'\\\\|~\\\\|')[42] as ad_type,
    split(all,'\\\\|~\\\\|')[43] as rtb_type,
    split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
from tb_pmp_raw_log_bid_tmp tb
group by split(all,'\\\\|~\\\\|')[41], split(all,'\\\\|~\\\\|')[11], split(all,'\\\\|~\\\\|')[8], split(all,'\\\\|~\\\\|')[34], split(all,'\\\\|~\\\\|')[42], split(all,'\\\\|~\\\\|')[43], split(split(all,'\\\\|~\\\\|')[5],'/')[1]
~~~

Which is better: a subquery first and then `group by`, or `group by` directly on the expressions? From my own tests, with small data the first is slightly better; with large data the second may be better. As for what counts as large: anything under the TB level is small data.

Comparing the two execution plans, the amount of data analyzed at each step is roughly the same. `group by` definitely has to be used, but whether it runs inside or outside the subquery, earlier or later, makes little difference.
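The map-side local reduce and the skew problem discussed above are usually addressed with Hive's built-in group-by switches rather than by query rewriting alone. A minimal sketch using two standard Hive configuration properties:

~~~
-- Enable map-side partial aggregation (the "map-side local reduce" above);
-- it is on by default in recent Hive versions.
set hive.map.aggr=true;

-- When the group-by keys are skewed, split the work into two MR jobs:
-- the first distributes rows randomly to reducers for partial aggregation,
-- the second merges the partial results by key.
set hive.groupby.skewindata=true;
~~~

You can see the effect by running `explain` on the same query with and without these settings; with `hive.groupby.skewindata` enabled, the plan should gain an extra MR stage.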
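As a footnote to the skew discussion above: `distribute by` is the most direct way to control which reducer each row goes to. A minimal sketch reusing the table from example two (the column choice here is only illustrative):

~~~
-- Route every row with the same city to the same reducer,
-- then sort within each reducer (cheaper than a global order by).
select city, ad_type, cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28'
distribute by city
sort by city, ad_type;
~~~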