### Integrating Hive with a MySQL metastore

By default, Hive stores its metadata in the embedded Derby database, which allows only one session to connect at a time. That is not workable in a real production environment, so to support multiple sessions you need a standalone metastore database; here we use MySQL.

> Hive has good built-in support for MySQL.

First install MySQL; a Docker-based instance is sufficient (a hedged `docker run` sketch is included at the end of this post).

- Create a database to hold Hive's metadata, plus a user for Hive to connect as:

```mysql
create database hive;
CREATE USER 'hadoop'@'%' IDENTIFIED BY 'mysql';
GRANT ALL PRIVILEGES ON *.* TO 'hadoop'@'%' WITH GRANT OPTION;
flush privileges;
```

### Edit hive-env.sh

```shell
export JAVA_HOME=/opt/tools/jdk1.8.0_131                      ## Java home
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.0-cdh5.4.0          ## Hadoop installation path
export HIVE_HOME=/opt/hadoop/apache-hive-2.1.1-bin            ## Hive installation path
export HIVE_CONF_DIR=/opt/hadoop/apache-hive-2.1.1-bin/conf   ## Hive configuration directory
```

### Create the following directories in HDFS and grant permissions

```sh
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
```

### Edit hive-site.xml

- Set the following properties in hive-site.xml to the directories created in the previous step.

```xml
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/user/hive/log</value>
</property>
```

- Also configure the MySQL connection information in hive-site.xml:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hadoop</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mysql</value>
</property>
```

- Create the tmp directory:

```sh
mkdir /home/hadoop/hive-2.1.1/tmp
```

### Initialize the Hive schema

Run `schematool -dbType mysql -initSchema`. (A hedged verification snippet follows the references below.)

### References

[hive2.1.1 deployment and installation](https://blog.csdn.net/u013310025/article/details/70306421)

[Hive installation and configuration](http://www.cnblogs.com/kinginme/p/7233315.html)

### Create a table and load data

```sql
create table dep(id int, name string) row format delimited fields terminated by '\t';
load data local inpath '/home/hadoop/hivetestdata/people.txt' into table hive_test.dep;
```

[Clearing up a few points of confusion in the Hive installation process](http://www.aboutyun.com/thread-10937-1-1.html)
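The post suggests that a Docker-based MySQL is enough for the metastore but does not show the command. A minimal sketch, assuming MySQL 5.7; the container name, published port, and root password below are my assumptions, not from the original:

```sh
# Hypothetical example: start a MySQL 5.7 container to back the Hive metastore.
# Container name, port mapping, and MYSQL_ROOT_PASSWORD are assumptions.
docker run -d --name mysql-hive \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=root \
  mysql:5.7

# Open a MySQL shell inside the container, then run the "create database hive;"
# and user-grant statements from the first step of this post.
docker exec -it mysql-hive mysql -uroot -proot
```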
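After `schematool -dbType mysql -initSchema` completes, one way to sanity-check it is to look for the metastore tables it creates in the `hive` database; `DBS`, `TBLS`, and `VERSION` are part of the standard metastore schema. The connection details below assume the Docker container and the `hadoop`/`mysql` credentials used earlier in this post:

```sh
# Hypothetical check: list the metastore tables and the recorded schema version.
docker exec -it mysql-hive mysql -uhadoop -pmysql \
  -e "SHOW TABLES IN hive; SELECT * FROM hive.VERSION;"
```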
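Once the `dep` table has been created and loaded, a quick smoke test is to query it from the Hive CLI. This sketch assumes the table lives in the `hive_test` database referenced by the load statement above, and that `people.txt` is a tab-separated file of `id<TAB>name` rows matching the table definition:

```sh
# Hypothetical smoke test: count and preview the loaded rows.
hive -e "USE hive_test; SELECT COUNT(*) FROM dep; SELECT * FROM dep LIMIT 5;"
```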