## Overview

![](https://img.kancloud.cn/c6/25/c6258d658fdd2f2e7ed0c8f49b7e9667_543x253.png)
![](https://img.kancloud.cn/4f/be/4fbeedf9aa3a2d96466eb1e3370c6458_1058x311.png)

> Moving computation is cheaper than moving data.

## HDFS Architecture

1. NameNode (master) and DataNodes (slaves)
2. A master/slave architecture
3. NN:
   > manages the file system namespace, e.g. /home/hadoop/software, /app
   > regulates access to files by clients
4. DN: storage
5. HDFS exposes a file system namespace and allows user data to be stored in files.
6. A file is split into one or more blocks
   > blocksize: 128M
   > a 150M file is split into 2 blocks
7. Blocks are stored in a set of DataNodes. Why? Fault tolerance!
8. The NameNode executes file system namespace operations: CRUD
9. It also determines the mapping of blocks to DataNodes
   > a.txt is 150M, blocksize=128M
   > a.txt is split into 2 blocks: block1 (128M) and block2 (22M)
   > Which DN stores block1? Which DN stores block2?
   >
   > a.txt
   > block1: 128M, 192.168.199.1
   > block2: 22M, 192.168.199.2
   >
   > get a.txt
   >
   > This whole process is invisible to the user.
10. Normally, one component is deployed per node.

## Development Environment

Linux

> First create a few directories:

```
mkdir software app data lib shell maven_resp
```

**Very important!!! Change the Linux hostname to one without underscores, otherwise there will be problems!!!**

### Setting up the Hadoop environment

* Hadoop distribution used: CDH
* CDH package downloads: http://archive.cloudera.com/cdh5/cdh/5/
* Hadoop version: hadoop-2.6.0-cdh5.15.1
* Hadoop download: wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.15.1.tar.gz
* Hive version: hive-1.1.0-cdh5.15.1

### Prerequisites for installing Hadoop

* Java 1.8+
* ssh

### Installing Java

* Extract the archive.
![](https://img.kancloud.cn/f3/e4/f3e44c5e2a3ea7103808f2c0d5402677_571x19.png)
* Register the environment variables.
![](https://img.kancloud.cn/bc/cb/bccb7517d646e7a1a26b14dd8b08b430_381x263.png)
* Make the environment variables take effect.

### Configuring passwordless login

> If there is no .ssh folder in the home directory, run `ssh localhost` once and it will be generated automatically.

```
ssh-keygen -t rsa    # press Enter through all the prompts
```

```
cd ~/.ssh
[hadoop@hadoop000 .ssh]$ ll
total 12
-rw------- 1 hadoop hadoop 1679 Oct 15 02:54 id_rsa        # private key
-rw-r--r-- 1 hadoop hadoop  398 Oct 15 02:54 id_rsa.pub    # public key
-rw-r--r-- 1 hadoop hadoop  358 Oct 15 02:54 known_hosts
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
```

* After this, logging in to the local machine no longer asks for a password. Connecting from the Mac still does:

```
ssh root@139.155.58.151
```

## Installing Hadoop
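The block-splitting arithmetic from the HDFS architecture section above (a 150M file with a 128M blocksize yields a 128M block and a 22M block, each placed on a DataNode) can be sketched as follows. This is an illustration only, not HDFS code; the DataNode addresses are the example ones from the notes.

```python
# Sketch of how HDFS splits a file into fixed-size blocks.
# Example from the notes: a.txt is 150 MB, blocksize is 128 MB.

def split_into_blocks(file_size_mb: int, block_size_mb: int = 128) -> list:
    """Return the sizes (in MB) of the blocks a file is split into."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

# The NameNode records the block -> DataNode mapping; the user never sees it.
blocks = split_into_blocks(150)
mapping = dict(zip(blocks, ["192.168.199.1", "192.168.199.2"]))
print(blocks)   # [128, 22]
print(mapping)  # {128: '192.168.199.1', 22: '192.168.199.2'}
```

A 300M file would split the same way into 128 + 128 + 44, which is why block placement across several DataNodes gives fault tolerance.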
![](https://img.kancloud.cn/08/01/080193a0e8d9a594d729f38bcdc83770_574x28.png)

* Extract to ~/app:

```
tar -zxvf hadoop-2.6.0-cdh5.15.1.tar.gz.1 -C ~/app/
```

* Add HADOOP_HOME/bin to the system environment variables:

```
vim ~/.bash_profile
```

```
export HADOOP_HOME=/root/app/hadoop-2.6.0-cdh5.15.1
export PATH=$HADOOP_HOME/bin:$PATH
```

```
source ~/.bash_profile
```

* Edit the Hadoop configuration files under `/root/app/hadoop-2.6.0-cdh5.15.1/etc/hadoop`:

```
vim hadoop-env.sh
```

![](https://img.kancloud.cn/4e/de/4ede68361fd4522ffa6a5c28ad0955f4_292x59.png)

```
<!-- core-site.xml: configure the host -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://127.0.0.1:8020</value>
</property>

<!-- hdfs-site.xml: change the replication factor to 1 -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<!-- change the default temporary directory to the newly created folder -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/app/tmp</value>
</property>
```

* In the `slaves` file: hadoop000
* Start HDFS. The first time, you must format the file system; do not run this repeatedly:

```
hdfs namenode -format
```

![](https://img.kancloud.cn/e3/c0/e3c08baf021b792d7d214cb26457a63f_983x209.png)

* This generates some content in the configured tmp directory.
* Start the cluster: `$HADOOP_HOME/sbin/start-dfs.sh`
* Verify:

```
[hadoop@hadoop000 sbin]$ jps
60002 DataNode
60171 SecondaryNameNode
59870 NameNode
```

http://139.155.58.151:50070

If `jps` looks fine but the browser does not, it is most likely a firewall problem.

* Check the firewall status: `sudo firewall-cmd --state`
* Stop the firewall: `sudo systemctl stop firewalld.service`

![](https://img.kancloud.cn/86/a9/86a9bb7bdaa91725cf9b96e445bf4545_1158x458.png)

### Common directories in the Hadoop package

```
bin:        hadoop client commands
etc/hadoop: hadoop configuration files
sbin:       scripts that start hadoop processes
share:      common examples
```

### Stopping HDFS

* The relationship between start/stop-dfs.sh and hadoop-daemons.sh:

```
start-dfs.sh =
    hadoop-daemons.sh start namenode
    hadoop-daemons.sh start datanode
    hadoop-daemons.sh start secondarynamenode
stop-dfs.sh = ....
```

> The video course reaches 3-10.

[Hostname change issue](https://blog.csdn.net/snowlive/article/details/69662882#hadoop%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3%E4%B8%BB%E6%9C%BA%E5%90%8D%E6%9B%B4%E6%94%B9)
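The verification step above checks that `jps` lists the three HDFS daemons. That check can be scripted; a minimal sketch, assuming `jps` output in the "pid name" format shown in the notes (the function name `missing_daemons` is made up for this example):

```python
# Check that the three HDFS daemons appear in `jps` output.

EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode"}

def missing_daemons(jps_output: str) -> set:
    """Return the expected daemon names absent from the jps output."""
    running = set()
    for line in jps_output.strip().splitlines():
        parts = line.split()
        if len(parts) == 2:          # lines look like "60002 DataNode"
            running.add(parts[1])
    return EXPECTED - running

sample = """60002 DataNode
60171 SecondaryNameNode
59870 NameNode"""
print(missing_daemons(sample))   # set() -> all three daemons are running
```

If the result is non-empty after `start-dfs.sh`, check the daemon logs; if it is empty but the web UI at port 50070 is unreachable, suspect the firewall as described above.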