If you have not configured environment variables for HDFS, switch to the Hadoop installation directory before running shell commands.

```shell
[root@hadoop101 rovt]# cd /opt/install/hadoop
[root@hadoop101 hadoop]# pwd
/opt/install/hadoop
```

**Basic syntax**

```shell
bin/hdfs dfs <command>
```

**Command overview**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
```

**Starting the Hadoop cluster**

```shell
# Start the HDFS cluster
[root@hadoop101 hadoop]# sbin/start-dfs.sh
# Start the YARN cluster
[root@hadoop101 hadoop]# sbin/start-yarn.sh
```

**Stopping the Hadoop cluster**

```shell
# Stop the HDFS cluster
[root@hadoop101 hadoop]# sbin/stop-dfs.sh
# Stop the YARN cluster
[root@hadoop101 hadoop]# sbin/stop-yarn.sh
```

**Starting a single daemon**

```shell
# Start only the NameNode
[root@hadoop101 hadoop]# sbin/hadoop-daemon.sh start namenode
# Start only a DataNode
[root@hadoop101 hadoop]# sbin/hadoop-daemon.sh start datanode
# Start only the ResourceManager
[root@hadoop101 hadoop]# sbin/yarn-daemon.sh start resourcemanager
# Start only a NodeManager
[root@hadoop101 hadoop]# sbin/yarn-daemon.sh start nodemanager
```

**`-help`: show what parameters a command accepts**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -help cat
-cat [-ignoreCrc] <src> ... :
  Fetch all files that match the file pattern <src> and display their content
  on stdout
```

**`-ls`: list a directory, e.g. the root directory /**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - root supergroup          0 2020-12-23 10:27 /data
drwxr-xr-x   - root supergroup          0 2020-12-28 19:47 /hbase
drwx-wx-wx   - root supergroup          0 2020-12-10 09:17 /home
drwxrwxrwx   - root supergroup          0 2020-12-23 19:30 /tmp
drwx------   - root supergroup          0 2020-12-23 14:44 /user
```

**`-mkdir`: create an HDFS directory**

```shell
# -p creates parent directories as needed
[root@hadoop101 hadoop]# bin/hdfs dfs -mkdir -p /test1/test2
```

**`-moveFromLocal`: move a local file to HDFS**

Note: after the move, the file no longer exists locally.

```shell
# /hdatas/logs2.txt is a local Linux path
# /test1/test2 is an HDFS directory
[root@hadoop101 hadoop]# bin/hdfs dfs -moveFromLocal /hdatas/logs2.txt /test1/test2
```

**`-appendToFile`: append the contents of a local file to an existing HDFS file**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -appendToFile /hdatas/logs.txt /test1/test2/logs2.txt
```

**`-cat` (or `-text`): print a file's contents**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -cat /test1/test2/logs2.txt
1,1
2,1
3,1
4,2
5,1
6,2
```

**`-tail`: show the end of a file**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -tail /test1/test2/logs2.txt
1,1
2,1
3,1
4,2
5,1
6,2
```

**`-chgrp`, `-chmod`, `-chown`: change file permissions and ownership, with the same usage as in a Linux file system**

```shell
# Change the permission bits of logs2.txt
[root@hadoop101 hadoop]# bin/hdfs dfs -chmod 666 /test1/test2/logs2.txt
# Change the owner:group of logs2.txt
[root@hadoop101 hadoop]# bin/hdfs dfs -chown rovt:rovt /test1/test2/logs2.txt
# Change only the owner
[root@hadoop101 hadoop]# bin/hdfs dfs -chown root /test1/test2/logs2.txt
# Change only the group
[root@hadoop101 hadoop]# bin/hdfs dfs -chgrp root /test1/test2/logs2.txt
```

**`-copyFromLocal`: copy a local file to HDFS**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -copyFromLocal /hdatas/logs.txt /test1/test2
```

**`-put`: equivalent to `-copyFromLocal`**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -put /hdatas/merge.txt /test1/test2
```

**`-copyToLocal`: copy from HDFS to the local file system**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -copyToLocal /test1/test2/logs2.txt /hdatas/
```

**`-get`: equivalent to `-copyToLocal`, i.e. download a file from HDFS to the local file system**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -get /test1/test2/logs2.txt /hdatas/
```

**`-cp`: copy from one HDFS path to another**

```shell
# Copy /test1/test2/logs2.txt to /test1/logs2.txt; the /test1/ directory must
# already exist, otherwise the copy fails
# If /test1/logs2.txt already exists, the copy also fails
[root@hadoop101 hadoop]# bin/hdfs dfs -cp /test1/test2/logs2.txt /test1/logs2.txt
```

**`-mv`: move a file within HDFS**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -mv /test1/test2/logs.txt /test1/
```

**`-getmerge`: download all files under /test1/ and its subdirectories from HDFS and merge them into a single local file merge.txt**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -getmerge /test1/* /hdatas/merge.txt
```

**`-du`: report directory sizes**

```shell
# -s: summary only; sizes are in bytes
[root@hadoop101 hadoop]# bin/hdfs dfs -du -s -h /test1/
330  330  /test1
[root@hadoop101 hadoop]# bin/hdfs dfs -du -h /test1/
33   33   /test1/logs.txt
66   66   /test1/logs2.txt
231  231  /test1/test2
```

**`-setrep`: set the replication factor of an HDFS file**

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -setrep 10 /test1/test2/logs2.txt
```

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replica count actually reach 10.

**`-rm`: delete files or directories**

```shell
# Delete merge.txt
[root@hadoop101 hadoop]# bin/hdfs dfs -rm /test1/test2/merge.txt
# Delete the test1 directory and its subdirectories, whether or not they are empty
[root@hadoop101 hadoop]# bin/hdfs dfs -rm -R /test1/
```

**`-rmdir`: delete an empty directory**

A non-empty directory cannot be deleted this way.

```shell
[root@hadoop101 hadoop]# bin/hdfs dfs -rmdir /test1/test2
```
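The effect of `-getmerge` above is simply to concatenate the matched files into one local destination file. That behavior can be sketched locally with plain `cat`, no cluster required; the `/tmp/getmerge_demo` paths and the file contents below are hypothetical, chosen only for illustration:

```shell
# Local simulation of hdfs dfs -getmerge: concatenate several source files
# into a single destination file. All paths here are hypothetical demo paths.
mkdir -p /tmp/getmerge_demo
printf '1,1\n2,1\n' > /tmp/getmerge_demo/logs.txt
printf '3,1\n4,2\n' > /tmp/getmerge_demo/logs2.txt
# -getmerge concatenates the matched files into one local file; plain cat
# over the same file list produces the equivalent result locally.
cat /tmp/getmerge_demo/logs.txt /tmp/getmerge_demo/logs2.txt > /tmp/getmerge_demo/merge.txt
cat /tmp/getmerge_demo/merge.txt
```

In practice, `-getmerge` is handy for collecting the many `part-*` output files of a MapReduce job into a single local file for inspection.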