After a NameNode failure, data can be recovered in either of the following two ways.

**Method One: copy the SecondaryNameNode's data into the NameNode's storage directory.**

(1) `kill -9` the NameNode process.

(2) Delete the data stored by the NameNode (`{hadoop_home}/data/tmp/dfs/name`):

```shell
[root@hadoop101 hadoop]$ rm -rf /opt/install/hadoop/data/tmp/dfs/name/*
```

(3) Copy the SecondaryNameNode's data into the original NameNode storage directory. In this cluster the NameNode runs on hadoop101 and the SecondaryNameNode on hadoop104:

```shell
[root@hadoop101 dfs]$ scp -r hadoop@hadoop104:/opt/install/hadoop/data/tmp/dfs/namesecondary/* ./name/
```

(4) Restart the NameNode:

```shell
[root@hadoop101 hadoop]$ sbin/hadoop-daemon.sh start namenode
```

**Method Two: start the NameNode daemon with the `-importCheckpoint` option, which copies the SecondaryNameNode's data into the NameNode directory.**

(1) Edit hdfs-site.xml:

```xml
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>120</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/opt/install/hadoop/data/tmp/dfs/name</value>
</property>
```

(2) `kill -9` the NameNode process.

(3) Delete the data stored by the NameNode (`/opt/install/hadoop/data/tmp/dfs/name`):

```shell
[root@hadoop101 hadoop]$ rm -rf /opt/install/hadoop/data/tmp/dfs/name/*
```

(4) If the SecondaryNameNode is not on the same host as the NameNode, copy the SecondaryNameNode's storage directory to the directory at the same level as the NameNode's storage directory, and delete the `in_use.lock` file:

```shell
[root@hadoop101 dfs]$ scp -r hadoop@hadoop104:/opt/install/hadoop/data/tmp/dfs/namesecondary ./
[root@hadoop101 namesecondary]$ rm -rf in_use.lock
[root@hadoop101 dfs]$ pwd
/opt/install/hadoop/data/tmp/dfs
[kgc@hadoop101 dfs]$ ls
data  name  namesecondary
```

(5) Import the checkpoint data (wait a moment, then Ctrl+C to stop it):

```shell
[root@hadoop101 hadoop]$ bin/hdfs namenode -importCheckpoint
```

(6) Start the NameNode:

```shell
[root@hadoop101 hadoop]$ sbin/hadoop-daemon.sh start namenode
```
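The steps of Method One could be consolidated into a single script. The sketch below is not part of the original walkthrough: the `run` helper, the `DRY_RUN` flag, and the default paths and host (taken from the example cluster above) are illustrative assumptions, so adjust them before real use.

```shell
#!/bin/sh
# Hypothetical recovery helper for Method One; defaults mirror the
# example cluster (NameNode on hadoop101, SecondaryNameNode on hadoop104).
NN_DIR=${NN_DIR:-/opt/install/hadoop/data/tmp/dfs/name}
SNN_HOST=${SNN_HOST:-hadoop104}
SNN_DIR=${SNN_DIR:-/opt/install/hadoop/data/tmp/dfs/namesecondary}
HADOOP_HOME=${HADOOP_HOME:-/opt/install/hadoop}
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands

# Print the command in dry-run mode; otherwise execute it.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Wipe the dead NameNode's metadata directory.
run rm -rf "$NN_DIR"
# 2. Pull the SecondaryNameNode's checkpoint data into its place.
run scp -r "hadoop@$SNN_HOST:$SNN_DIR/." "$NN_DIR"
# 3. Restart the NameNode daemon.
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
```

With the default `DRY_RUN=1` the script only prints the commands it would run, which makes it safe to review before pointing it at a live cluster.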