<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ThinkChat2.0新版上線,更智能更精彩,支持會話、畫圖、視頻、閱讀、搜索等,送10W Token,即刻開啟你的AI之旅 廣告
                ![bigdata](https://img.kancloud.cn/e5/99/e5990d45c6572ac922966eecb6c1db86_1068x699.png) # 一、關閉防火墻開機自啟 ```shell systemctl disable firewalld ``` # 二、`[克隆機重設]`設定主機名 ```shell echo "hadoop01" >/etc/hostname ``` ```shell echo "hadoop02" >/etc/hostname ``` ```shell echo "hadoop03" >/etc/hostname ``` # 三、配置<u>*hosts*</u>文件 ```shell echo "192.168.8.101 hadoop01" >>/etc/hosts echo "192.168.8.102 hadoop02" >>/etc/hosts echo "192.168.8.103 hadoop03" >>/etc/hosts ``` # 四、`[克隆機重設]`固定IP地址 ```shell vi /etc/sysconfig/network-scripts/ifcfg-ens33 ``` ①、將`BOOTPROTO`=`dhcp`修改為`BOOTPROTO`=`static` 。 ②、將`ONBOOT`=`no`修改為`ONBOOT`=`yes` 。 ```shell echo "IPADDR=192.168.8.101" >>/etc/sysconfig/network-scripts/ifcfg-ens33 echo "NETMASK=255.255.255.0" >>/etc/sysconfig/network-scripts/ifcfg-ens33 echo "GATEWAY=192.168.8.1" >>/etc/sysconfig/network-scripts/ifcfg-ens33 echo "DNS1=192.168.8.1" >>/etc/sysconfig/network-scripts/ifcfg-ens33 ``` # 五、配置<u>*Java*</u> ①、解壓 — ②、改名 — ③、刪源 ```shell tar -zxvf jdk-8u231-linux-x64.tar.gz -C /opt/ \ && mv /opt/jdk1.8.0_231 /opt/java \ && rm -rf jdk-8u231-linux-x64.tar.gz ``` ①、添加`JAVA_HOME` — ②、配置`PATH` — ③、配置生效 ```shell echo "export JAVA_HOME=/opt/java" >>/etc/profile \ && echo "export PATH=\$PATH:\$JAVA_HOME/bin" >>/etc/profile \ && source /etc/profile ``` # 六、簡配<u>*Hadoop*</u> ①、解壓 — ②、改名 — ③、刪源 ```bash tar -zxvf hadoop-2.7.5.tar.gz -C /opt/ \ && mv /opt/hadoop-2.7.5 /opt/hadoop \ && rm -rf hadoop-2.7.5.tar.gz ``` ①、添加`HADOOP_HOME` — ②、配置`PATH` — ③、配置生效 ```shell echo "export HADOOP_HOME=/opt/hadoop" >>/etc/profile \ && echo "export PATH=\$PATH:\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin" >>/etc/profile \ && source /etc/profile ``` # 七、配置SSH免密(操作前重啟) ```shell ssh-keygen -t rsa ``` > .ssh # 權限為`700` ```shell ssh-copy-id hadoop01 ``` > authorized_keys # 權限為`600` 通過SSH執行命令環境變量失效問題 解決(一):在執行命令前,先執行 `source /etc/profile` 。 解決(二):環境變量配置在 `/etc/bashrc` 文件中。 # 八、`[克隆機重設]`同步時間 ```shell rpm -qa | grep ntp # 查詢是否安裝了ntp服務 ``` [ntp-4.2.6p5-29.el7.centos.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/ntp-4.2.6p5-29.el7.centos.x86_64.rpm) 依賴 ①、[autogen-libopts-5.18-5.el7.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/autogen-libopts-5.18-5.el7.x86_64.rpm)和②、[ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm) ```shell # 安裝軟件 rpm -ivh ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm rpm -ivh autogen-libopts-5.18-5.el7.x86_64.rpm rpm -ivh ntp-4.2.6p5-29.el7.centos.x86_64.rpm ``` ```shell vi /etc/ntp.conf ``` 放開注釋 ``` # Hosts on local network are less restricted. #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap ``` 改后結果 ``` # Hosts on local network are less restricted. restrict 192.168.8.0 mask 255.255.255.0 nomodify notrap ``` > IP地址192.168.8.1-192.168.8.254,默認網關255.255.255.0的服務器都可以使用NTP服務器同步時間。 注釋掉 ``` # Use public servers from the pool.ntp.org project. # Please consider joining the pool (http://www.pool.ntp.org/join.html). server 0.centos.pool.ntp.org iburst server 1.centos.pool.ntp.org iburst server 2.centos.pool.ntp.org iburst server 3.centos.pool.ntp.org iburst ``` 改后結果 ```shell # Use public servers from the pool.ntp.org project. # Please consider joining the pool (http://www.pool.ntp.org/join.html). # server 0.centos.pool.ntp.org iburst # server 1.centos.pool.ntp.org iburst # server 2.centos.pool.ntp.org iburst # server 3.centos.pool.ntp.org iburst server 127.127.1.0 fudge 127.127.1.0 stratum 10 ``` > 當服務器與公用的時間服務器失去聯系時以本地時間為客戶端提供時間服務 ```shell vi /etc/sysconfig/ntpd ``` 添加一行,同步硬件時鐘 ``` SYNC_HWCLOCK=yes ``` ```shell systemctl start ntpd # 啟動服務 systemctl enable ntpd # 開機自啟 ``` 定時任務(主節點不用操作,從節點操作) ```shell crontab -e ``` 每隔10分鐘同步一次時間 ``` */10 * * * * /usr/sbin/ntpdate hadoop01 ``` ```shell date -s "2010-05-05 12:12:12" ``` # 九、配置<u>*Hadoop*</u> ![configfile](https://img.kancloud.cn/6b/b9/6bb920c88d93280c553dc54325c9b59a_1128x833.png) ### hadoop-env.sh ```shell vi /opt/hadoop/etc/hadoop/hadoop-env.sh ``` ```shell export JAVA_HOME=/opt/java ``` ### core-site.xml ```shell vi /opt/hadoop/etc/hadoop/core-site.xml ``` ```xml <property> <name>fs.defaultFS</name> <value>hdfs://hadoop01:9000</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/opt/hadoop/data/tmp</value> </property> ``` ### hdfs-site.xml ```shell vi /opt/hadoop/etc/hadoop/hdfs-site.xml ``` ```xml <property> <name>dfs.replication</name> <value>2</value> </property> <property> <name>dfs.namenode.secondary.http-address</name> <value>hadoop02:50090</value> </property> ``` ### yarn-env.sh ```shell vi /opt/hadoop/etc/hadoop/yarn-env.sh ``` ```shell export JAVA_HOME=/opt/java ``` ### yarn-site.xml ```shell vi /opt/hadoop/etc/hadoop/yarn-site.xml ``` ```xml <property> <name>yarn.resourcemanager.hostname</name> <value>hadoop01</value> </property> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> ``` ### mapred-env.sh ```shell vi /opt/hadoop/etc/hadoop/mapred-env.sh ``` ```shell export JAVA_HOME=/opt/java ``` ### mapred-site.xml.template ```shell cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml vi /opt/hadoop/etc/hadoop/mapred-site.xml ``` ```xml <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> ``` ### slaves ```shell echo "hadoop01" >/opt/hadoop/etc/hadoop/slaves echo "hadoop02" >>/opt/hadoop/etc/hadoop/slaves echo "hadoop03" >>/opt/hadoop/etc/hadoop/slaves ``` ##### 普通復制 ```shell cp -r source destination ``` ##### 遠程復制 source和destination都可以是遠程主機 ```shell scp -r source destination ``` ```shell scp -r root@hadoop01:/root/test root@hadoop03:/root ``` ##### 歸檔拷貝 source和destination只能有一個是遠程主機 ```shell rsync -av source destination ``` ```shell rsync -av -r root@hadoop01:/root/test /root/ ```
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看