[TOC]

# 1. Setting up the Windows environment

1. Download the same Hadoop package used on Linux from the official archive: https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/

   ![](https://img.kancloud.cn/4d/05/4d05267dafd00df04cc8e9a1502c7463_1040x256.png)

   Windows and Linux use the same `.tar.gz` file.

2. Extract the package to the D: drive (or any other drive).

   ![](https://img.kancloud.cn/89/75/897519331713ae9c760ff3ad33299f98_1145x38.png)

3. Add `hadoop.dll` and `winutils.exe` to the `D:\hadoop-2.6.0-cdh5.14.2\bin` directory (both files can be found online).

4. Add Hadoop to the Windows environment variables.

   ![](https://img.kancloud.cn/5b/1f/5b1f39fcad70c06ff8644b324ee5fa52_841x219.png)
   ![](https://img.kancloud.cn/ad/48/ad486f29d72d09fc30df5c365044ba1b_1219x347.png)

<br/>

# 2. WordCount example code

Count the words under the `/input` directory. I placed two files in it, `hello001.txt` and `hello002.txt`, with identical contents:

```text
Hello BigData
Hello Hadoop MapReduce
Hello HDFS
BigData Hadoop
Hadoop MapReduce
```

1. Create a Maven project in IDEA.

   ![](https://img.kancloud.cn/ce/73/ce73b4a55efbe6cd9efbec22fd7e964d_1055x464.png)

2. Add the dependencies.

*`pom.xml`*

```xml
<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-auth</artifactId>
        <version>2.6.0</version>
    </dependency>
</dependencies>

<build>
    <pluginManagement>
        <!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
        <plugins>
            <!-- packaging plugin -->
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```

*`resources/log4j.properties`*

```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
```
3. Write the Java program.

*`com/exa/mapreduce001/WordCountMapper.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
 *
 * KEYIN:    the input key (the line's byte offset in the file)
 * VALUEIN:  the input value (the line itself)
 * KEYOUT:   the output key
 * VALUEOUT: the output value
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    Text keyOut = new Text();
    IntWritable valueOut = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. The Mapper reads the input one line at a time
        String line = value.toString();
        // 2. Split the line into words; \\s+ matches any kind of whitespace
        String[] words = line.split("\\s+");
        // 3. Write each word to the Context as a (word, 1) pair
        for (String word : words) {
            keyOut.set(word);
            context.write(keyOut, valueOut);
        }
    }
}
```

Between the map and reduce phases, the framework sorts the `(word, 1)` pairs and groups them by key, so each `reduce` call receives one word together with all of its 1s, e.g. `(Hello, [1, 1, 1, 1, 1, 1])`.

*`com/exa/mapreduce001/WordCountReducer.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    int sum;
    IntWritable count = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum up the 1s collected for this word
        sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        count.set(sum);
        context.write(key, count);
    }
}
```

*`com/exa/mapreduce001/WordCountDriver.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCountDriver {

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Load the configuration and create the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2. Point the job at the jar containing the Driver class
        job.setJarByClass(WordCountDriver.class);

        // 3. Set the Mapper and the Reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // 4. Set the Mapper's output types (key and value)
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5. Set the final output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6. Set the input and output paths
        //    Paths on a local drive must start with file:///
        FileInputFormat.setInputPaths(job, new Path("file:///D:\\IDEAWorkspace\\hadoop\\mapreduce001\\hadoop\\input"));
        // The job fails if the output path already exists
        FileOutputFormat.setOutputPath(job, new Path("file:///D:\\IDEAWorkspace\\hadoop\\mapreduce001\\hadoop\\output"));

        // 7. Submit the job and wait for it to finish
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```

The result is written to the `/output/part-r-00000` file:

```text
BigData	4
HDFS	2
Hadoop	6
Hello	6
MapReduce	4
```

The input and output paths in WordCountDriver above both point at local drives. To read the input from and write the output to HDFS instead, replace the code of step 6 with the following:

```java
// 6. Set the input and output paths
//    Both point at HDFS here; the values are passed in through main's args
FileInputFormat.setInputPaths(job, new Path(args[0]));
// The job fails if the output path already exists
FileOutputFormat.setOutputPath(job, new Path(args[1]));
```
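As an aside, not required for correctness: because summing is associative and commutative, `WordCountReducer` can also serve as a map-side combiner that pre-aggregates the `(word, 1)` pairs before the shuffle. A minimal sketch, assuming the driver above; it is one extra line next to `job.setReducerClass(...)`:

```java
// Optional map-side pre-aggregation: each mapper then emits partial sums
// such as (Hello, 3) instead of three separate (Hello, 1) pairs, shrinking
// the data moved in the shuffle. Safe here because adding partial counts
// yields the same final totals.
job.setCombinerClass(WordCountReducer.class);
```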
4. Package the project.

   ![](https://img.kancloud.cn/2b/7f/2b7f17da434f1cb859e4e476b5bd0bf2_935x436.png)
   ![](https://img.kancloud.cn/fa/66/fa665f7bb8521763f473d4f3419f1c38_1259x347.png)

5. Upload the jar to any directory on the Linux machine with Xftp.

6. Run the jar:

```shell
# /user/hadoop/input  is what gets passed to args[0]
# /user/hadoop/output is what gets passed to args[1]
# Both are HDFS paths, not paths on the local file system
hadoop jar com-exa-mapreduce001-1.0-SNAPSHOT.jar com.exa.mapreduce001.WordCountDriver /user/hadoop/input /user/hadoop/output
```
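To double-check the result without the HDFS shell, the output can also be read back from Java through Hadoop's `FileSystem` API. A minimal sketch, assuming the job above ran with `/user/hadoop/output` as `args[1]`; the class name `ReadOutput` and the NameNode address `hdfs://localhost:8020` are placeholders to adjust for your cluster:

```java
package com.exa.mapreduce001;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ReadOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; use your cluster's fs.defaultFS
        conf.set("fs.defaultFS", "hdfs://localhost:8020");
        FileSystem fs = FileSystem.get(conf);
        // With a single reducer the job writes one file, part-r-00000
        Path result = new Path("/user/hadoop/output/part-r-00000");
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(fs.open(result)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // e.g. "Hello	6"
            }
        }
    }
}
```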