# MapReduce Tutorial
[TOC]
## Purpose
This document comprehensively describes all user-facing facets of the Hadoop MapReduce framework and serves as a tutorial.
## Prerequisites
Ensure that Hadoop is installed, configured and running. For more details, see:
* [Single Node Setup](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html) for first-time users.
* [Cluster Setup](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html) for large, distributed clusters.
## Overview
Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
A MapReduce *job* usually splits the input data-set into independent chunks which are processed by the *map tasks* in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the *reduce tasks*. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.
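To make the map/sort/reduce flow concrete, here is a minimal sketch along the lines of the classic WordCount example (the class and method names below are illustrative, not part of this document): the mapper emits a `(word, 1)` pair for every token, the framework groups the pairs by key, and the reducer sums the counts for each word.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map task: tokenize each input line and emit (word, 1).
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce task: the framework has already sorted and grouped the map
  // output by key, so each call receives one word and all of its counts.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
}
```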
Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see [HDFS Architecture Guide](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
The MapReduce framework consists of a single master *ResourceManager*, one slave *NodeManager* per cluster-node, and an *MRAppMaster* per application (see [YARN Architecture Guide](http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)).
Minimally, applications specify the input/output locations and supply *map* and *reduce* functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the *job configuration*.
The Hadoop *job client* then submits the job (jar/executable etc.) and configuration to the *ResourceManager*, which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, and providing status and diagnostic information to the job-client.
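A sketch of the corresponding job client, reusing the illustrative WordCount names from the example above: it builds the job configuration, declares the mapper/reducer classes and output types, points the job at its input/output paths, and submits it; the framework then handles distribution, scheduling and monitoring.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Job configuration: which classes implement map and reduce,
    // and the types of the final output key/value pairs.
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.IntSumReducer.class);
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Input/output locations, e.g. HDFS paths passed on the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Submit the job and wait for completion.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```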
Although the Hadoop framework is implemented in Java™, MapReduce applications need not be written in Java.
* [Hadoop Streaming](http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/streaming/package-summary.html) is a utility which allows users to create and run jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer.
* [Hadoop Pipes](http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapred/pipes/package-summary.html) is a [SWIG](http://www.swig.org/)-compatible C++ API to implement MapReduce applications (non JNI™ based).