Reference: https://blog.csdn.net/tanggao1314/article/details/51340672
An inverted index looks up documents by the words they contain. Because the mapping runs from content back to documents, the reverse of determining what a document contains, it is called an inverted index.
Let's walk through an example to see how one is built.
I prepared two files for this, 1.txt and 2.txt.
The content of 1.txt is:
```
I Love Hadoop
I like ZhouSiYuan
I love me
```
The content of 2.txt is:
```
I Love MapReduce
I like NBA
I love Hadoop
```
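Conceptually, the index we want to build maps each word to the documents that contain it, together with how many times the word appears in each document. Here is a minimal single-machine sketch in plain Java of that target structure (the class name InMemoryIndexSketch and the hard-coded strings are just for illustration):
```
import java.util.HashMap;
import java.util.Map;

public class InMemoryIndexSketch {
    public static void main(String[] args) {
        // word -> (document -> number of occurrences in that document)
        Map<String, Map<String, Integer>> index = new HashMap<>();
        addDocument(index, "1.txt", "I Love Hadoop I like ZhouSiYuan I love me");
        addDocument(index, "2.txt", "I Love MapReduce I like NBA I love Hadoop");
        // e.g. index.get("I") contains {1.txt=3, 2.txt=3}
        System.out.println("I -> " + index.get("I"));
        System.out.println("Hadoop -> " + index.get("Hadoop"));
    }

    static void addDocument(Map<String, Map<String, Integer>> index, String doc, String text) {
        for (String word : text.split("\\s+")) {
            index.computeIfAbsent(word, w -> new HashMap<>()).merge(doc, 1, Integer::sum);
        }
    }
}
```
The MapReduce job in the rest of this section produces the same kind of mapping, just computed in a distributed way and written out as text.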
I use the default input format, TextInputFormat, which reads the input one line at a time; the key is the byte offset of the line within the file. If this is unfamiliar, see my earlier posts:
How MapReduce Works (MapReduce工作原理)
Hadoop Data Types and Custom Input and Output (Hadoop數據類型和自定義輸入輸出)
So the map stage receives the following input.
Map input from 1.txt:
```
0 I Love Hadoop
15 I like ZhouSiYuan
34 I love me
```
Map input from 2.txt:
```
0 I Love MapReduce
18 I like NBA
30 I love Hadoop
```
Map stage:
The value is the term frequency.
The key is the word combined with the file URI.
For example:
key: I:hdfs://192.168.52.140:9000/index/2.txt    value: 1
Why set up the key and value this way?
Because this design lets the MapReduce framework's built-in map-side sorting (and grouping) collect the frequencies of the same word in the same file into one list.
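As a rough standalone sketch of what this map step emits for one line of 1.txt (the class name MapStepSketch and the hard-coded line and URI are just for illustration; the real Mapper is in the full source at the end of this section):
```
import java.util.StringTokenizer;

public class MapStepSketch {
    public static void main(String[] args) {
        // In the real job this URI comes from the FileSplit of the current input split.
        String fileUri = "hdfs://192.168.52.140:9000/index/1.txt";
        String line = "I Love Hadoop"; // one input line handed to map()
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            // The real Mapper writes this pair with context.write(keyInfo, valueInfo).
            System.out.println(itr.nextToken() + ":" + fileUri + "\t" + 1);
        }
    }
}
```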
The map output for 1.txt is:
```
I:hdfs://192.168.52.140:9000/index/1.txt 1
Love:hdfs://192.168.52.140:9000/index/1.txt 1
Hadoop:hdfs://192.168.52.140:9000/index/1.txt 1
I:hdfs://192.168.52.140:9000/index/1.txt 1
like:hdfs://192.168.52.140:9000/index/1.txt 1
ZhouSiYuan:hdfs://192.168.52.140:9000/index/1.txt 1
I:hdfs://192.168.52.140:9000/index/1.txt 1
love:hdfs://192.168.52.140:9000/index/1.txt 1
me:hdfs://192.168.52.140:9000/index/1.txt 1
```
The map output for 2.txt is:
```
I:hdfs://192.168.52.140:9000/index/2.txt 1
Love:hdfs://192.168.52.140:9000/index/2.txt 1
MapReduce:hdfs://192.168.52.140:9000/index/2.txt 1
I:hdfs://192.168.52.140:9000/index/2.txt 1
like:hdfs://192.168.52.140:9000/index/2.txt 1
NBA:hdfs://192.168.52.140:9000/index/2.txt 1
I:hdfs://192.168.52.140:9000/index/2.txt 1
love:hdfs://192.168.52.140:9000/index/2.txt 1
Hadoop:hdfs://192.168.52.140:9000/index/2.txt 1
```
After the MapReduce framework's built-in map-side sort and grouping, the intermediate results derived from 1.txt are:
```
Hadoop:hdfs://192.168.52.140:9000/index/1.txt list{1}
I:hdfs://192.168.52.140:9000/index/1.txt list{1,1,1}
Love:hdfs://192.168.52.140:9000/index/1.txt list{1}
ZhouSiYuan:hdfs://192.168.52.140:9000/index/1.txt list{1}
like:hdfs://192.168.52.140:9000/index/1.txt list{1}
love:hdfs://192.168.52.140:9000/index/1.txt list{1}
me:hdfs://192.168.52.140:9000/index/1.txt list{1}
```
After the MapReduce framework's built-in map-side sort and grouping, the intermediate results derived from 2.txt are:
```
Hadoop:hdfs://192.168.52.140:9000/index/2.txt list{1}
I:hdfs://192.168.52.140:9000/index/2.txt list{1,1,1}
Love:hdfs://192.168.52.140:9000/index/2.txt list{1}
MapReduce:hdfs://192.168.52.140:9000/index/2.txt list{1}
NBA:hdfs://192.168.52.140:9000/index/2.txt list{1}
like:hdfs://192.168.52.140:9000/index/2.txt list{1}
love:hdfs://192.168.52.140:9000/index/2.txt list{1}
```
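To see where the list{...} values come from, here is a small standalone sketch (hypothetical class name GroupStepSketch) that mimics the framework's map-side sort and grouping on a few of the emitted pairs:
```
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class GroupStepSketch {
    public static void main(String[] args) {
        // A few of the <word:fileURI, 1> pairs emitted by the map step for 1.txt.
        String[][] mapOutput = {
            {"I:hdfs://192.168.52.140:9000/index/1.txt", "1"},
            {"Love:hdfs://192.168.52.140:9000/index/1.txt", "1"},
            {"I:hdfs://192.168.52.140:9000/index/1.txt", "1"},
            {"I:hdfs://192.168.52.140:9000/index/1.txt", "1"},
        };
        // TreeMap keeps keys sorted, which mimics the map-side sort;
        // values with the same key are collected into one list.
        TreeMap<String, List<String>> grouped = new TreeMap<>();
        for (String[] pair : mapOutput) {
            grouped.computeIfAbsent(pair[0], k -> new ArrayList<>()).add(pair[1]);
        }
        grouped.forEach((k, v) -> System.out.println(k + "\tlist" + v));
    }
}
```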
Combine stage:
The key becomes the word.
The value becomes the file URI plus the term frequency.
For example, key: I    value: hdfs://192.168.52.140:9000/index/2.txt:3
Why design the key and value this way?
Because the shuffle that follows must send every record for the same word (word, URI, and term frequency) to the same reducer.
Setting the word back as the key lets the framework's default shuffle deliver all records for a given word to the same reducer.
The combine stage sums the values that share the same key and re-emits each record in this new key/value form.
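A small standalone sketch (hypothetical class name CombineStepSketch) of what the combiner does with one grouped key:
```
import java.util.Arrays;
import java.util.List;

public class CombineStepSketch {
    public static void main(String[] args) {
        // One grouped key and its value list, as received by the combiner.
        String key = "I:hdfs://192.168.52.140:9000/index/1.txt";
        List<String> values = Arrays.asList("1", "1", "1");

        // Sum the term frequency.
        int sum = 0;
        for (String v : values) {
            sum += Integer.parseInt(v);
        }
        // Split the key at the first ':' so the word goes back into the key and
        // the file URI moves into the value together with the summed count.
        int splitIndex = key.indexOf(":");
        String newKey = key.substring(0, splitIndex);                 // "I"
        String newValue = key.substring(splitIndex + 1) + ":" + sum;  // "hdfs://...1.txt:3"
        System.out.println(newKey + "\t" + newValue);
    }
}
```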
The combiner output for 1.txt is:
```
Hadoop hdfs://192.168.52.140:9000/index/1.txt:1
I hdfs://192.168.52.140:9000/index/1.txt:3
Love hdfs://192.168.52.140:9000/index/1.txt:1
ZhouSiYuan hdfs://192.168.52.140:9000/index/1.txt:1
like hdfs://192.168.52.140:9000/index/1.txt:1
love hdfs://192.168.52.140:9000/index/1.txt:1
me hdfs://192.168.52.140:9000/index/1.txt:1
```
The combiner output for 2.txt is:
```
Hadoop hdfs://192.168.52.140:9000/index/2.txt:1
I hdfs://192.168.52.140:9000/index/2.txt:3
Love hdfs://192.168.52.140:9000/index/2.txt:1
MapReduce hdfs://192.168.52.140:9000/index/2.txt:1
NBA hdfs://192.168.52.140:9000/index/2.txt:1
like hdfs://192.168.52.140:9000/index/2.txt:1
love hdfs://192.168.52.140:9000/index/2.txt:1
```
This makes the reduce stage very simple: it only has to build the document list.
For example, for the word I it produces the following entry:
I hdfs://192.168.52.140:9000/index/2.txt:3;hdfs://192.168.52.140:9000/index/1.txt:3;
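A small standalone sketch (hypothetical class name ReduceStepSketch) of this reduce step:
```
import java.util.Arrays;
import java.util.List;

public class ReduceStepSketch {
    public static void main(String[] args) {
        // One word and all of its "fileURI:count" values, as received by the reducer.
        String key = "I";
        List<String> values = Arrays.asList(
            "hdfs://192.168.52.140:9000/index/2.txt:3",
            "hdfs://192.168.52.140:9000/index/1.txt:3");

        // Concatenate the values into a single document list, separated by ';'.
        StringBuilder fileList = new StringBuilder();
        for (String v : values) {
            fileList.append(v).append(";");
        }
        System.out.println(key + "\t" + fileList);
    }
}
```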
The complete final output is:
Hadoop hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
I hdfs://192.168.52.140:9000/index/2.txt:3;hdfs://192.168.52.140:9000/index/1.txt:3;
Love hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
MapReduce hdfs://192.168.52.140:9000/index/2.txt:1;
NBA hdfs://192.168.52.140:9000/index/2.txt:1;
ZhouSiYuan hdfs://192.168.52.140:9000/index/1.txt:1;
like hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
love hdfs://192.168.52.140:9000/index/2.txt:1;hdfs://192.168.52.140:9000/index/1.txt:1;
me hdfs://192.168.52.140:9000/index/1.txt:1;
The complete source code is below.
```
package com.hadoop.mapreduce.test8.invertedindex;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class InvertedIndex {

    /**
     * Mapper: emits <word:fileURI, 1> for every word on every input line.
     *
     * @author 湯高
     */
    public static class InvertedIndexMapper extends Mapper<Object, Text, Text, Text> {

        private Text keyInfo = new Text();   // holds the word + URI combination
        private Text valueInfo = new Text(); // holds the term frequency
        private FileSplit split;             // the split this record belongs to

        @Override
        protected void map(Object key, Text value, Mapper<Object, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {
            // Get the FileSplit that this <key, value> pair belongs to.
            split = (FileSplit) context.getInputSplit();
            System.out.println("offset " + key);
            System.out.println("value " + value);
            // StringTokenizer breaks the line into tokens (words); by default it
            // splits on whitespace (spaces, \t, \n, \r, ...).
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                // The key is the word plus the file URI.
                keyInfo.set(itr.nextToken() + ":" + split.getPath().toString());
                // The term frequency starts at 1.
                valueInfo.set("1");
                context.write(keyInfo, valueInfo);
            }
            System.out.println("key " + keyInfo);
            System.out.println("value " + valueInfo);
        }
    }

    /**
     * Combiner: sums the counts for each <word:fileURI> key and moves the
     * file URI from the key into the value.
     *
     * @author 湯高
     */
    public static class InvertedIndexCombiner extends Reducer<Text, Text, Text, Text> {

        private Text info = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Reducer<Text, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {
            // Sum the term frequency.
            int sum = 0;
            for (Text value : values) {
                sum += Integer.parseInt(value.toString());
            }
            int splitIndex = key.toString().indexOf(":");
            // The new value is the file URI plus the term frequency.
            info.set(key.toString().substring(splitIndex + 1) + ":" + sum);
            // The new key is just the word.
            key.set(key.toString().substring(0, splitIndex));
            context.write(key, info);
            System.out.println("key " + key);
            System.out.println("value " + info);
        }
    }

    /**
     * Reducer: concatenates all "fileURI:count" values for a word into one
     * document list.
     *
     * @author 湯高
     */
    public static class InvertedIndexReducer extends Reducer<Text, Text, Text, Text> {

        private Text result = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Reducer<Text, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {
            // Build the document list.
            String fileList = "";
            for (Text value : values) {
                fileList += value.toString() + ";";
            }
            result.set(fileList);
            context.write(key, result);
        }
    }

    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "InvertedIndex");
            job.setJarByClass(InvertedIndex.class);

            // The map function turns each input <key, value> pair into intermediate results.
            job.setMapperClass(InvertedIndexMapper.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(Text.class);

            job.setCombinerClass(InvertedIndexCombiner.class);
            job.setReducerClass(InvertedIndexReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            // The two input files were uploaded to this index directory.
            FileInputFormat.addInputPath(job, new Path("hdfs://192.168.52.140:9000/index/"));
            // Write the results to an out_index<timestamp> directory.
            FileOutputFormat.setOutputPath(job,
                    new Path("hdfs://192.168.52.140:9000/out_index" + System.currentTimeMillis() + "/"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (IllegalStateException | IllegalArgumentException | ClassNotFoundException
                | IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```