[[common-grams]]
=== common_grams Token Filter
The `common_grams` token filter is designed to make phrase queries with
stopwords more efficient. ((("stopwords", "phrase queries and", "common_grams token filter")))((("common_grams token filter")))((("phrase matching", "stopwords and", "common_grams token filter")))It is similar to the `shingles` token ((("shingles", "shingles token filter")))filter (see
<<shingles>>), which creates _bigrams_ out of every pair of adjacent words. It
is most easily explained by example.((("bigrams")))
The `common_grams` token filter produces different output depending on whether
`query_mode` is set to `false` (for indexing) or to `true` (for searching), so
we have to create two separate analyzers:
[source,json]
-------------------------------
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "index_filter": { <1>
          "type":         "common_grams",
          "common_words": "_english_" <2>
        },
        "search_filter": { <1>
          "type":         "common_grams",
          "common_words": "_english_", <2>
          "query_mode":   true
        }
      },
      "analyzer": {
        "index_grams": { <3>
          "tokenizer": "standard",
          "filter":    [ "lowercase", "index_filter" ]
        },
        "search_grams": { <3>
          "tokenizer": "standard",
          "filter":    [ "lowercase", "search_filter" ]
        }
      }
    }
  }
}
-------------------------------
<1> First we create two token filters based on the `common_grams` token
filter: `index_filter` for index time (with `query_mode` set to the
default `false`), and `search_filter` for query time (with `query_mode`
set to `true`).
<2> The `common_words` parameter accepts the same options as the `stopwords`
parameter (see <<specifying-stopwords>>). The filter also accepts a
`common_words_path` parameter, which allows you to maintain the list of
common words in a file (see the sketch after this list).
<3> Then we use each filter to create an analyzer for index time and another
for query time.
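For instance, if you would rather keep the list of common words in a file than
in the index settings, the filter definition might look something like the
following sketch. The `common_words.txt` path is hypothetical; relative paths
are normally resolved against the Elasticsearch `config` directory, and the
file contains one word per line:
[source,json]
-------------------------------
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "index_filter": {
          "type": "common_grams",
          "common_words_path": "common_words.txt" <1>
        }
      }
    }
  }
}
-------------------------------
<1> A hypothetical file containing one common word per line; the same file
would need to be present on every node in the cluster.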
With our custom analyzers in place, we can create a field that will use the
`index_grams` analyzer at index time:
[source,json]
-------------------------------
PUT /my_index/_mapping/my_type
{
  "properties": {
    "text": {
      "type": "string",
      "index_analyzer":  "index_grams", <1>
      "search_analyzer": "standard" <1>
    }
  }
}
-------------------------------
<1> The `text` field uses the `index_grams` analyzer at index time, but
defaults to using the `standard` analyzer at search time, for reasons we
will explain next.
==== At Index Time
If we were to ((("common_grams token filter", "at index time")))analyze the phrase _The quick and brown fox_ with the `shingles`
token filter, it would produce these terms:
[source,text]
-------------------------------
Pos 1: the_quick
Pos 2: quick_and
Pos 3: and_brown
Pos 4: brown_fox
-------------------------------
Our new `index_grams` analyzer produces the following terms instead:
[source,text]
-------------------------------
Pos 1: the, the_quick
Pos 2: quick, quick_and
Pos 3: and, and_brown
Pos 4: brown
Pos 5: fox
-------------------------------
All terms are output as unigrams--`the`, `quick`, and so forth--but if a word is a
common word or is followed by a common word, then it also outputs a bigram in
the same position as the unigram--`the_quick`, `quick_and`, `and_brown`.
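You can check this output for yourself with the `analyze` API. The request
below uses the 1.x query-string form used elsewhere in this book (more recent
versions expect a JSON body with `analyzer` and `text` fields instead); the
response should list the same terms and positions shown above:
[source,json]
-------------------------------
GET /my_index/_analyze?analyzer=index_grams
The quick and brown fox
-------------------------------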
==== Unigram Queries
Because the index contains unigrams,((("unigrams", "unigram phrase queries")))((("common_grams token filter", "unigram queries"))) the field can be queried using the same
techniques that we have used for any other field, for example:
[source,json]
-------------------------------
GET /my_index/_search
{
  "query": {
    "match": {
      "text": {
        "query": "the quick and brown fox",
        "cutoff_frequency": 0.01
      }
    }
  }
}
-------------------------------
The preceding query string is analyzed by the `search_analyzer` configured for the
`text` field--the `standard` analyzer in this example--to produce the
terms `the`, `quick`, `and`, `brown`, `fox`.
Because the index for the `text` field contains the same unigrams as produced
by the `standard` analyzer, search functions as it would for any normal
field.
==== Bigram Phrase Queries
However, when we come to do phrase queries,((("common_grams token filter", "bigram phrase queries")))((("bigrams", "bigram phrase queries"))) we can use the specialized
`search_grams` analyzer to make the process much more efficient:
[source,json]
-------------------------------
GET /my_index/_search
{
  "query": {
    "match_phrase": {
      "text": {
        "query": "The quick and brown fox",
        "analyzer": "search_grams" <1>
      }
    }
  }
}
-------------------------------
<1> For phrase queries, we override the default `search_analyzer` and use the
`search_grams` analyzer instead.
The `search_grams` analyzer would produce the following terms:
[source,text]
-------------------------------
Pos 1: the_quick
Pos 2: quick_and
Pos 3: and_brown
Pos 4: brown
Pos 5: fox
-------------------------------
The analyzer has stripped out the unigrams for common words and for the words
that immediately precede them, leaving only the bigrams and the remaining
low-frequency unigrams. Bigrams like `the_quick` are much
less common than the single term `the`. This has two advantages:
* The positions data for `the_quick` is much smaller than for `the`, so it is
faster to read from disk and has less of an impact on the filesystem cache.
* The term `the_quick` is much less common than `the`, so it drastically
decreases the number of documents that have to be examined.
==== Two-Word Phrases
There is one further optimization. ((("common_grams token filter", "two word phrases"))) By far the majority of phrase queries
consist of only two words. If one of those words happens to be a common word,
such as
[source,json]
-------------------------------
GET /my_index/_search
{
  "query": {
    "match_phrase": {
      "text": {
        "query": "The quick",
        "analyzer": "search_grams"
      }
    }
  }
}
-------------------------------
then the `search_grams` analyzer outputs a single token: `the_quick`. This
transforms what originally could have been an expensive phrase query for `the`
and `quick` into a very efficient single-term lookup.
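Again, this is easy to verify with the `analyze` API (the same version caveat
applies as before); the response should contain only the single `the_quick`
token:
[source,json]
-------------------------------
GET /my_index/_analyze?analyzer=search_grams
The quick
-------------------------------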