es - elasticsearch custom analyzers - built-in token filters - 10
Custom analyzers:

Character filter:
    1. Purpose: add, remove, or transform characters
    2. Count: zero or more allowed
    3. Built-in character filters:
        1. HTML Strip Character Filter: removes HTML tags
        2. Mapping Character Filter: replaces characters via a mapping
        3. Pattern Replace Character Filter: regex-based replacement
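The three built-in character filters can be illustrated with a small Python sketch. This is not Elasticsearch code, only an approximation of the transformation each filter performs; the function names and sample strings are invented for illustration:

```python
import re

def html_strip(text):
    # HTML Strip: remove tags, roughly what the html_strip char filter does
    return re.sub(r"<[^>]+>", "", text)

def mapping_filter(text, mapping):
    # Mapping: replace each key in the mapping with its value
    for src, dst in mapping.items():
        text = text.replace(src, dst)
    return text

def pattern_replace(text, pattern, replacement):
    # Pattern Replace: regex-based substitution
    return re.sub(pattern, replacement, text)

print(html_strip("<p>hello</p>"))                 # hello
print(mapping_filter("a-b", {"-": " "}))          # a b
print(pattern_replace("foo123bar", r"\d+", " "))  # foo bar
```

Note that character filters run before the tokenizer, so these transformations apply to the raw input string, not to individual tokens.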
Tokenizer:
    1. Purpose:
        1. splits text into tokens
        2. records token order and position (used by phrase queries)
        3. records each token's start and end character offsets (used by highlighting)
        4. records the token type (used for classification)
    2. Count: exactly one required
    3. Categories:
        1. Word-oriented tokenizers:
            1. Standard
            2. Letter
            3. Lowercase
            4. Whitespace
            5. UAX URL Email
            6. Classic
            7. Thai
        2. Partial-word tokenizers:
            1. N-Gram
            2. Edge N-Gram
        3. Structured-text tokenizers:
            1. Keyword
            2. Pattern
            3. Simple Pattern
            4. Char Group
            5. Simple Pattern Split
            6. Path Hierarchy
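The difference between the two partial-word tokenizers can be sketched in Python. This is an approximation of the behavior, not the Elasticsearch implementation; the default gram sizes here are illustrative:

```python
def ngrams(text, min_gram=2, max_gram=3):
    # N-Gram: substrings of every length in [min_gram, max_gram],
    # starting at every position in the text
    return [text[i:i + n]
            for n in range(min_gram, max_gram + 1)
            for i in range(len(text) - n + 1)]

def edge_ngrams(text, min_gram=2, max_gram=3):
    # Edge N-Gram: like N-Gram, but every gram is anchored
    # to the start of the text
    return [text[:n] for n in range(min_gram, max_gram + 1)]

print(ngrams("dog"))       # ['do', 'og', 'dog']
print(edge_ngrams("dog"))  # ['do', 'dog']
```

Edge N-Grams are the common choice for prefix-style search-as-you-type, since only the leading grams are indexed.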
Token filter:
    1. Purpose: add, remove, or transform tokens
    2. Count: zero or more allowed
    3. Built-in token filters:
        1. apostrophe
        2. asciifolding
        3. cjk bigram
        4. cjk width
        5. classic
        6. common grams
        7. conditional
        8. decimal digit
        9. delimited payload
        10. dictionary decompounder
        11. edge ngram
        12. elision
        13. fingerprint
        14. flatten_graph
        15. hunspell
        16. hyphenation decompounder
        17. keep types
        18. keep words
        19. keyword marker
        20. keyword repeat
        21. kstem
        22. length
        23. limit token count
        24. lowercase
        25. min_hash
        26. multiplexer
        27. ngram
        28. normalization
        29. pattern_capture
        30. pattern replace
        31. porter stem
        32. predicate script
        33. remove duplicates
        34. reverse
        35. shingle
        36. snowball
        37. stemmer
Today's demo covers filters 34-37: reverse, shingle, snowball, and stemmer.
# reverse token filter
# Purpose: reverses each token

GET /_analyze
{
  "tokenizer" : "whitespace",
  "filter" : ["reverse"],
  "text" : ["hello gooding me"]
}

# Result
{
  "tokens" : [
    { "token" : "olleh", "start_offset" : 0, "end_offset" : 5, "type" : "word", "position" : 0 },
    { "token" : "gnidoog", "start_offset" : 6, "end_offset" : 13, "type" : "word", "position" : 1 },
    { "token" : "em", "start_offset" : 14, "end_offset" : 16, "type" : "word", "position" : 2 }
  ]
}

# shingle token filter
# Purpose: emits shingles (word n-grams built from adjacent tokens)
# Options:
#   1. max_shingle_size : maximum number of tokens per shingle
#   2. min_shingle_size : minimum number of tokens per shingle
#   3. output_unigrams : whether the original tokens are also emitted, default true
#   4. output_unigrams_if_no_shingles : emit the original tokens only when no shingles are produced
#   5. token_separator : string used to join the tokens of a shingle, default a space
#   6. filler_token : placeholder substituted for removed tokens (e.g. stopwords), default "_"

GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    { "type" : "stop", "stopwords" : ["good"] },
    { "type" : "shingle", "token_separator" : "+" }
  ],
  "text": ["hello good me this is a dog"]
}

# Result
{
  "tokens" : [
    { "token" : "hello", "start_offset" : 0, "end_offset" : 5, "type" : "word", "position" : 0 },
    { "token" : "hello+_", "start_offset" : 0, "end_offset" : 11, "type" : "shingle", "position" : 0, "positionLength" : 2 },
    { "token" : "_+me", "start_offset" : 11, "end_offset" : 13, "type" : "shingle", "position" : 1, "positionLength" : 2 },
    { "token" : "me", "start_offset" : 11, "end_offset" : 13, "type" : "word", "position" : 2 },
    { "token" : "me+this", "start_offset" : 11, "end_offset" : 18, "type" : "shingle", "position" : 2, "positionLength" : 2 },
    { "token" : "this", "start_offset" : 14, "end_offset" : 18, "type" : "word", "position" : 3 },
    { "token" : "this+is", "start_offset" : 14, "end_offset" : 21, "type" : "shingle", "position" : 3, "positionLength" : 2 },
    { "token" : "is", "start_offset" : 19, "end_offset" : 21, "type" : "word", "position" : 4 },
    { "token" : "is+a", "start_offset" : 19, "end_offset" : 23, "type" : "shingle", "position" : 4, "positionLength" : 2 },
    { "token" : "a", "start_offset" : 22, "end_offset" : 23, "type" : "word", "position" : 5 },
    { "token" : "a+dog", "start_offset" : 22, "end_offset" : 27, "type" : "shingle", "position" : 5, "positionLength" : 2 },
    { "token" : "dog", "start_offset" : 24, "end_offset" : 27, "type" : "word", "position" : 6 }
  ]
}

# snowball token filter
# Purpose: stemming (Snowball stemmer)

GET /_analyze
{
  "tokenizer" : "whitespace",
  "filter" : ["snowball"],
  "text" : ["hello gooding me"]
}

# Result
{
  "tokens" : [
    { "token" : "hello", "start_offset" : 0, "end_offset" : 5, "type" : "word", "position" : 0 },
    { "token" : "good", "start_offset" : 6, "end_offset" : 13, "type" : "word", "position" : 1 },
    { "token" : "me", "start_offset" : 14, "end_offset" : 16, "type" : "word", "position" : 2 }
  ]
}

# stemmer token filter
# Purpose: stemming
# Options:
#   1. language : one of many supported languages
#   2. name : alias for language

GET /_analyze
{
  "tokenizer" : "whitespace",
  "filter" : ["stemmer"],
  "text" : ["hello gooding me"]
}

# Result
{
  "tokens" : [
    { "token" : "hello", "start_offset" : 0, "end_offset" : 5, "type" : "word", "position" : 0 },
    { "token" : "good", "start_offset" : 6, "end_offset" : 13, "type" : "word", "position" : 1 },
    { "token" : "me", "start_offset" : 14, "end_offset" : 16, "type" : "word", "position" : 2 }
  ]
}