

Elasticsearch+filebeat+logstash+kibana集群

Published: 2023/12/14
This write-up walks through setting up an Elasticsearch + Filebeat + Logstash + Kibana cluster and is shared here for reference.

一、Elasticsearch + Kibana server deployment
Note: this is meant as a foolproof, step-by-step install that sidesteps the common pitfalls; if you hit something unexpected, search the web.
Environment and versions:
CentOS 7
Elasticsearch 7.3.0
Kibana 7.3.0
Logstash 7.3.0
Three servers are required:

200.200.100.51 node1
200.200.100.52 node2
200.200.100.53 node3
1. Disable the firewall and SELinux

```shell
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
```

2. Kernel parameter tuning

```shell
echo '
* hard nofile 65536
* soft nofile 65536
* soft nproc 65536
* hard nproc 65536
' >> /etc/security/limits.conf
echo '
vm.max_map_count = 262144
net.core.somaxconn = 65535
net.ipv4.ip_forward = 1
' >> /etc/sysctl.conf
sysctl -p
```

3. Time synchronization

```shell
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock --systohc
timedatectl set-timezone Asia/Shanghai
```

4. Install prerequisite packages

```shell
yum install -y wget vim lsof net-tools lrzsz curl
```

5. Set up the JDK

```shell
tar -zxf jdk-11.0.4_linux-x64_bin.tar.gz
mv jdk-11.0.4 /usr/local/jdk
echo '
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
' >> /etc/profile
source /etc/profile
java -version
```

6. Install and configure Elasticsearch

```shell
tar zxvf elasticsearch-7.3.0-linux-x86_64.tar.gz
mv elasticsearch-7.3.0 /usr/local/elasticsearch
mkdir -p /data/{es-data,es-logs}
# Edit the Elasticsearch config file:
vim /usr/local/elasticsearch/config/elasticsearch.yml
# Show the effective (non-comment) settings afterwards:
grep -Ev "^$|#" /usr/local/elasticsearch/config/elasticsearch.yml
```

Node1 / Node2 / Node3 (the original showed each node's elasticsearch.yml as a screenshot, presumably differing only in node.name and network.host).
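Since the per-node settings survived only as screenshots, here is a sketch of what node1's elasticsearch.yml plausibly contains for this three-node 7.3 cluster. The IPs, node names, and data paths come from this article; the cluster name and the CORS lines (needed for the elasticsearch-head plugin installed later) are assumptions:

```yaml
cluster.name: es-cluster            # assumed name; must be identical on all three nodes
node.name: node1                    # node2 / node3 on the other hosts
path.data: /data/es-data
path.logs: /data/es-logs
network.host: 200.200.100.51        # this node's own IP
http.port: 9200
discovery.seed_hosts: ["200.200.100.51", "200.200.100.52", "200.200.100.53"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
http.cors.enabled: true             # assumed: lets elasticsearch-head talk to the cluster
http.cors.allow-origin: "*"
```

In 7.x, `discovery.seed_hosts` and `cluster.initial_master_nodes` replace the old `discovery.zen.*` settings and are required for a fresh multi-node cluster to bootstrap.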

7. Fix permissions and start Elasticsearch
Create a user:

```shell
useradd efk
chown -R efk:efk /usr/local/jdk
chown -R efk:efk /usr/local/elasticsearch
chown -R efk:efk /data
su - efk
/usr/local/elasticsearch/bin/elasticsearch -d
```

8. Configure and start Kibana

```shell
tar zxf kibana-7.3.0-linux-x86_64.tar.gz
mv kibana-7.3.0-linux-x86_64 /usr/local/kibana
vim /usr/local/kibana/config/kibana.yml
grep -Ev "^$|#" /usr/local/kibana/config/kibana.yml
```

Effective settings:

```yaml
server.port: 5601
server.host: "200.200.100.51"
elasticsearch.hosts: ["http://200.200.100.51:9200"]
```

```shell
chown -R efk:efk /usr/local/kibana
su - efk
/usr/local/kibana/bin/kibana &
```

Open Kibana at http://ip:5601

9. Install the elasticsearch-head plugin

Download and install Node.js:

```shell
wget https://nodejs.org/dist/v12.16.3/node-v12.16.3-linux-x64.tar.xz
xz -d node-v12.16.3-linux-x64.tar.xz
tar xf node-v12.16.3-linux-x64.tar -C /usr/local/
mv /usr/local/node-v12.16.3-linux-x64/ /usr/local/node
echo "export PATH=$PATH:/usr/local/node/bin" >> /etc/profile
source /etc/profile
node -v
```

Clone elasticsearch-head and install grunt (the grunt steps are optional):

```shell
yum install git bzip2 -y
git clone https://github.com/mobz/elasticsearch-head.git
mv elasticsearch-head /usr/local/
cd /usr/local/elasticsearch-head/
npm install -g grunt-cli
npm install grunt --save
```

Install the npm dependencies:

```shell
npm install
```

If you hit `Error: Command failed: tar jxf /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2`, install the bzip2 package.

If the error is about fsevents (a macOS-only dependency), skip it with:

```shell
npm install --unsafe-perm
```

Edit the head configuration so it listens externally and points at the cluster:

```shell
vim /usr/local/elasticsearch-head/Gruntfile.js
vim /usr/local/elasticsearch-head/_site/app.js
```

In app.js, change `localhost` to the address of the Elasticsearch cluster.

```shell
chown -R efk:efk /usr/local/elasticsearch-head/
su - efk
npm run start &
# If that fails to start, try instead:
/usr/local/elasticsearch-head/node_modules/grunt/bin/grunt server &
```

Browse to http://200.200.100.51:9100

二、Logstash deployment
Logstash is installed on the Nginx server, not on the ES servers.
1. Unpack and install Logstash

```shell
tar xf logstash-7.3.0.tar.gz -C /usr/local/
mv /usr/local/logstash-7.3.0 /usr/local/logstash
```

Logstash generally starts fine with no configuration at all, but its logstash.yml allows a few simple optimizations:

```shell
vim /usr/local/logstash/config/logstash.yml
```

```yaml
config.reload.automatic: true   # reload pipeline config files automatically
config.reload.interval: 10      # how often to check for config changes
```

These settings are optional.

2. Create the pipeline configuration

```conf
input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
  if [log_source] == 'weblogic_yun' {
    elasticsearch {
      hosts => ["200.200.100.51:9200","200.200.100.52:9200","200.200.100.53:9200"]
      index => "weblogic_yun-%{+YYYY.MM.dd}"
    }
  }
  if [log_source] == 'weblogic_jl' {
    elasticsearch {
      hosts => ["200.200.100.51:9200","200.200.100.52:9200","200.200.100.53:9200"]
      index => "weblogic_jl-%{+YYYY.MM.dd}"
    }
  }
  if [log_source] == 'message' {
    elasticsearch {
      hosts => ["200.200.100.51:9200","200.200.100.52:9200","200.200.100.53:9200"]
      index => "message-%{+YYYY.MM.dd}"
    }
  }
}
```
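The `%{+YYYY.MM.dd}` suffix in each `index` setting is Logstash's sprintf date syntax: it expands to the event's timestamp (UTC by default), so each source gets one index per calendar day. A minimal Python sketch of that naming scheme (the `daily_index` helper is hypothetical, purely to illustrate; it is not Logstash code):

```python
from datetime import datetime

def daily_index(prefix: str, ts: datetime) -> str:
    # Mirrors Logstash's "%{+YYYY.MM.dd}" suffix: one index per day,
    # e.g. weblogic_yun-2019.02.15.
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

print(daily_index("weblogic_yun", datetime(2019, 2, 15, 11, 54)))
# weblogic_yun-2019.02.15
```

Daily indices keep each index small and make retention trivial: old days can be dropped with a single index deletion.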

三、Filebeat installation

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.0-linux-x86_64.tar.gz
tar -zvxf filebeat-7.3.0-linux-x86_64.tar.gz -C /usr/local/
```

```yaml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/message
  fields:
    log_source: message
  fields_under_root: true
#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
#============================== Kibana ========================================
setup.kibana:
  host: "200.200.100.51:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["200.200.100.51:5044"]
#================================ Processors ==================================
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
```
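`fields_under_root: true` matters here: it puts the custom `log_source` field at the top level of each event instead of nesting it under a `fields` key, which is why the Logstash conditionals can test `[log_source]` directly rather than `[fields][log_source]`. A toy Python sketch of the difference (the `apply_fields` helper is hypothetical, not Filebeat code):

```python
def apply_fields(event: dict, fields: dict, under_root: bool) -> dict:
    # With fields_under_root: true, custom fields merge into the event root;
    # otherwise Filebeat nests them under a "fields" key.
    out = dict(event)
    if under_root:
        out.update(fields)
    else:
        out["fields"] = dict(fields)
    return out

e = {"message": "raw log line"}
print(apply_fields(e, {"log_source": "message"}, under_root=True))
# {'message': 'raw log line', 'log_source': 'message'}
```

Note that with `under_root=True` a custom field will silently overwrite any event field of the same name, so pick field names that do not collide with Beats' own keys.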


四、Install Nginx and configure it for log collection
Install Nginx:

```shell
wget http://nginx.org/download/nginx-1.10.3.tar.gz
yum install -y gcc glibc gcc-c++ openssl-devel pcre-devel
useradd -s /sbin/nologin www -M
tar xf nginx-1.10.3.tar.gz && cd nginx-1.10.3
./configure --prefix=/usr/local/nginx-1.10.3 --user=www --group=www --with-http_ssl_module --with-http_stub_status_module
make && make install
ln -s /usr/local/nginx-1.10.3 /usr/local/nginx
```

Start it manually:

```shell
/usr/local/nginx/sbin/nginx
```

Start it on boot:

```shell
echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
```

Check that the service is listening:

```shell
netstat -lntp | grep nginx
# tcp  0  0 0.0.0.0:80  0.0.0.0:*  LISTEN  7058/nginx: master
```
Configure this on each server whose logs we want to collect:

```shell
vim /usr/local/nginx/conf/nginx.conf
```

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    log_format json '{"@timestamp":"$time_iso8601",'
                    '"host":"$server_addr",'
                    '"clientip":"$remote_addr",'
                    '"remote_user":"$remote_user",'
                    '"request":"$request",'
                    '"http_user_agent":"$http_user_agent",'
                    '"size":$body_bytes_sent,'
                    '"responsetime":$request_time,'
                    '"upstreamtime":"$upstream_response_time",'
                    '"upstreamhost":"$upstream_addr",'
                    '"http_host":"$host",'
                    '"requesturi":"$request_uri",'
                    '"url":"$uri",'
                    '"domain":"$host",'
                    '"xff":"$http_x_forwarded_for",'
                    '"referer":"$http_referer",'
                    '"status":"$status"}';
    access_log logs/access.log json;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        location / {
            root  html;
            index index.html index.htm;
        }
    }
}
```
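Because the access log is now one JSON object per line, downstream consumers can parse it directly instead of needing a grok pattern. A quick Python check against a sample line (the field values below are illustrative, modeled on the log output shown later in this article):

```python
import json

# One line of the JSON-formatted access log (sample values, not real traffic).
line = ('{"@timestamp":"2019-02-15T11:54:38+08:00","host":"10.4.82.203",'
        '"clientip":"10.2.52.15","request":"GET / HTTP/1.1","size":0,'
        '"responsetime":0.000,"status":"304"}')

entry = json.loads(line)
print(entry["status"], entry["clientip"])
# 304 10.2.52.15
```

One caveat of building JSON with `log_format`: nginx does not escape values for JSON, so a request containing a double quote can produce an invalid line; the numeric fields (`size`, `responsetime`) are deliberately unquoted so they parse as numbers.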

###########################
In effect we have only added a JSON-formatted access log: the `log_format json` directive above defines the JSON fields, and `access_log logs/access.log json;` switches the access log to that format.
The log is written under /usr/local/nginx/logs/. After the change, hit Nginx once and confirm the access log is now JSON:

```shell
[root@i4tnginx]# tail -f logs/access.log
{"@timestamp":"…","host":"10.4.82.203","clientip":"10.2.52.15","remote_user":"-","request":"GET / HTTP/1.1","http_user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"10.4.82.203","requesturi":"/","url":"/index.html","domain":"10.4.82.203","xff":"-","referer":"-","status":"304"}
```
Test that the Logstash config file is valid. We run everything as the efk user, so make efk the owner of the Logstash tree first:

```shell
chown -R efk:efk /usr/local/logstash
/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/nginx.conf -t
```

```
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
Configuration OK
[2019-01-28T11:54:38,481][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
```

Start Logstash on the Nginx server:

```shell
[root@abcdocker logstash]# su - efk
[efk@abcdocker ~]$ /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/nginx.conf
```

Tip: make sure the efk user can write to the logs directory; it is worth re-running `chown -R efk:efk /usr/local/logstash` before starting.

Also make sure Logstash can read the input files in its config, or no index will be created in ES.
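Starting Logstash from an interactive shell means it dies with the session. On a CentOS 7 host, a systemd unit keeps it running as efk across reboots; a sketch (the paths come from this article, but the unit file itself is an assumption, not something shipped with the tarball):

```ini
# /etc/systemd/system/logstash.service -- hypothetical unit, adjust paths as needed
[Unit]
Description=Logstash
After=network.target

[Service]
User=efk
Group=efk
ExecStart=/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/nginx.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, `systemctl enable --now logstash` would replace the `su - efk` + background-job approach above.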

Check the indices:

```shell
[root@YZSJHL82-203 local]# curl -XGET '200.200.100.51:9200/_cat/indices?v&pretty'
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana                     9l1XmifhTd2187a9Zpkqsw   1   1          1            0      3.2kb          3.2kb
yellow open   pro_nginx_access-2019.02.15 Guze8x5hTymSzqzQKu5PTQ   5   1       1315            0      1.3mb          1.3mb
```
Kibana configuration
Logstash is now storing the collected logs in ES; Kibana is used to visualize them. The same `_cat/indices` query above confirms the index exists before creating the index pattern.

Create the index pattern in Kibana, then check it after creation (shown as screenshots in the original).
