
Telegraf Installation and Usage

Published: 2024/7/5 · 编程问答 · 豆豆

1 Installation

1.1 Create a User

(1) Add the user

# useradd tigk
# passwd tigk
Changing password for user tigk.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

(2) Grant privileges

A regular user only has full permissions under its own home directory; other directories require authorization. Since root privileges are frequently needed, grant them by editing the sudoers file so the user can run sudo.

# Grant write permission on sudoers
# chmod -v u+w /etc/sudoers
mode of ‘/etc/sudoers’ changed from 0440 (r--r-----) to 0640 (rw-r-----)

Edit the sudoers file (vi /etc/sudoers) and add the line "tigk ALL=(ALL) ALL":

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
tigk    ALL=(ALL)       ALL

Revoke the write permission:

# chmod -v u-w /etc/sudoers
mode of ‘/etc/sudoers’ changed from 0640 (rw-r-----) to 0440 (r--r-----)

Create the tigk installation directory:

# su - tigk
$ mkdir /home/tigk/.local

(3) Create directories for the TIGK components' files

# mkdir /data/tigk
# chown tigk:tigk /data/tigk
# su - tigk
$ mkdir /data/tigk/telegraf
$ mkdir /data/tigk/influxdb
$ mkdir /data/tigk/kapacitor

1.2 Tar Package Installation

1.2.1 Download the tar package

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.14.4_linux_amd64.tar.gz

1.2.2 Extract the tar package

$ tar xf /opt/package/telegraf-1.14.4_linux_amd64.tar.gz -C /home/tigk/.local/

1.2.3 Generate a basic configuration

The executable is at {telegraf root}/usr/bin/telegraf, and the configuration file lives in the extracted etc directory; you can also generate one directly.

View the help: telegraf --help

Generate a configuration file: telegraf config > telegraf.conf

Generate a configuration file with the cpu, mem, http_listener, and influxdb plugins:
telegraf --input-filter cpu:mem:http_listener --output-filter influxdb config > telegraf.conf

Run the program: telegraf --config telegraf.conf

Start it in the background: nohup telegraf --config telegraf.conf > /dev/null 2>&1 &

$ cd /home/tigk/.local/telegraf/usr/bin
$ ./telegraf --help
$ ./telegraf config > telegraf.conf
$ ./telegraf --input-filter cpu:mem:http_listener --output-filter influxdb config > telegraf.conf

1.2.4 Edit the configuration file

[tigk@fbi-local-02 ~]$ mkdir /data/tigk/telegraf/logs

$ mkdir /data/tigk/telegraf/conf
$ cp /home/tigk/.local/telegraf/usr/bin/telegraf.conf /data/tigk/telegraf/conf
$ vim /data/tigk/telegraf/conf/telegraf.conf

Find the [[outputs.influxdb]] section and supply the username and password, and point the agent log at the log directory:

[[outputs.influxdb]]
  urls = ["http://10.0.165.2:8085"]
  timeout = "5s"
  username = "tigk"
  password = "tigk"
[agent]
  logfile = "/data/tigk/telegraf/logs/telegraf.log"

Start Telegraf:

$ cd /home/tigk/.local/telegraf/usr/bin
$ nohup ./telegraf --config /data/tigk/telegraf/conf/telegraf.conf &

1.3 RPM Package Installation

(1) Download the rpm package

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.14.4-1.x86_64.rpm

(2) Install the rpm package

sudo yum localinstall telegraf-1.14.4-1.x86_64.rpm

(3) Start the service and enable it at boot

systemctl start telegraf.service
systemctl status telegraf.service
systemctl enable telegraf.service

(4) Check the version and edit the configuration file

telegraf --version

Default configuration file location: /etc/telegraf/telegraf.conf
Edit the telegraf configuration file:

vim /etc/telegraf/telegraf.conf

(5) Start

service telegraf start

2 Usage

2.1 Common Commands and Configuration

(1) Show the command help: telegraf -h

$ ./telegraf -h
Telegraf, The plugin-driven server agent for collecting and reporting metrics.

Usage:

  telegraf [commands|flags]

The commands & flags are:

  config              print out full sample configuration to stdout
  version             print the version to stdout

  --aggregator-filter <filter>   filter the aggregators to enable, separator is :
  --config <file>                configuration file to load
  --config-directory <directory> directory containing additional *.conf files
  --plugin-directory             directory containing *.so files, this directory will be
                                 searched recursively. Any Plugin found will be loaded
                                 and namespaced.
  --debug                        turn on debug logging
  --input-filter <filter>        filter the inputs to enable, separator is :
  --input-list                   print available input plugins.
  --output-filter <filter>       filter the outputs to enable, separator is :
  --output-list                  print available output plugins.
  --pidfile <file>               file to write our pid to
  --pprof-addr <address>         pprof address to listen on, don't activate pprof if empty
  --processor-filter <filter>    filter the processors to enable, separator is :
  --quiet                        run in quiet mode
  --section-filter               filter config sections to output, separator is :
                                 Valid values are 'agent', 'global_tags', 'outputs',
                                 'processors', 'aggregators' and 'inputs'
  --sample-config                print out full sample configuration
  --test                         gather metrics, print them out, and exit;
                                 processors, aggregators, and outputs are not run
  --test-wait                    wait up to this many seconds for service
                                 inputs to complete in test mode
  --usage <plugin>               print usage for a plugin, ie, 'telegraf --usage mysql'
  --version                      display the version and exit

Examples:

  # generate a telegraf config file:
  telegraf config > telegraf.conf

  # generate config with only cpu input & influxdb output plugins defined
  telegraf --input-filter cpu --output-filter influxdb config

  # run a single telegraf collection, outputing metrics to stdout
  telegraf --config telegraf.conf --test

  # run telegraf with all plugins defined in config file
  telegraf --config telegraf.conf

  # run telegraf, enabling the cpu & memory input, and influxdb output plugins
  telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb

  # run telegraf with pprof
  telegraf --config telegraf.conf --pprof-addr localhost:6060

(2) Command usage

| Command | Description |
| --- | --- |
| telegraf --help | Show help |
| telegraf config > telegraf.conf | Write a full sample configuration template to stdout |
| telegraf --input-filter cpu --output-filter influxdb config | Generate a configuration template with only the cpu input and influxdb output plugins |
| telegraf --config telegraf.conf --test | Run a test with the given configuration file, printing the collected data to stdout |
| telegraf --config telegraf.conf | Start telegraf with the given configuration file |
| telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb | Start telegraf with the given configuration file, restricted to the cpu and mem input plugins and the influxdb output plugin |

(3) Configuration file locations

| Install method | Default location | Default supplemental config directory |
| --- | --- | --- |
| Linux RPM package | /etc/telegraf/telegraf.conf | /etc/telegraf/telegraf.d |
| Linux tar package | {install dir}/etc/telegraf/telegraf.conf | {install dir}/etc/telegraf/telegraf.d |

(4) Configuration loading
By default the command loads telegraf.conf plus every configuration file under /etc/telegraf/telegraf.d; the --config and --config-directory options change this behavior. Each input block in the configuration is collected by its own thread, so duplicate input blocks waste resources.

??(5)配置全局tag標簽
??在配置文件中的[global_tags]區域定義key=“value”形式的鍵值對,這樣收集到的metrics都會打上這樣子的標簽
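As a minimal sketch, a [global_tags] block might look like this (the tag names and values are hypothetical, not from the original article):

```toml
[global_tags]
  # Every metric collected by this agent carries these tags.
  dc = "cn-east-1"     # hypothetical data-center tag
  team = "bigdata"     # hypothetical owning-team tag
```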
(6) Agent configuration
The [agent] section configures the data-collecting agent on this host.

| Property | Description |
| --- | --- |
| interval | Data collection interval |
| round_interval | Round collections to the interval. With interval = "10s", collections happen at :00, :10, :20, ... of each minute |
| metric_batch_size | Batch size for data sent to outputs |
| metric_buffer_limit | Buffer size for data destined for outputs |
| collection_jitter | Maximum random sleep before each collection, mainly to keep agents from collecting at the same moment |
| flush_interval | Interval between flushes to outputs |
| flush_jitter | Maximum random sleep before each flush, mainly to avoid a large write spike when many agents flush at once |
| precision | Timestamp precision |
| logfile | Log file name |
| debug | Whether to run in debug mode |
| quiet | Quiet mode: only error messages |
| hostname | Defaults to os.Hostname(); setting it overrides the default |
| omit_hostname | Whether to omit the hostname from tags |
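Putting these properties together, a sketch of an [agent] block (the values shown are illustrative, not prescribed by the article):

```toml
[agent]
  interval = "10s"          # collect every 10 seconds
  round_interval = true     # align collections to :00, :10, :20, ...
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logfile = "/data/tigk/telegraf/logs/telegraf.log"
  debug = false
  quiet = false
  hostname = ""             # empty falls back to os.Hostname()
  omit_hostname = false
```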

(7) Common input plugin configuration

| Property | Description |
| --- | --- |
| interval | Collection interval; if set, overrides the agent-level setting |
| name_override | Replaces the output measurement name |
| name_prefix | Prefix added to the measurement name |
| name_suffix | Suffix added to the measurement name |
| tags | A map of tags added to the output measurement |
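For instance, a sketch applying these common options to the cpu input (the interval, measurement name, and tag are hypothetical choices):

```toml
[[inputs.cpu]]
  interval = "30s"            # override the agent-level interval for this input only
  name_override = "host_cpu"  # write to measurement "host_cpu" instead of "cpu"
  [inputs.cpu.tags]
    role = "edge"             # hypothetical tag added to every point from this input
```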

(8) Common output plugin configuration: there is none.
(9) Measurement filtering, which can be defined in input, output, and other plugins

| Property | Description |
| --- | --- |
| namepass | Only points whose measurement name matches one of these patterns pass |
| namedrop | Points whose measurement name matches one of these patterns are dropped |
| fieldpass | Only fields whose key matches pass |
| fielddrop | Fields whose key matches are dropped |
| tagpass | Only points with a matching tag pass |
| tagdrop | Points with a matching tag are dropped |
| taginclude | Only matching tags are kept on a passing point; non-matching tags are discarded |
| tagexclude | Matching tags are removed from the point |
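A sketch combining several of these filters on the disk input (the field names and filesystem types are illustrative):

```toml
[[inputs.disk]]
  # Keep only the fields we care about.
  fieldpass = ["used_percent", "inodes_free"]
  # Drop points from temporary filesystems, matched by tag value.
  [inputs.disk.tagdrop]
    fstype = ["tmpfs", "devtmpfs"]
```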

(10) Typical configuration examples
① Input - System - cpu

# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false

② Input - System - disk

# Read metrics about disk usage by mount point
[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]

  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

③ Input - System - kernel

# Get kernel statistics from /proc/stat
[[inputs.kernel]]
  # no configuration

④ Input - System - mem

# Read metrics about memory usage
[[inputs.mem]]
  # no configuration

⑤ Input - System - netstat

# # Read TCP metrics such as established, time wait and sockets counts.
# [[inputs.netstat]]
#   # no configuration

⑥ Input - System - processes

# Get the number of processes and group them by status
[[inputs.processes]]
  # no configuration

⑦ Input - System - system

# Read metrics about system load & uptime
[[inputs.system]]
  ## Uncomment to remove deprecated metrics.
  # fielddrop = ["uptime_format"]

⑧ Input - System - ping

# # Ping given url(s) and return statistics
# [[inputs.ping]]
#   ## Hosts to send ping packets to.
#   urls = ["example.org"]
#
#   ## Method used for sending pings, can be either "exec" or "native".  When set
#   ## to "exec" the systems ping command will be executed.  When set to "native"
#   ## the plugin will send pings directly.
#   ##
#   ## While the default is "exec" for backwards compatibility, new deployments
#   ## are encouraged to use the "native" method for improved compatibility and
#   ## performance.
#   # method = "exec"
#
#   ## Number of ping packets to send per interval.  Corresponds to the "-c"
#   ## option of the ping command.
#   # count = 1
#
#   ## Time to wait between sending ping packets in seconds.  Operates like the
#   ## "-i" option of the ping command.
#   # ping_interval = 1.0
#
#   ## If set, the time to wait for a ping response in seconds.  Operates like
#   ## the "-W" option of the ping command.
#   # timeout = 1.0
#
#   ## If set, the total ping deadline, in seconds.  Operates like the -w option
#   ## of the ping command.
#   # deadline = 10
#
#   ## Interface or source address to send ping from.  Operates like the -I or -S
#   ## option of the ping command.
#   # interface = ""
#
#   ## Specify the ping executable binary.
#   # binary = "ping"
#
#   ## Arguments for ping command. When arguments is not empty, the command from
#   ## the binary option will be used and other options (ping_interval, timeout,
#   ## etc) will be ignored.
#   # arguments = ["-c", "3"]
#
#   ## Use only IPv6 addresses when resolving a hostname.
#   # ipv6 = false

⑨ Input - App - procstat

# [[inputs.procstat]]
#   ## PID file to monitor process
#   pid_file = "/var/run/nginx.pid"
#   ## executable name (ie, pgrep <exe>)
#   # exe = "nginx"
#   ## pattern as argument for pgrep (ie, pgrep -f <pattern>)
#   # pattern = "nginx"
#   ## user as argument for pgrep (ie, pgrep -u <user>)
#   # user = "nginx"
#   ## Systemd unit name
#   # systemd_unit = "nginx.service"
#   ## CGroup name or path
#   # cgroup = "systemd/system.slice/nginx.service"
#
#   ## Windows service name
#   # win_service = ""
#
#   ## override for process_name
#   ## This is optional; default is sourced from /proc/<pid>/status
#   # process_name = "bar"
#
#   ## Field name prefix
#   # prefix = ""
#
#   ## When true add the full cmdline as a tag.
#   # cmdline_tag = false
#
#   ## Add PID as a tag instead of a field; useful to differentiate between
#   ## processes whose tags are otherwise the same.  Can create a large number
#   ## of series, use judiciously.
#   # pid_tag = false
#
#   ## Method to use when finding process IDs.  Can be one of 'pgrep', or
#   ## 'native'.  The pgrep finder calls the pgrep executable in the PATH while
#   ## the native finder performs the search directly in a manor dependent on the
#   ## platform.  Default is 'pgrep'
#   # pid_finder = "pgrep"

⑩ Input - App - redis

# # Read metrics from one or many redis servers
# [[inputs.redis]]
#   ## specify servers via a url matching:
#   ##  [protocol://][:password]@address[:port]
#   ##  e.g.
#   ##    tcp://localhost:6379
#   ##    tcp://:password@192.168.99.100
#   ##    unix:///var/run/redis.sock
#   ##
#   ## If no servers are specified, then localhost is used as the host.
#   ## If no port is specified, 6379 is used
#   servers = ["tcp://localhost:6379"]
#
#   ## specify server password
#   # password = "s#cr@t%"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = true

⑪ Input - App - kafka_consumer

# # Read metrics from Kafka topics
# [[inputs.kafka_consumer]]
#   ## Kafka brokers.
#   brokers = ["localhost:9092"]
#
#   ## Topics to consume.
#   topics = ["telegraf"]
#
#   ## When set this tag will be added to all metrics with the topic as the value.
#   # topic_tag = ""
#
#   ## Optional Client id
#   # client_id = "Telegraf"
#
#   ## Set the minimal supported Kafka version.  Setting this enables the use of new
#   ## Kafka features and APIs.  Must be 0.10.2.0 or greater.
#   ##   ex: version = "1.1.0"
#   # version = ""
#
#   ## Optional TLS Config
#   # enable_tls = true
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false
#
#   ## SASL authentication credentials.  These settings should typically be used
#   ## with TLS encryption enabled using the "enable_tls" option.
#   # sasl_username = "kafka"
#   # sasl_password = "secret"
#
#   ## SASL protocol version.  When connecting to Azure EventHub set to 0.
#   # sasl_version = 1
#
#   ## Name of the consumer group.
#   # consumer_group = "telegraf_metrics_consumers"
#
#   ## Initial offset position; one of "oldest" or "newest".
#   # offset = "oldest"
#
#   ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
#   # balance_strategy = "range"
#
#   ## Maximum length of a message to consume, in bytes (default 0/unlimited);
#   ## larger messages are dropped
#   max_message_len = 1000000
#
#   ## Maximum messages to read from the broker that have not been written by an
#   ## output.  For best throughput set based on the number of metrics within
#   ## each message and the size of the output's metric_batch_size.
#   ##
#   ## For example, if each message from the queue contains 10 metrics and the
#   ## output metric_batch_size is 1000, setting this to 100 will ensure that a
#   ## full batch is collected and the write is triggered immediately without
#   ## waiting until the next flush_interval.
#   # max_undelivered_messages = 1000
#
#   ## Data format to consume.
#   ## Each data format has its own unique set of configuration options, read
#   ## more about them here:
#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
#   data_format = "influx"

⑫ Input - App - exec

# # Read metrics from one or more commands that can output to stdout
# [[inputs.exec]]
#   ## Commands array
#   commands = [
#     "/tmp/test.sh",
#     "/usr/bin/mycollector --foo=bar",
#     "/tmp/collect_*.sh"
#   ]
#
#   ## Timeout for each command to complete.
#   timeout = "5s"
#
#   ## measurement name suffix (for separating different commands)
#   name_suffix = "_mycollector"
#
#   ## Data format to consume.
#   ## Each data format has its own unique set of configuration options, read
#   ## more about them here:
#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
#   data_format = "influx"

⑬ Output - influxdb_v2

# # Configuration for sending metrics to InfluxDB
# [[outputs.influxdb_v2]]
#   ## The URLs of the InfluxDB cluster nodes.
#   ##
#   ## Multiple URLs can be specified for a single cluster, only ONE of the
#   ## urls will be written to each interval.
#   ## ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
#   urls = ["http://127.0.0.1:9999"]
#
#   ## Token for authentication.
#   token = ""
#
#   ## Organization is the name of the organization you wish to write to; must exist.
#   organization = ""
#
#   ## Destination bucket to write into.
#   bucket = ""
#
#   ## The value of this tag will be used to determine the bucket.  If this
#   ## tag is not set the 'bucket' option is used as the default.
#   # bucket_tag = ""
#
#   ## If true, the bucket tag will not be added to the metric.
#   # exclude_bucket_tag = false
#
#   ## Timeout for HTTP messages.
#   # timeout = "5s"
#
#   ## Additional HTTP headers
#   # http_headers = {"X-Special-Header" = "Special-Value"}
#
#   ## HTTP Proxy override, if unset values the standard proxy environment
#   ## variables are consulted to determine which proxy, if any, should be used.
#   # http_proxy = "http://corporate.proxy:3128"
#
#   ## HTTP User-Agent
#   # user_agent = "telegraf"
#
#   ## Content-Encoding for write request body, can be set to "gzip" to
#   ## compress body or "identity" to apply no encoding.
#   # content_encoding = "gzip"
#
#   ## Enable or disable uint support for writing uints influxdb 2.0.
#   # influx_uint_support = false
#
#   ## Optional TLS Config for use on HTTP connections.
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false

2.2 Collecting from Applications without an Official Input Plugin

For example, to collect the applications running on YARN and store them in InfluxDB: ① use the exec input plugin to run a script whose standard output is in the InfluxDB line protocol; ② inside the script, call the YARN REST API to list the running applications.

#!/usr/bin/env python
import json
import http.client
import urllib.parse

host = "10.0.165.3:8088"
path = "/ws/v1/cluster/apps"
data = urllib.parse.urlencode({"state": "RUNNING", "applicationTypes": "Apache Flink"})
path = path + "?" + data
headers = {"Accept": "application/json"}

conn = http.client.HTTPConnection(host)
conn.request("GET", path, headers=headers)
result = conn.getresponse()
if result.status:
    content = result.read()
    apps = json.loads(content)["apps"]["app"]
    for app in apps:
        # Skip test applications.
        if "test" in app["name"].lower():
            continue
        # Escape spaces in tag values, as required by the InfluxDB line protocol.
        escaped_name = app["name"].replace(" ", "\\ ")
        print('APPLICATION.RUNNING,appname=%s,appid=%s field_appname="%s",field_appid="%s"'
              % (escaped_name, app["id"], app["name"], app["id"]))

The script's output looks like:
APPLICATION.RUNNING,appname=iot_road_traffic,appid=application_1592979353214_0175 field_appname="iot_road_traffic",field_appid="application_1592979353214_0175"
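The line above follows the InfluxDB line protocol: a measurement name, comma-separated tags, a space, then comma-separated fields. As a hedged sketch, not part of the original script, a small helper that builds such a line and handles tag-value escaping might look like this:

```python
def to_line_protocol(measurement, tags, fields):
    """Build an InfluxDB line-protocol line: measurement,tag=v field="v"."""
    def esc_tag(v):
        # Tag values must escape spaces, commas, and equals signs.
        return str(v).replace(" ", "\\ ").replace(",", "\\,").replace("=", "\\=")
    tag_part = ",".join("%s=%s" % (k, esc_tag(v)) for k, v in sorted(tags.items()))
    field_part = ",".join('%s="%s"' % (k, v) for k, v in sorted(fields.items()))
    return "%s,%s %s" % (measurement, tag_part, field_part)

line = to_line_protocol(
    "APPLICATION.RUNNING",
    {"appname": "iot road traffic", "appid": "application_1592979353214_0175"},
    {"field_appname": "iot road traffic", "field_appid": "application_1592979353214_0175"},
)
print(line)
```

All string fields are quoted here for simplicity; numeric field types would need different formatting.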
Configure the exec input plugin as follows:

[[inputs.exec]]
  ## Commands array
  commands = ["python /data/tigk/telegraf/exec/getRunningFlinkJob.py"]
  ## Timeout for each command to complete.
  timeout = "5s"
  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
