
Deploying and Installing a Kafka Cluster

First, a list of the machines in our environment:

10.19.18.88 zk1
10.19.16.84 zk2
10.19.11.44 zk3


Our company needs to bring in Kafka as the transport for Zipkin, which we use to trace call chains.

The Kafka site: http://kafka.apache.org/

The Zipkin server docs (environment variables): https://github.com/openzipkin/zipkin/tree/master/zipkin-server#environment-variables

The kafka-manager repo: https://github.com/yahoo/kafka-manager
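Since the whole point of this cluster is to feed Zipkin, here is a minimal sketch of pointing a Zipkin server at the brokers configured below. The KAFKA_BOOTSTRAP_SERVERS variable comes from the Zipkin README linked above; older Zipkin Kafka collectors used a ZooKeeper-based setting instead, so check the variable names for your Zipkin version, and adjust the jar name to whatever you downloaded:

# Point Zipkin's Kafka collector at the three brokers (ports match server.properties below)
KAFKA_BOOTSTRAP_SERVERS=10.19.18.88:1092,10.19.16.84:1092,10.19.11.44:1092 \
java -jar zipkin.jar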

這里首先我們現(xiàn)在kafka的下載包

Kafka download page: http://kafka.apache.org/downloads — we use the 0.11.0.0 release: https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz
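For example (a sketch; we unpack under /data/package so the paths line up with the log.dirs setting used below):

# cd /data/package
# wget https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz
# tar -xzf kafka_2.11-0.11.0.0.tgz
# mv kafka_2.11-0.11.0.0 kafka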



After downloading, we simply extract the package and use it as-is. Because ZooKeeper is already configured in our environment, I will not configure kafka/config/zookeeper.properties here.

We edit Kafka's configuration file (config/server.properties) directly:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################
listeners=PLAINTEXT://10.19.18.88:1092
port=1092
host.name=10.19.18.88
# The number of threads handling network requests
num.network.threads=8
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
# The number of queued requests allowed before blocking the network threads
queued.max.requests=100
# The purge interval (in number of requests) of the fetch request purgatory
fetch.purgatory.purge.interval.requests=200
# The purge interval (in number of requests) of the producer request purgatory
producer.purgatory.purge.interval.requests=200

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/data/package/kafka/kafka-logs
# The default number of log partitions per topic.
num.partitions=24
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
num.recovery.threads.per.data.dir=2
# The maximum size of message that the server can receive
message.max.bytes=1000000
# Enable auto creation of topic on the server
auto.create.topics.enable=true
# The interval with which we add an entry to the offset index
log.index.interval.bytes=4096
# The maximum size in bytes of the offset index
log.index.size.max.bytes=10485760
# Allow to delete topics
delete.topic.enable=true

############################# Log Flush Policy #############################
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=20000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=10000
# The frequency in ms that the log flusher checks whether any log needs to be flushed to disk
log.flush.scheduler.interval.ms=2000

############################# Log Retention Policy #############################
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs.
log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# The maximum time before a new log segment is rolled out (in hours)
log.roll.hours=168

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
zookeeper.connect=10.19.18.88:12081,10.19.16.84:12081,10.19.11.44:12081
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
# How far a ZK follower can be behind a ZK leader
zookeeper.sync.time.ms=2000

############################# Replication configurations #############################
# Default replication factor for automatically created topics
default.replication.factor=3
# Number of fetcher threads used to replicate messages from a source broker.
num.replica.fetchers=4
# The number of bytes of messages to attempt to fetch for each partition.
replica.fetch.max.bytes=1048576
# Max wait time for each fetcher request issued by follower replicas.
replica.fetch.wait.max.ms=500
# The frequency with which the high watermark is saved out to disk
replica.high.watermark.checkpoint.interval.ms=5000
# The socket timeout for network requests.
replica.socket.timeout.ms=30000
# The socket receive buffer for network requests
replica.socket.receive.buffer.bytes=65536
# If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR
replica.lag.time.max.ms=10000
# The socket timeout for controller-to-broker channels
controller.socket.timeout.ms=30000
controller.message.queue.size=10

The only values that must differ from broker to broker are broker.id, listeners, and host.name (highlighted in red in the original post); everything else is identical across the three machines that make up our Kafka cluster.


Edit the startup script so it always loads our server.properties:

# vim bin/kafka-server-start.sh

Change the last line:

#exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "/data/package/kafka/config/server.properties"

(Note: the stock script also exits with a usage message when given no arguments; if your version has that check, remove it or keep passing a dummy argument.)

In the same way, we modify the configuration file on the other two servers, as shown below.
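For example, only these lines of server.properties change on the other two brokers (a sketch; the broker.id values 2 and 3 are our choice — any unique integers work):

# 10.19.16.84
broker.id=2
listeners=PLAINTEXT://10.19.16.84:1092
host.name=10.19.16.84

# 10.19.11.44
broker.id=3
listeners=PLAINTEXT://10.19.11.44:1092
host.name=10.19.11.44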

Then we can start Kafka on each node in turn and verify the cluster with the smoke test below.
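Once all three brokers are up, a quick smoke test with the stock Kafka CLI tools (a sketch; the topic name smoke-test is arbitrary):

# Create a replicated topic and check that leaders/ISR span all three brokers
bin/kafka-topics.sh --create --zookeeper 10.19.18.88:12081,10.19.16.84:12081,10.19.11.44:12081 --replication-factor 3 --partitions 3 --topic smoke-test
bin/kafka-topics.sh --describe --zookeeper 10.19.18.88:12081 --topic smoke-test
# Produce one message through one broker and read it back through another
echo hello | bin/kafka-console-producer.sh --broker-list 10.19.18.88:1092 --topic smoke-test
bin/kafka-console-consumer.sh --bootstrap-server 10.19.16.84:1092 --topic smoke-test --from-beginning --max-messages 1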

Finally, we install kafka-manager.

Features: to simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based management tool called Kafka Manager. It makes it easy to spot topics that are distributed unevenly across the cluster, or partitions that are distributed unevenly across the brokers. It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation, and it is also a very handy tool for getting a quick overview of a cluster. Its features:

1. Manage multiple Kafka clusters
2. Conveniently inspect cluster state (topics, brokers, replica distribution, partition distribution)
3. Run preferred replica election
4. Generate partition assignments based on the current state of the cluster
5. Choose topic configuration and create topics (configuration options differ between 0.8.1.1 and 0.8.2)
6. Delete topics (only supported on 0.8.2+, and delete.topic.enable=true must be set in the broker config)
7. The topic list marks which topics have been deleted (on 0.8.2+)
8. Add partitions to an existing topic
9. Update the configuration of an existing topic
10. Batch partition reassignment across multiple topics
11. Batch partition reassignment across multiple topics, with a choice of which brokers host each partition

Installation steps:

1. Get the kafka-manager source and build the package:

# cd /usr/local
# git clone https://github.com/yahoo/kafka-manager
# cd kafka-manager
# ./sbt clean dist

Note: the sbt build can take a very long time. If it seems to hang, change the logLevel setting in project/plugins.sbt to logLevel := Level.Debug (the default is Warn) to see what it is doing.

2. Install and configure. A successful build produces a zip package under target/universal:

# cd /usr/local/kafka-manager/target/universal
# unzip kafka-manager-1.3.3.7.zip

Set kafka-manager.zkhosts in application.conf to your ZooKeeper addresses, e.g.:

kafka-manager.zkhosts="172.16.218.201:2181,172.16.218.202:2181,172.16.218.203:2181"

3. Start it, specifying the config file location and HTTP port (default 9000).

Foreground:
# cd kafka-manager-1.3.3.7/bin
# ./kafka-manager -Dconfig.file=../conf/application.conf

Background:
# nohup ./kafka-manager -Dconfig.file=../conf/application.conf &

With an explicit port, for example:
# nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9001 &

The first time you open the web UI you have to configure the Kafka cluster; fill in the details for your own environment.
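Before opening the UI, a quick sanity check that the manager is listening (a sketch; adjust the port if you overrode it with -Dhttp.port):

# curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000/

Expect a 200 (or a 3xx redirect) once the manager is up.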


Reference links:

https://www.cnblogs.com/shiyiwen/p/6150213.html
http://blog.51cto.com/xiangcun168/1933509
http://orchome.com/41
http://www.360doc.com/content/16/1117/16/37253246_607304757.shtml
http://jayveehe.github.io/2017/02/01/elk-stack/
http://blog.51cto.com/wuyebamboo/1963786
https://facingissuesonit.com/2017/05/29/integrate-filebeat-kafka-logstash-elasticsearch-and-kibana/
https://www.yuanmas.com/info/GlypPG18y2.html
https://www.cnblogs.com/yinchengzhe/p/5111635.html (Kafka parameters)
https://www.cnblogs.com/weixiuli/p/6413109.html (Kafka configuration file parameters explained)


轉(zhuǎn)載于:https://www.cnblogs.com/smail-bao/p/7987340.html

創(chuàng)作挑戰(zhàn)賽新人創(chuàng)作獎勵來咯,堅(jiān)持創(chuàng)作打卡瓜分現(xiàn)金大獎

總結(jié)

以上是生活随笔為你收集整理的kafka 集群的部署安装的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。

如果覺得生活随笔網(wǎng)站內(nèi)容還不錯,歡迎將生活随笔推薦給好友。