
Kafka Configuration

Broker configuration

屬性默認(rèn)值描述
broker.id必填參數(shù),broker的唯一標(biāo)識(shí)
log.dirs/tmp/kafka-logsKafka數(shù)據(jù)存放的目錄。可以指定多個(gè)目錄,中間用逗號(hào)分隔,當(dāng)新partition被創(chuàng)建的時(shí)會(huì)被存放到當(dāng)前存放partition最少的目錄。
port9092BrokerServer接受客戶端連接的端口號(hào)
zookeeper.connectnullZookeeper的連接串,格式為:hostname1:port1,hostname2:port2,hostname3:port3。可以填一個(gè)或多個(gè),為了提高可靠性,建議都填上。注意,此配置允許我們指定一個(gè)zookeeper路徑來(lái)存放此kafka集群的所有數(shù)據(jù),為了與其他應(yīng)用集群區(qū)分開(kāi),建議在此配置中指定本集群存放目錄,格式為:hostname1:port1,hostname2:port2,hostname3:port3/chroot/path 。需要注意的是,消費(fèi)者的參數(shù)要和此參數(shù)一致。
message.max.bytes1000000服務(wù)器可以接收到的最大的消息大小。注意此參數(shù)要和consumer的maximum.message.size大小一致,否則會(huì)因?yàn)樯a(chǎn)者生產(chǎn)的消息太大導(dǎo)致消費(fèi)者無(wú)法消費(fèi)。
num.io.threads8服務(wù)器用來(lái)執(zhí)行讀寫(xiě)請(qǐng)求的IO線程數(shù),此參數(shù)的數(shù)量至少要等于服務(wù)器上磁盤(pán)的數(shù)量。
queued.max.requests500I/O線程可以處理請(qǐng)求的隊(duì)列大小,若實(shí)際請(qǐng)求數(shù)超過(guò)此大小,網(wǎng)絡(luò)線程將停止接收新的請(qǐng)求。
socket.send.buffer.bytes100 * 1024The SO_SNDBUFF buffer the server prefers for socket connections.
socket.receive.buffer.bytes100 * 1024The SO_RCVBUFF buffer the server prefers for socket connections.
socket.request.max.bytes100 * 1024 * 1024服務(wù)器允許請(qǐng)求的最大值, 用來(lái)防止內(nèi)存溢出,其值應(yīng)該小于 Java heap size.
num.partitions1默認(rèn)partition數(shù)量,如果topic在創(chuàng)建時(shí)沒(méi)有指定partition數(shù)量,默認(rèn)使用此值,建議改為5
log.segment.bytes1024 * 1024 * 1024Segment文件的大小,超過(guò)此值將會(huì)自動(dòng)新建一個(gè)segment,此值可以被topic級(jí)別的參數(shù)覆蓋。
log.roll.{ms,hours}24 * 7 hours新建segment文件的時(shí)間,此值可以被topic級(jí)別的參數(shù)覆蓋。
log.retention.{ms,minutes,hours}7 daysKafka segment log的保存周期,保存周期超過(guò)此時(shí)間日志就會(huì)被刪除。此參數(shù)可以被topic級(jí)別參數(shù)覆蓋。數(shù)據(jù)量大時(shí),建議減小此值。
log.retention.bytes-1每個(gè)partition的最大容量,若數(shù)據(jù)量超過(guò)此值,partition數(shù)據(jù)將會(huì)被刪除。注意這個(gè)參數(shù)控制的是每個(gè)partition而不是topic。此參數(shù)可以被log級(jí)別參數(shù)覆蓋。
log.retention.check.interval.ms5 minutes刪除策略的檢查周期
auto.create.topics.enabletrue自動(dòng)創(chuàng)建topic參數(shù),建議此值設(shè)置為false,嚴(yán)格控制topic管理,防止生產(chǎn)者錯(cuò)寫(xiě)topic。
default.replication.factor1默認(rèn)副本數(shù)量,建議改為2。
replica.lag.time.max.ms10000在此窗口時(shí)間內(nèi)沒(méi)有收到follower的fetch請(qǐng)求,leader會(huì)將其從ISR(in-sync replicas)中移除。
replica.lag.max.messages4000如果replica節(jié)點(diǎn)落后leader節(jié)點(diǎn)此值大小的消息數(shù)量,leader節(jié)點(diǎn)就會(huì)將其從ISR中移除。
replica.socket.timeout.ms30 * 1000replica向leader發(fā)送請(qǐng)求的超時(shí)時(shí)間。
replica.socket.receive.buffer.bytes64 * 1024The socket receive buffer for network requests to the leader for replicating data.
replica.fetch.max.bytes1024 * 1024The number of byes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
replica.fetch.wait.max.ms500The maximum amount of time to wait time for data to arrive on the leader in the fetch requests sent by the replicas to the leader.
num.replica.fetchers1Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.
fetch.purgatory.purge.interval.requests1000The purge interval (in number of requests) of the fetch request purgatory.
zookeeper.session.timeout.ms6000ZooKeeper session 超時(shí)時(shí)間。如果在此時(shí)間內(nèi)server沒(méi)有向zookeeper發(fā)送心跳,zookeeper就會(huì)認(rèn)為此節(jié)點(diǎn)已掛掉。 此值太低導(dǎo)致節(jié)點(diǎn)容易被標(biāo)記死亡;若太高,.會(huì)導(dǎo)致太遲發(fā)現(xiàn)節(jié)點(diǎn)死亡。
zookeeper.connection.timeout.ms6000客戶端連接zookeeper的超時(shí)時(shí)間。
zookeeper.sync.time.ms2000H ZK follower落后 ZK leader的時(shí)間。
controlled.shutdown.enabletrue允許broker shutdown。如果啟用,broker在關(guān)閉自己之前會(huì)把它上面的所有l(wèi)eaders轉(zhuǎn)移到其它brokers上,建議啟用,增加集群穩(wěn)定性。
auto.leader.rebalance.enabletrueIf this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the “preferred” replica for each partition if it is available.
leader.imbalance.per.broker.percentage10The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker.
leader.imbalance.check.interval.seconds300The frequency with which to check for leader imbalance.
offset.metadata.max.bytes4096The maximum amount of metadata to allow clients to save with their offsets.
connections.max.idle.ms600000Idle connections timeout: the server socket processor threads close the connections that idle more than this.
num.recovery.threads.per.data.dir1The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
unclean.leader.election.enabletrueIndicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
delete.topic.enablefalse啟用deletetopic參數(shù),建議設(shè)置為true。
offsets.topic.num.partitions50The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200).
offsets.topic.retention.minutes1440Offsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the offsets topic.
offsets.retention.check.interval.ms600000The frequency at which the offset manager checks for stale offsets.
offsets.topic.replication.factor3The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created when fewer brokers than the replication factor then the offsets topic will be created with fewer replicas.
offsets.topic.segment.bytes104857600Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads.
offsets.load.buffer.size5242880An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting corresponds to the batch size (in bytes) to use when reading from the offsets segments when loading offsets into the offset manager’s cache.
offsets.commit.required.acks-1The number of acknowledgements that are required before the offset commit can be accepted. This is similar to the producer’s acknowledgement setting. In general, the default should not be overridden.
offsets.commit.timeout.ms5000The offset commit will be delayed until this timeout or the required number of replicas have received the offset commit. This is similar to the producer request timeout.
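
To make the broker table concrete, here is a minimal sketch of a server.properties file that applies the recommendations above (a higher default partition count, replication factor 2, auto-creation disabled, topic deletion enabled). The broker id, host names, and paths are illustrative placeholders, not values from the original article.

```properties
# Sketch of config/server.properties for a 0.8.x-era broker.
# All host names, paths, and the broker id below are placeholders.
broker.id=0
port=9092
# A new partition is placed in whichever directory holds the fewest partitions.
log.dirs=/data/kafka-logs-1,/data/kafka-logs-2
# The /kafka chroot keeps this cluster's data separate from other applications;
# consumers must use this exact same connection string.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
# Recommendations from the table above.
num.partitions=5
default.replication.factor=2
auto.create.topics.enable=false
delete.topic.enable=true
controlled.shutdown.enable=true
# Keep consistent with the consumers' fetch.message.max.bytes.
message.max.bytes=1000000
```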

Producer configuration

屬性默認(rèn)值描述
metadata.broker.list啟動(dòng)時(shí)producer查詢brokers的列表,可以是集群中所有brokers的一個(gè)子集。注意,這個(gè)參數(shù)只是用來(lái)獲取topic的元信息用,producer會(huì)從元信息中挑選合適的broker并與之建立socket連接。格式是:host1:port1,host2:port2。
request.required.acks0參見(jiàn)3.2節(jié)介紹
request.timeout.ms10000Broker等待ack的超時(shí)時(shí)間,若等待時(shí)間超過(guò)此值,會(huì)返回客戶端錯(cuò)誤信息。
producer.typesync同步異步模式。async表示異步,sync表示同步。如果設(shè)置成異步模式,可以允許生產(chǎn)者以batch的形式push數(shù)據(jù),這樣會(huì)極大的提高broker性能,推薦設(shè)置為異步。
serializer.classkafka.serializer.DefaultEncoder序列號(hào)類,.默認(rèn)序列化成 byte[] 。
key.serializer.classKey的序列化類,默認(rèn)同上。
partitioner.classkafka.producer.DefaultPartitionerPartition類,默認(rèn)對(duì)key進(jìn)行hash。
compression.codecnone指定producer消息的壓縮格式,可選參數(shù)為: “none”, “gzip” and “snappy”。關(guān)于壓縮參見(jiàn)4.1節(jié)
compressed.topicsnull啟用壓縮的topic名稱。若上面參數(shù)選擇了一個(gè)壓縮格式,那么壓縮僅對(duì)本參數(shù)指定的topic有效,若本參數(shù)為空,則對(duì)所有topic有效。
message.send.max.retries3Producer發(fā)送失敗時(shí)重試次數(shù)。若網(wǎng)絡(luò)出現(xiàn)問(wèn)題,可能會(huì)導(dǎo)致不斷重試。
retry.backoff.ms100Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.
topic.metadata.refresh.interval.ms600 * 1000The producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available…). It will also poll regularly (default: every 10min so 600000ms). If you set this to a negative value, metadata will only get refreshed on failure. If you set this to zero, the metadata will get refreshed after each message sent (not recommended). Important note: the refresh happen only AFTER the message is sent, so if the producer never sends a message the metadata is never refreshed
queue.buffering.max.ms5000啟用異步模式時(shí),producer緩存消息的時(shí)間。比如我們?cè)O(shè)置成1000時(shí),它會(huì)緩存1秒的數(shù)據(jù)再一次發(fā)送出去,這樣可以極大的增加broker吞吐量,但也會(huì)造成時(shí)效性的降低。
queue.buffering.max.messages10000采用異步模式時(shí)producer buffer 隊(duì)列里最大緩存的消息數(shù)量,如果超過(guò)這個(gè)數(shù)值,producer就會(huì)阻塞或者丟掉消息。
queue.enqueue.timeout.ms-1當(dāng)達(dá)到上面參數(shù)值時(shí)producer阻塞等待的時(shí)間。如果值設(shè)置為0,buffer隊(duì)列滿時(shí)producer不會(huì)阻塞,消息直接被丟掉。若值設(shè)置為-1,producer會(huì)被阻塞,不會(huì)丟消息。
batch.num.messages200采用異步模式時(shí),一個(gè)batch緩存的消息數(shù)量。達(dá)到這個(gè)數(shù)量值時(shí)producer才會(huì)發(fā)送消息。
send.buffer.bytes100 * 1024Socket write buffer size
client.id“”The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.
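
The table above describes the legacy (0.8-era) Scala producer. As an illustration of how these settings are passed, here is a minimal sketch in Java against that legacy API; the broker list, topic, key, and message value are placeholders, and async mode plus batching are chosen to match the recommendations above.

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class LegacyProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Used only to bootstrap topic metadata; a subset of the cluster is enough.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        // Async mode batches messages, trading latency for broker throughput.
        props.put("producer.type", "async");
        props.put("batch.num.messages", "200");
        props.put("queue.buffering.max.ms", "1000");
        // Serialize message values as strings (DefaultEncoder would expect byte[]);
        // with no key.serializer.class set, keys use the same encoder.
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Wait for the partition leader to acknowledge each request.
        props.put("request.required.acks", "1");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        // DefaultPartitioner hashes the key to choose the partition.
        producer.send(new KeyedMessage<>("test-topic", "user-42", "hello kafka"));
        producer.close();
    }
}
```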

Consumer configuration

屬性默認(rèn)值描述
group.idConsumer的組ID,相同goup.id的consumer屬于同一個(gè)組。
zookeeper.connectConsumer的zookeeper連接串,要和broker的配置一致。
consumer.idnull如果不設(shè)置會(huì)自動(dòng)生成。
socket.timeout.ms30 * 1000網(wǎng)絡(luò)請(qǐng)求的socket超時(shí)時(shí)間。實(shí)際超時(shí)時(shí)間由max.fetch.wait + socket.timeout.ms 確定。
socket.receive.buffer.bytes64 * 1024The socket receive buffer for network requests.
fetch.message.max.bytes1024 * 1024查詢topic-partition時(shí)允許的最大消息大小。consumer會(huì)為每個(gè)partition緩存此大小的消息到內(nèi)存,因此,這個(gè)參數(shù)可以控制consumer的內(nèi)存使用量。這個(gè)值應(yīng)該至少比server允許的最大消息大小大,以免producer發(fā)送的消息大于consumer允許的消息。
num.consumer.fetchers1The number fetcher threads used to fetch data.
auto.commit.enabletrue如果此值設(shè)置為true,consumer會(huì)周期性的把當(dāng)前消費(fèi)的offset值保存到zookeeper。當(dāng)consumer失敗重啟之后將會(huì)使用此值作為新開(kāi)始消費(fèi)的值。
auto.commit.interval.ms60 * 1000Consumer提交offset值到zookeeper的周期。
queued.max.message.chunks2用來(lái)被consumer消費(fèi)的message chunks 數(shù)量, 每個(gè)chunk可以緩存fetch.message.max.bytes大小的數(shù)據(jù)量。
fetch.min.bytes | 1 | The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request waits for that much data to accumulate before answering.
fetch.wait.max.ms | 100 | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes.
rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance.
refresh.leader.backoff.ms | 200 | Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.
auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or an offset is out of range. smallest: automatically reset the offset to the smallest offset; largest: automatically reset the offset to the largest offset; anything else: throw an exception to the consumer.
consumer.timeout.ms | -1 | Throw an exception to the consumer if no message is available for consumption within the specified interval.
exclude.internal.topics | true | Whether messages from internal topics (such as offsets) should be exposed to the consumer.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper within this period of time, it is considered dead and a rebalance will occur.
zookeeper.connection.timeout.ms | 6000 | The maximum time the client waits while establishing a connection to ZooKeeper.
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower can be behind a ZooKeeper leader.
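
To show where these settings land, here is a minimal sketch of the legacy high-level (ZooKeeper-based) consumer that this table describes; the ZooKeeper string, group id, and topic name are placeholders.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class LegacyConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must match the brokers' zookeeper.connect, including any chroot path.
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181/kafka");
        // Consumers sharing a group.id split the topic's partitions among themselves.
        props.put("group.id", "demo-group");
        // With no committed offset yet, start from the newest messages.
        props.put("auto.offset.reset", "largest");
        // Commit consumed offsets to ZooKeeper once a minute.
        props.put("auto.commit.enable", "true");
        props.put("auto.commit.interval.ms", "60000");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for one stream (one consuming thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("test-topic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);

        // hasNext() blocks until a message arrives (or consumer.timeout.ms expires, if set).
        ConsumerIterator<byte[], byte[]> it = streams.get("test-topic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
        connector.shutdown();
    }
}
```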

本博客僅為博主學(xué)習(xí)總結(jié),感謝各大網(wǎng)絡(luò)平臺(tái)的資料。蟹蟹!!

轉(zhuǎn)載于:https://www.cnblogs.com/upuptop/p/11154290.html
