| broker.id | | Required; the unique identifier of this broker. |
| log.dirs | /tmp/kafka-logs | Directories where Kafka stores its data. Multiple directories may be listed, separated by commas; a newly created partition is placed in the directory that currently holds the fewest partitions. |
| port | 9092 | Port on which the broker accepts client connections. |
| zookeeper.connect | null | ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be given; for reliability, list them all. This setting also allows a ZooKeeper chroot path under which all data for this Kafka cluster is stored; to keep the cluster separate from other applications, specify a cluster-specific path in the form hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same value for this setting. |
| message.max.bytes | 1000000 | The largest message size the server will accept. This must be kept in sync with the consumers' maximum fetch size (e.g., fetch.message.max.bytes), otherwise a producer can publish messages too large for consumers to fetch. |
| num.io.threads | 8 | Number of I/O threads the server uses to serve read/write requests; this should be at least the number of disks on the server. |
| queued.max.requests | 500 | Size of the request queue served by the I/O threads; when the number of queued requests exceeds this value, the network threads stop accepting new requests. |
| socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUF buffer the server prefers for socket connections. |
| socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUF buffer the server prefers for socket connections. |
| socket.request.max.bytes | 100 * 1024 * 1024 | The maximum request size the server will accept, used to guard against out-of-memory errors; it should be smaller than the Java heap size. |
| num.partitions | 1 | Default number of partitions for a topic created without an explicit partition count; raising this to 5 is recommended. |
| log.segment.bytes | 1024 * 1024 * 1024 | Maximum size of a segment file; once exceeded, a new segment is rolled. Can be overridden per topic. |
| log.roll.{ms,hours} | 24 * 7 hours | Maximum time before a new segment file is rolled, even if the size limit has not been reached. Can be overridden per topic. |
| log.retention.{ms,minutes,hours} | 7 days | Retention period for Kafka log segments; segments older than this are deleted. Can be overridden per topic. With large data volumes, lowering this value is recommended. |
| log.retention.bytes | -1 | Maximum amount of data retained per partition; when exceeded, old partition data is deleted. Note that this limit applies per partition, not per topic. Can be overridden per topic. |
| log.retention.check.interval.ms | 5 minutes | How often the retention (deletion) policy is checked. |
| auto.create.topics.enable | true | Whether topics are created automatically. Setting this to false is recommended, so that topic management stays under strict control and producers cannot accidentally write to a mistyped topic. |
| default.replication.factor | 1 | Default replication factor; raising this to 2 is recommended. |
| replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, the leader removes that follower from the ISR (in-sync replicas). |
| replica.lag.max.messages | 4000 | If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR. |
| replica.socket.timeout.ms | 30 * 1000 | Socket timeout for requests sent from replicas to the leader. |
| replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data. |
| replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader. |
| replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader. |
| num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker. |
| fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory. |
| zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the broker fails to heartbeat to ZooKeeper within this time, ZooKeeper considers it dead. Setting this too low makes brokers prone to being marked dead spuriously; setting it too high delays the detection of real failures. |
| zookeeper.connection.timeout.ms | 6000 | Timeout for the client to establish a connection to ZooKeeper. |
| zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader. |
| controlled.shutdown.enable | true | Enables controlled shutdown of the broker. If enabled, the broker migrates all leaders it holds to other brokers before shutting itself down. Keeping this enabled is recommended for cluster stability. |
| auto.leader.rebalance.enable | true | If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the “preferred” replica for each partition if it is available. |
| leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker. |
| leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance. |
| offset.metadata.max.bytes | 4096 | The maximum amount of metadata to allow clients to save with their offsets. |
| connections.max.idle.ms | 600000 | Idle connections timeout: the server socket processor threads close the connections that idle more than this. |
| num.recovery.threads.per.data.dir | 1 | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. |
| unclean.leader.election.enable | true | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. |
| delete.topic.enable | false | Enables topic deletion; setting this to true is recommended. |
| offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200). |
| offsets.topic.retention.minutes | 1440 | Offsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the offsets topic. |
| offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets. |
| offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created when the cluster has fewer brokers than the replication factor, it will be created with fewer replicas. |
| offsets.topic.segment.bytes | 104857600 | Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads. |
| offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting corresponds to the batch size (in bytes) to use when reading from the offsets segments when loading offsets into the offset manager’s cache. |
| offsets.commit.required.acks | -1 | The number of acknowledgements that are required before the offset commit can be accepted. This is similar to the producer’s acknowledgement setting. In general, the default should not be overridden. |
| offsets.commit.timeout.ms | 5000 | The offset commit will be delayed until this timeout or the required number of replicas have received the offset commit. This is similar to the producer request timeout. |
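Putting the recommendations above together, the following is a minimal `server.properties` sketch, not a definitive configuration; the broker.id, host names, chroot path, and data directories are illustrative placeholders.

```properties
# Required: unique identifier for this broker
broker.id=0

# Port on which the broker accepts client connections
port=9092

# One or more data directories; a new partition goes to the directory
# currently holding the fewest partitions
log.dirs=/data/kafka-logs-1,/data/kafka-logs-2

# ZooKeeper connection string with a chroot path to isolate this cluster's data;
# consumers must use the same chroot
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka

# Recommendations from the table above
num.partitions=5
default.replication.factor=2
auto.create.topics.enable=false
delete.topic.enable=true
controlled.shutdown.enable=true
```

Several of the log.* settings can also be overridden per topic when the topic is created. A hedged example, assuming a hypothetical topic named my-topic and an illustrative 10 GB retention limit:

```properties
# bin/kafka-topics.sh --zookeeper zk1:2181/kafka --create --topic my-topic \
#   --partitions 5 --replication-factor 2 --config retention.bytes=10737418240
```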