

Detailed Steps for Kafka Cluster Deployment (Including ZooKeeper Installation)


Kafka Cluster Deployment

Note: if JDK 1.8 and ZooKeeper are already installed and configured, you can skip the other steps and go straight to installing Kafka.

See also: Kafka basics and common commands.

1. Environment Preparation

1.1 Cluster Planning

node01    node02    node03
  zk        zk        zk
kafka     kafka     kafka

1.2 Download the Release Package

http://kafka.apache.org/downloads.html

1.3 Virtual Machine Preparation

1) Prepare three virtual machines

2) Configure IP addresses

3) Configure hostnames (a minimal name-resolution sketch follows the firewall commands below)

4) Disable the firewall on all three hosts (requires root privileges)

[hadoop@node01 hadoop]# chkconfig iptables off

[hadoop@node02 hadoop]# chkconfig iptables off

[hadoop@node03 hadoop]# chkconfig iptables off
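For steps 2) and 3) above, a minimal sketch of the name-resolution setup on each node; the 192.168.1.x addresses are placeholders for whatever your virtual machines actually use:

# run as root on every node, replacing the IPs with the real addresses of your VMs
cat >> /etc/hosts <<'EOF'
192.168.1.101 node01
192.168.1.102 node02
192.168.1.103 node03
EOF
hostnamectl set-hostname node01    # node02 / node03 on the other machines; on CentOS 6, edit /etc/sysconfig/network instead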


1.4 Install JDK 1.8 or Later
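The original gives no commands for this step; a minimal tarball-based sketch follows, where the archive name and version are placeholders for whichever 1.8 build you download:

tar -zxvf jdk-8u144-linux-x64.tar.gz -C /bd/
# append to /etc/profile (as root), then run: source /etc/profile
export JAVA_HOME=/bd/jdk1.8.0_144
export PATH=$PATH:$JAVA_HOME/bin
java -version    # should report version 1.8.x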

1.5 Install ZooKeeper

0) Cluster planning

Deploy ZooKeeper on all three nodes: node01, node02, and node03.

1) Extract and install

(1) Extract the ZooKeeper package to the /bd/ directory

[hadoop@node01 ~]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /bd/

Then rename the extracted directory so it matches the paths used below: mv /bd/zookeeper-3.4.10 /bd/zk

(2) Create a zkData directory under /bd/zk/

mkdir -p zkData

(3) In /bd/zk/conf, rename zoo_sample.cfg to zoo.cfg

mv zoo_sample.cfg zoo.cfg

2) Configure zoo.cfg

(1) Specific settings: set the data directory

dataDir=/bd/zk/zkData

and add the following cluster entries:

#######################cluster##########################

server.2=node01:2888:3888

server.3=node02:2888:3888

server.4=node03:2888:3888

(2) What these parameters mean

server.A=B:C:D

A is a number identifying which server this is;

B is the server's IP address or hostname;

C is the port this server uses to exchange data with the cluster's leader;

D is the port used for leader election: if the current leader fails, the remaining servers use this port to communicate with each other and elect a new leader.

In cluster mode you also create a file named myid in the dataDir directory. It contains a single number, the value of A for that server. When ZooKeeper starts, it reads this file and compares the number against the entries in zoo.cfg to determine which server it is.
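Putting the pieces together, the complete zoo.cfg used in this guide would look roughly as follows; tickTime, initLimit, syncLimit and clientPort are the stock values inherited from zoo_sample.cfg:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/bd/zk/zkData
clientPort=2181
#######################cluster##########################
server.2=node01:2888:3888
server.3=node02:2888:3888
server.4=node03:2888:3888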

3) Cluster setup

(1) Create a file named myid under /bd/zk/zkData

touch myid

Be sure to create the myid file on Linux itself; creating it in Notepad++ on Windows can easily introduce encoding problems.

(2) Edit the myid file

vi myid

Put the number matching this server's server.X entry into the file, e.g. 2 on node01.

(3) Copy the configured ZooKeeper directory to the other machines

scp -r zk/ hadoop@node02:/bd/

scp -r zk/ hadoop@node03:/bd/

then change the contents of myid to 3 on node02 and 4 on node03 (a per-node one-liner is sketched below).
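A minimal sketch of setting all three myid files in one go, assuming passwordless SSH has been set up for the hadoop user:

echo 2 > /bd/zk/zkData/myid                  # on node01
ssh node02 "echo 3 > /bd/zk/zkData/myid"
ssh node03 "echo 4 > /bd/zk/zkData/myid"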

(4) Start ZooKeeper on each node

[hadoop@node01 zk]# bin/zkServer.sh start

[hadoop@node02 zk]# bin/zkServer.sh start

[hadoop@node03 zk]# bin/zkServer.sh start

(5) Check the status

[hadoop@node01 zk]# bin/zkServer.sh status

JMX enabled by default

Using config: /bd/zk/bin/../conf/zoo.cfg

Mode: follower

[hadoop@node02 zk]# bin/zkServer.sh status

JMX enabled by default

Using config: /bd/zk/bin/../conf/zoo.cfg

Mode: leader

[hadoop@node03 zk]# bin/zkServer.sh status

JMX enabled by default

Using config: /bd/zk/bin/../conf/zoo.cfg

Mode: follower
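One node should report Mode: leader and the other two Mode: follower. To check all three nodes from a single host (again assuming passwordless SSH), something like this works:

for h in node01 node02 node03; do ssh $h "source /etc/profile; /bd/zk/bin/zkServer.sh status"; done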


2.2 Kafka Cluster Deployment

1) Extract the package

[hadoop@node01 ~]$ tar -zxvf kafka_2.11-0.11.0.0.tgz -C /bd/

2) Rename the extracted directory

[hadoop@node01 bd]$ mv kafka_2.11-0.11.0.0/ kafka

3) Create a logs directory under /bd/kafka

[hadoop@node01 kafka]$ mkdir logs

4) Edit the configuration file

[hadoop@node01 kafka]$ cd config/

[hadoop@node01 config]$ vi server.properties

Set the following (the values that typically need changing for your cluster are broker.id, delete.topic.enable, log.dirs, and zookeeper.connect):

# Globally unique broker ID; must not be duplicated
broker.id=0

# Enable topic deletion
delete.topic.enable=true

# Number of threads handling network requests
num.network.threads=3

# Number of threads handling disk I/O
num.io.threads=8

# Socket send buffer size
socket.send.buffer.bytes=102400

# Socket receive buffer size
socket.receive.buffer.bytes=102400

# Maximum size of a socket request
socket.request.max.bytes=104857600

# Directory where Kafka stores its data (log segments)
log.dirs=/bd/kafka/logs

# Default number of partitions per topic on this broker
num.partitions=1

# Number of threads used to recover and clean data under log.dirs
num.recovery.threads.per.data.dir=1

# Maximum time a segment file is retained before it is deleted
log.retention.hours=168

# ZooKeeper cluster connection string
zookeeper.connect=node01:2181,node02:2181,node03:2181


5) Configure environment variables

[hadoop@node01 bd]# vi /etc/profile

Append the following:

#KAFKA_HOME

export KAFKA_HOME=/bd/kafka

export PATH=$PATH:$KAFKA_HOME/bin

[hadoop@node01 bd]# source /etc/profile


6) Distribute the installation

[hadoop@node01 etc]# xsync profile

[hadoop@node01 bd]$ xsync kafka/
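xsync here is a custom rsync-based distribution script common in Hadoop course material, not a standard tool. If it is not available, plain scp does the same job, for example:

scp -r /bd/kafka hadoop@node02:/bd/
scp -r /bd/kafka hadoop@node03:/bd/
scp /etc/profile root@node02:/etc/profile    # requires root; then run source /etc/profile on each node
scp /etc/profile root@node03:/etc/profile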

7) On node02 and node03, edit /bd/kafka/config/server.properties and change broker.id to 1 and 2 respectively

Note: broker.id must be unique across the cluster.
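If you prefer to make the change remotely, a minimal sketch (assuming passwordless SSH and that the copied files still contain broker.id=0):

ssh node02 "sed -i 's/^broker.id=0/broker.id=1/' /bd/kafka/config/server.properties"
ssh node03 "sed -i 's/^broker.id=0/broker.id=2/' /bd/kafka/config/server.properties"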

8) Start the cluster

Start Kafka on node01, node02, and node03 in turn:

[hadoop@node01 kafka]$ bin/kafka-server-start.sh config/server.properties &

[hadoop@node02 kafka]$ bin/kafka-server-start.sh config/server.properties &

[hadoop@node03 kafka]$ bin/kafka-server-start.sh config/server.properties &

Note: ways to run Kafka in the background:

For a ready-made one-command start/stop script, see https://blog.csdn.net/qq_43412289/article/details/100633902; a minimal sketch of such a script also follows this list.

  • bin/kafka-server-start.sh config/server.properties (runs in the foreground)
  • bin/kafka-server-start.sh -daemon config/server.properties (runs in the background as a daemon)
  • bin/kafka-server-start.sh config/server.properties, then Ctrl+Z to suspend it, bg to resume it in the background, and fg to bring it back to the foreground
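The linked article describes a one-command start/stop script; a minimal sketch of the same idea (the script name kafka.sh, the node names, and the /bd/kafka paths are assumptions carried over from this guide) might look like this:

#!/bin/bash
# kafka.sh start|stop -- start or stop the Kafka brokers on all three nodes.
# Assumes passwordless SSH for the hadoop user and JAVA_HOME exported in /etc/profile
# (sourced explicitly, because non-interactive ssh shells do not read it).
case $1 in
start)
    for host in node01 node02 node03; do
        echo "---- starting kafka on $host ----"
        ssh $host "source /etc/profile; /bd/kafka/bin/kafka-server-start.sh -daemon /bd/kafka/config/server.properties"
    done
    ;;
stop)
    for host in node01 node02 node03; do
        echo "---- stopping kafka on $host ----"
        ssh $host "source /etc/profile; /bd/kafka/bin/kafka-server-stop.sh"
    done
    ;;
*)
    echo "Usage: $0 start|stop"
    ;;
esac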

2.3 Kafka Command-Line Operations

1) List all topics on the cluster

[hadoop@node01 kafka]$ bin/kafka-topics.sh --list --zookeeper node01:2181

2) Create a topic

[hadoop@node01 kafka]$ bin/kafka-topics.sh --create --zookeeper node02:2181 --replication-factor 3 --partitions 1 --topic first

Option descriptions:

--topic defines the topic name

--replication-factor defines the number of replicas

--partitions defines the number of partitions

3) Delete a topic

[hadoop@node01 kafka]$ bin/kafka-topics.sh --delete --zookeeper node01:2181 --topic first

This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (or the brokers have to be restarted for the deletion to take effect).

4) Produce messages

[hadoop@node01 kafka]$ bin/kafka-console-producer.sh --broker-list node01:9092 --topic first

>hello world

>hadoop hadoop

5) Consume messages

[hadoop@node01 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server node01:9092 --from-beginning --topic first

6) Describe a topic

[hadoop@node01 kafka]$ bin/kafka-topics.sh --topic first --describe --zookeeper node01:2181

Kafka Configuration Reference

Broker configuration

Property | Default | Description
broker.id | (none) | Required. Unique identifier for the broker.
log.dirs | /tmp/kafka-logs | Directories where Kafka stores its data. Several comma-separated directories may be given; a newly created partition is placed in the directory that currently holds the fewest partitions.
port | 9092 | Port on which the broker accepts client connections.
zookeeper.connect | null | ZooKeeper connection string in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; listing all of them improves reliability. A chroot path may be appended (hostname1:port1,hostname2:port2,hostname3:port3/chroot/path) so that this cluster's data is kept separate from other applications; note that consumers must use the same value.
message.max.bytes | 1000000 | Maximum message size the server accepts. It should match the consumer's maximum.message.size; otherwise a producer may write messages too large for consumers to fetch.
num.io.threads | 8 | Number of I/O threads serving read/write requests; it should be at least the number of disks on the server.
queued.max.requests | 500 | Size of the queue of requests awaiting the I/O threads; if the number of outstanding requests exceeds it, the network threads stop accepting new requests.
socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUF buffer the server prefers for socket connections.
socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUF buffer the server prefers for socket connections.
socket.request.max.bytes | 100 * 1024 * 1024 | Maximum request size the server accepts, to protect against out-of-memory conditions; it should be smaller than the Java heap size.
num.partitions | 1 | Default number of partitions for topics created without an explicit partition count; raising it (e.g. to 5) is suggested.
log.segment.bytes | 1024 * 1024 * 1024 | Maximum size of a segment file; a new segment is rolled once this is exceeded. May be overridden per topic.
log.roll.{ms,hours} | 24 * 7 hours | Maximum time before a new segment file is rolled. May be overridden per topic.
log.retention.{ms,minutes,hours} | 7 days | How long segment logs are retained; logs older than this are deleted. May be overridden per topic. With large data volumes, consider lowering it.
log.retention.bytes | -1 | Maximum data size per partition; data beyond it is deleted. Note that this limit applies per partition, not per topic. May be overridden at the log (topic) level.
log.retention.check.interval.ms | 5 minutes | How often the retention (deletion) policy is checked.
auto.create.topics.enable | true | Whether topics are created automatically. Setting it to false is suggested so that topic management stays under strict control and producers cannot create topics by writing to a mistyped name.
default.replication.factor | 1 | Default replication factor; raising it to 2 is suggested.
replica.lag.time.max.ms | 10000 | If a follower sends no fetch request within this window, the leader removes it from the ISR (in-sync replicas).
replica.lag.max.messages | 4000 | If a replica falls this many messages behind the leader, the leader removes it from the ISR.
replica.socket.timeout.ms | 30 * 1000 | Timeout for requests replicas send to the leader.
replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data.
replica.fetch.max.bytes | 1024 * 1024 | Number of bytes of messages to attempt to fetch per partition in the fetch requests replicas send to the leader.
replica.fetch.wait.max.ms | 500 | Maximum time to wait for data to arrive on the leader in the fetch requests sent by replicas.
num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders; increasing it raises the degree of I/O parallelism in the follower broker.
fetch.purgatory.purge.interval.requests | 1000 | Purge interval (in number of requests) of the fetch request purgatory.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the broker sends no heartbeat within this time, ZooKeeper considers it dead. Too low and nodes are marked dead too easily; too high and real failures are detected too late.
zookeeper.connection.timeout.ms | 6000 | Timeout for the client connection to ZooKeeper.
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader.
controlled.shutdown.enable | true | Enables controlled shutdown: before stopping, the broker moves all leader partitions it holds to other brokers. Enabling it is suggested, as it improves cluster stability.
auto.leader.rebalance.enable | true | If enabled, the controller periodically balances partition leadership across brokers by returning leadership to the "preferred" replica of each partition when it is available.
leader.imbalance.per.broker.percentage | 10 | Percentage of leader imbalance allowed per broker; the controller rebalances leadership if the ratio exceeds this value.
leader.imbalance.check.interval.seconds | 300 | How often leader imbalance is checked.
offset.metadata.max.bytes | 4096 | Maximum amount of metadata clients may store together with their offsets.
connections.max.idle.ms | 600000 | Idle connection timeout: the socket processor threads close connections that have been idle longer than this.
num.recovery.threads.per.data.dir | 1 | Number of threads per data directory used for log recovery at startup and for flushing at shutdown.
unclean.leader.election.enable | true | Whether replicas outside the ISR may be elected leader as a last resort, even though this can cause data loss.
delete.topic.enable | false | Enables topic deletion; setting it to true is suggested.
offsets.topic.num.partitions | 50 | Number of partitions of the offset commit topic. Changing it after deployment is not supported, so a higher value (e.g. 100-200) is recommended for production.
offsets.topic.retention.minutes | 1440 | Offsets older than this are marked for deletion; the actual purge happens when the log cleaner compacts the offsets topic.
offsets.retention.check.interval.ms | 600000 | How often the offset manager checks for stale offsets.
offsets.topic.replication.factor | 3 | Replication factor of the offset commit topic; a higher value (three or four) is recommended for availability. If the topic is created while fewer brokers than this are running, it is created with fewer replicas.
offsets.topic.segment.bytes | 104857600 | Segment size of the offsets topic; since it is a compacted topic, keep this relatively small to speed up log compaction and loading.
offsets.load.buffer.size | 5242880 | Batch size in bytes used to read offset segments into the offset manager's cache when a broker becomes the offset manager for a set of consumer groups (i.e. leader of an offsets topic partition).
offsets.commit.required.acks | -1 | Number of acknowledgements required before an offset commit is accepted; similar to the producer's acks setting. The default should normally not be overridden.
offsets.commit.timeout.ms | 5000 | The offset commit is delayed until this timeout or until the required number of replicas have received it; similar to the producer request timeout.

Producer configuration (the legacy Scala producer client)

Property | Default | Description
metadata.broker.list | (none) | Required. List of brokers the producer queries at startup for topic metadata; it may be a subset of the cluster. Note that this list is only used to fetch metadata; the producer then picks suitable brokers from the metadata and opens socket connections to them. Format: host1:port1,host2:port2.
request.required.acks | 0 | How many acknowledgements the producer requires (covered in a section of the original reference not included here).
request.timeout.ms | 10000 | How long the broker waits for acknowledgements; if this is exceeded, an error is returned to the client.
producer.type | sync | Synchronous or asynchronous mode (sync / async). Async lets the producer push data in batches, which greatly improves broker throughput; async is recommended.
serializer.class | kafka.serializer.DefaultEncoder | Serializer class; the default serializes to byte[].
key.serializer.class | (none) | Serializer class for keys; defaults to the same as serializer.class.
partitioner.class | kafka.producer.DefaultPartitioner | Partitioner class; the default hashes the key.
compression.codec | none | Compression format for producer messages; valid values are "none", "gzip" and "snappy" (compression is covered in a section of the original reference not included here).
compressed.topics | null | Topics to compress. If a compression codec is chosen above, compression applies only to the topics listed here; if the list is empty, it applies to all topics.
message.send.max.retries | 3 | Number of retries when a send fails. Note that network problems can cause the producer to keep retrying.
retry.backoff.ms | 100 | Before each retry the producer refreshes the metadata of the relevant topics to see whether a new leader has been elected; since leader election takes some time, this is how long the producer waits before refreshing.
topic.metadata.refresh.interval.ms | 600 * 1000 | The producer refreshes topic metadata on failure (missing partition, leader not available, ...) and also polls regularly (default every 10 min, i.e. 600000 ms). A negative value refreshes only on failure; zero refreshes after every message sent (not recommended). Note that the refresh happens only after a message is sent, so a producer that never sends never refreshes its metadata.
queue.buffering.max.ms | 5000 | In async mode, how long the producer buffers messages. For example, with 1000 it accumulates one second of data and sends it in one go, which greatly increases broker throughput at the cost of latency.
queue.buffering.max.messages | 10000 | In async mode, the maximum number of messages the producer buffers; beyond that the producer either blocks or drops messages.
queue.enqueue.timeout.ms | -1 | How long the producer blocks when the buffer limit above is reached. With 0 the producer never blocks and drops messages as soon as the buffer is full; with -1 it blocks indefinitely and never drops messages.
batch.num.messages | 200 | In async mode, the number of messages per batch; the producer sends only once this many messages have accumulated.
send.buffer.bytes | 100 * 1024 | Socket write buffer size.
client.id | "" | User-specified string sent in each request to help trace calls; it should logically identify the application making the request.

Consumer configuration (the legacy high-level Scala consumer)

Property | Default | Description
group.id | (none) | Required. Consumer group ID; consumers with the same group.id belong to the same group.
zookeeper.connect | (none) | The consumer's ZooKeeper connection string; it must match the broker configuration.
consumer.id | null | Generated automatically if not set.
socket.timeout.ms | 30 * 1000 | Socket timeout for network requests; the effective timeout is max.fetch.wait + socket.timeout.ms.
socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests.
fetch.message.max.bytes | 1024 * 1024 | Maximum message size allowed when fetching from a topic partition. The consumer buffers this much data per partition in memory, so the value also bounds the consumer's memory use. It should be at least as large as the server's maximum message size, so that a producer cannot write messages the consumer is unable to fetch.
num.consumer.fetchers | 1 | Number of fetcher threads used to fetch data.
auto.commit.enable | true | If true, the consumer periodically saves the offsets it has consumed to ZooKeeper; after a failure and restart, consumption resumes from the saved offsets.
auto.commit.interval.ms | 60 * 1000 | How often the consumer commits offsets to ZooKeeper.
queued.max.message.chunks | 2 | Number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.
fetch.min.bytes | 1 | Minimum amount of data the server should return for a fetch request; if not enough data is available, the request waits for that much data to accumulate before answering.
fetch.wait.max.ms | 100 | Maximum time the server blocks before answering a fetch request when fetch.min.bytes is not yet satisfied.
rebalance.backoff.ms | 2000 | Backoff time between retries during a rebalance.
refresh.leader.backoff.ms | 200 | Backoff time before trying to determine the leader of a partition that has just lost its leader.
auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or the offset is out of range: smallest resets to the smallest offset, largest resets to the largest offset, anything else throws an exception to the consumer.
consumer.timeout.ms | -1 | If no message is consumed within this time, the consumer throws an exception.
exclude.internal.topics | true | Whether messages from internal topics (such as offsets) are exposed to the consumer.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout; a consumer that fails to heartbeat within this time is considered dead and a rebalance occurs.
zookeeper.connection.timeout.ms | 6000 | Maximum time the client waits while establishing a connection to ZooKeeper.
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower can lag behind the ZooKeeper leader.

Summary

The above walks through deploying a three-node ZooKeeper ensemble, deploying Kafka brokers on top of it, verifying the cluster with the basic command-line operations, and a reference of common broker, producer, and consumer configuration properties.