Kafka Async Send and Blocking: Kafka Learning, Part 1
1. Download the Kafka source code from GitHub
You can see that the Kafka open-source community is very active.
2. Set up the Kafka build environment
To build Kafka you first need to install Scala and Gradle. The Scala plugin you install must match your IDEA version, and don't forget to configure the environment variables when installing Gradle.
Once those are installed, you can edit gradle.properties:
group=org.apache.kafka
# NOTE: When you change this version number, you should also make sure to update
# the version numbers in
#  - docs/js/templateData.js
#  - tests/kafkatest/__init__.py
#  - tests/kafkatest/version.py (variable DEV_VERSION)
#  - kafka-merge-pr.py
version=1.1.2-SNAPSHOT
scalaVersion=2.11.12
task=build
org.gradle.jvmargs=-Xmx2g -Xss4m -XX:+UseParallelGC
After editing the file you can build the project. Run gradle idea to generate the IDEA project and compile; because I had already built it once, it finished quickly here, but normally it takes quite a while.
Pay particular attention to the examples package:
3. The producer
The producer class:
The Producer class holds a KafkaProducer, the topic, and a flag indicating whether sends are asynchronous. All three fields are final, so we know they must be supplied at construction time. The constructor first fills in the configuration, creates the KafkaProducer from that configuration, assigns the topic to topic, and stores whether messages are sent asynchronously in isAsync.
The key part is the send call inside the run method; messages can be sent either synchronously or asynchronously.
/**
 * The producer.
 */
public class Producer extends Thread {
    // the Kafka producer
    private final KafkaProducer<Integer, String> producer;
    // the topic
    private final String topic;
    // whether sends are asynchronous
    private final Boolean isAsync;

    // Constructor: bootstrap servers, client id, key/value serializers, create the Kafka producer, topic, async flag
    public Producer(String topic, Boolean isAsync) {
        Properties props = new Properties();
        props.put("bootstrap.servers", KafkaProperties.KAFKA_SERVER_URL + ":" + KafkaProperties.KAFKA_SERVER_PORT);
        props.put("client.id", "DemoProducer");
        props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // create the Kafka producer
        producer = new KafkaProducer<>(props);
        this.topic = topic;
        this.isAsync = isAsync;
    }

    // run the producer
    public void run() {
        int messageNo = 1;
        while (true) {
            // the message payload, e.g. Message_1
            String messageStr = "Message_" + messageNo;
            // start time
            long startTime = System.currentTimeMillis();
            if (isAsync) { // Send asynchronously
                producer.send(new ProducerRecord<>(topic,
                    messageNo,
                    messageStr), new DemoCallBack(startTime, messageNo, messageStr));
            } else { // Send synchronously
                try {
                    producer.send(new ProducerRecord<>(topic,
                        messageNo,
                        messageStr)).get();
                    System.out.println("Sent message: (" + messageNo + ", " + messageStr + ")");
                } catch (InterruptedException | ExecutionException e) {
                    e.printStackTrace();
                }
            }
            ++messageNo;
        }
    }
}

// the callback
class DemoCallBack implements Callback {
    // start time
    private final long startTime;
    // key
    private final int key;
    // message
    private final String message;

    public DemoCallBack(long startTime, int key, String message) {
        this.startTime = startTime;
        this.key = key;
        this.message = message;
    }

    /**
     * A callback method the user can implement to provide asynchronous handling of request completion. This method will
     * be called when the record sent to the server has been acknowledged. Exactly one of the arguments will be
     * non-null.
     *
     * @param metadata  The metadata for the record that was sent (i.e. the partition and offset). Null if an error
     *                  occurred.
     * @param exception The exception thrown during processing of this record. Null if no error occurred.
     */
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        // elapsed time
        long elapsedTime = System.currentTimeMillis() - startTime;
        // if metadata is non-null, print the partition and offset the message was sent to and the elapsed time;
        // otherwise print the exception
        if (metadata != null) {
            System.out.println("message(" + key + ", " + message + ") sent to partition(" + metadata.partition() +
                "), " + "offset(" + metadata.offset() + ") in " + elapsedTime + " ms");
        } else {
            exception.printStackTrace();
        }
    }
}
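For context, here is a minimal sketch of how this example class is typically launched. KafkaProperties.TOPIC comes from the same examples package; the ProducerDemo wrapper class here is only illustrative, not part of the Kafka source.

```java
// illustrative launcher for the example producer above
public class ProducerDemo {
    public static void main(String[] args) {
        // true -> asynchronous sends completed via DemoCallBack; false -> blocking sends via get()
        Producer producerThread = new Producer(KafkaProperties.TOPIC, true);
        producerThread.start(); // Producer extends Thread, so start() runs the send loop above
    }
}
```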
The send() method does two things: it first runs the record through the producer interceptors, which may enhance it, and then performs the actual send. There is a large block of javadoc on send() that is well worth reading. Here I am looking at the asynchronous path, which normally completes through a callback.
The gist of that long comment: it describes how records are sent and how callbacks work, the partition and offset recorded for each send, and transactions and idempotence. Transactions are covered in some detail, including the note that if the message format of the destination topic has not been upgraded to 0.11.0.0, idempotent and transactional produce requests will fail with an {@link org.apache.kafka.common.errors.UnsupportedForMessageFormatException} error. If this is encountered during a transaction, it is possible to abort and continue, but further sends to the same topic will keep receiving the same exception until the topic is upgraded. Finally, blocking or computationally expensive callbacks should be parallelized in your own thread pool.
A more detailed, point-by-point reading:
1. Asynchronously send a record to a topic and invoke the provided callback once the send has been acknowledged.
2. The send is asynchronous, and this method returns as soon as the record has been stored in the buffer of records waiting to be sent. This allows sending many records in parallel without blocking to wait for the response after each one.
3. The result of the send is a {@link RecordMetadata} specifying the partition the record was sent to, the assigned offset, and the timestamp of the record.
   If the topic uses {@link org.apache.kafka.common.record.TimestampType#CREATE_TIME CreateTime}, the timestamp will be the user-provided timestamp, or the record send time if the user did not specify one.
   If the topic uses {@link org.apache.kafka.common.record.TimestampType#LOG_APPEND_TIME LogAppendTime}, the timestamp will be the Kafka broker's local time when the message is appended.
4. Since the send call is asynchronous, it returns a {@link java.util.concurrent.Future Future} for the {@link RecordMetadata} that will be assigned to this record. Calling {@link java.util.concurrent.Future#get() get()} on this future will block until the associated request completes, then return the metadata for the record or throw any exception that occurred while sending it.
5. If you want to simulate a simple blocking call, you can call get() immediately.
6. Fully non-blocking usage can make use of the {@link Callback} parameter to provide a callback that is invoked when the request is complete.
7. When used as part of a transaction, it is not necessary to define a callback or check the result of the future in order to detect errors from send().
   If any of the send calls fail with an irrecoverable error, the final {@link #commitTransaction()} call will fail and throw the exception from the last failed send.
   When this happens, your application should call {@link #abortTransaction()} to reset the state and continue sending data.
8. Some transactional send errors cannot be resolved with a call to {@link #abortTransaction()}. In particular, if a transactional send finishes with a {@link ProducerFencedException}, an {@link org.apache.kafka.common.errors.OutOfOrderSequenceException},
   an {@link org.apache.kafka.common.errors.UnsupportedVersionException}, or an {@link org.apache.kafka.common.errors.AuthorizationException}, then the only option left is to call {@link #close()}.
   Fatal errors cause the producer to enter a defunct state in which future API calls will continue to raise the same underlying error wrapped in a new {@link KafkaException}.
9. It is a similar picture when idempotence is enabled but no transactional.id has been configured. In this case, {@link org.apache.kafka.common.errors.UnsupportedVersionException} and {@link org.apache.kafka.common.errors.AuthorizationException} are considered fatal errors. However, {@link ProducerFencedException} does not need to be handled. Additionally, it is possible to continue sending after receiving an {@link org.apache.kafka.common.errors.OutOfOrderSequenceException}, but doing so can result in pending messages being delivered out of order. To ensure proper ordering, you should close the producer and instantiate a new one.
10. If the message format of the destination topic has not been upgraded to 0.11.0.0, idempotent and transactional produce requests will fail with an {@link org.apache.kafka.common.errors.UnsupportedForMessageFormatException} error. If this is encountered during a transaction, it is possible to abort and continue. But note that future sends to the same topic will continue receiving the same exception until the topic is upgraded.
11. Note that callbacks will generally execute in the producer's I/O thread and so should be reasonably fast, or they will delay the sending of messages from other threads. If you want to execute blocking or computationally expensive callbacks, it is recommended to use your own {@link java.util.concurrent.Executor} in the callback body to parallelize processing (see the sketch after this list).
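To make point 11 concrete, here is a minimal sketch, not taken from the Kafka source, of a callback that hands expensive post-processing to its own ExecutorService so the producer I/O thread is not held up. The processResult method and the pool size are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OffloadingCallback implements Callback {
    // illustrative: a shared pool for heavy post-processing, kept off the producer I/O thread
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace(); // keep work on the I/O thread minimal
            return;
        }
        // hand the expensive part to the worker pool instead of doing it here
        WORKERS.submit(() -> processResult(metadata));
    }

    private void processResult(RecordMetadata metadata) {
        // hypothetical heavy work, e.g. writing an audit record
        System.out.println("acked " + metadata.topic() + "-" + metadata.partition() + "@" + metadata.offset());
    }
}
```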
@Override
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record);
    // send the (possibly modified) record
    return doSend(interceptedRecord, callback);
}
The interceptors hook into the send and let you apply custom enhancements:
From the comment we can see that the interception step never propagates exceptions, so failures from individual interceptors have to be caught with try...catch inside the loop itself:
public ProducerRecord<K, V> onSend(ProducerRecord<K, V> record) {
    ProducerRecord<K, V> interceptRecord = record;
    for (ProducerInterceptor<K, V> interceptor : this.interceptors) {
        try {
            // intercept during send
            interceptRecord = interceptor.onSend(interceptRecord);
        } catch (Exception e) {
            // do not propagate interceptor exception, log and continue calling other interceptors
            // be careful not to throw exception from here
            if (record != null)
                log.warn("Error executing interceptor onSend callback for topic: {}, partition: {}", record.topic(), record.partition(), e);
            else
                log.warn("Error executing interceptor onSend callback", e);
        }
    }
    return interceptRecord;
}
An interceptor that appends to the message:
// intercept the record during send
@Override
public ProducerRecord<Integer, String> onSend(ProducerRecord<Integer, String> record) {
    // counter
    onSendCount++;
    // optionally throw while sending
    if (throwExceptionOnSend)
        throw new KafkaException("Injected exception in AppendProducerInterceptor.onSend");
    // return a new producer record with the appended value
    return new ProducerRecord<>(
            record.topic(), record.partition(), record.key(), record.value().concat(appendStr));
}
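For completeness, a producer interceptor like this is normally wired in through the producer configuration rather than called directly. A minimal sketch is below; the broker address and the fully qualified interceptor class name are only placeholders, not something defined by this article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class InterceptorWiring {
    public static KafkaProducer<Integer, String> newProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // interceptors are listed by class name and invoked in order on every send()
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
                "com.example.AppendProducerInterceptor"); // hypothetical class name
        return new KafkaProducer<>(props);
    }
}
```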
The producer record carries the topic, timestamp, partition, key-value pair, and headers:
public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value, Iterable<Header> headers) {
    // a null topic is not allowed
    if (topic == null)
        throw new IllegalArgumentException("Topic cannot be null.");
    // a non-null, negative timestamp is not allowed
    if (timestamp != null && timestamp < 0)
        throw new IllegalArgumentException(
                String.format("Invalid timestamp: %d. Timestamp should always be non-negative or null.", timestamp));
    // a non-null, negative partition is not allowed
    if (partition != null && partition < 0)
        throw new IllegalArgumentException(
                String.format("Invalid partition: %d. Partition number should always be non-negative or null.", partition));
    // topic
    this.topic = topic;
    // partition
    this.partition = partition;
    // key
    this.key = key;
    // value
    this.value = value;
    // timestamp
    this.timestamp = timestamp;
    // headers
    this.headers = new RecordHeaders(headers);
}
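As a quick illustration of how these constructor rules surface when creating records, here is a short sketch; the topic name and values are made up for the example.

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordExamples {
    public static void main(String[] args) {
        // topic + key + value: partition and timestamp stay null and are filled in later by the producer
        ProducerRecord<Integer, String> simple = new ProducerRecord<>("demo-topic", 1, "Message_1");

        // fully specified: partition 0 and an explicit (non-negative) timestamp
        ProducerRecord<Integer, String> explicit =
                new ProducerRecord<>("demo-topic", 0, System.currentTimeMillis(), 1, "Message_1");

        // a negative timestamp (or partition) trips the validation shown above
        try {
            new ProducerRecord<Integer, String>("demo-topic", 0, -1L, 1, "Message_1");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println(simple + " / " + explicit);
    }
}
```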
doSend is what we really need to focus on:
/**
 * Implementation of asynchronously send a record to a topic.
 */
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    TopicPartition tp = null;
    try {
        // first make sure the metadata for the topic is available
        // i.e. the metadata-preparation phase
        ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
        // remaining time we are still allowed to block
        long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
        // the cluster
        Cluster cluster = clusterAndWaitTime.cluster;
        byte[] serializedKey;
        try {
            // serialize the key
            serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in key.serializer", cce);
        }
        byte[] serializedValue;
        try {
            // serialize the value
            serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in value.serializer", cce);
        }
        // choose the partition
        int partition = partition(record, serializedKey, serializedValue, cluster);
        tp = new TopicPartition(record.topic(), partition);
        // mark the headers read-only and turn them into an array
        setReadOnly(record.headers());
        Header[] headers = record.headers().toArray();
        // estimated serialized size
        int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                compressionType, serializedKey, serializedValue, headers);
        // make sure the record is not too large
        ensureValidRecordSize(serializedSize);
        long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
        log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
        // producer callback will make sure to call both 'callback' and interceptor callback
        Callback interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);
        // if there is a transaction manager and it is transactional, add the partition to the transaction
        if (transactionManager != null && transactionManager.isTransactional())
            transactionManager.maybeAddPartitionToTransaction(tp);
        // append the record to the record accumulator
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs);
        // if the batch is full or a new batch was created, wake up the sender; this is the key step
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            this.sender.wakeup();
        }
        return result.future;
        // handling exceptions and record the errors;
        // for API exceptions return them in the future,
        // for other exceptions throw directly
    } catch (ApiException e) {
        // API exception
        log.debug("Exception occurred during message send:", e);
        if (callback != null)
            callback.onCompletion(null, e);
        this.errors.record();
        this.interceptors.onSendError(record, tp, e);
        return new FutureFailure(e);
    } catch (InterruptedException e) {
        // interrupted
        this.errors.record();
        this.interceptors.onSendError(record, tp, e);
        throw new InterruptException(e);
    } catch (BufferExhaustedException e) {
        // buffer exhausted
        this.errors.record();
        this.metrics.sensor("buffer-exhausted-records").record();
        this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (KafkaException e) {
        // Kafka exception
        this.errors.record();
        this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (Exception e) {
        // we notify interceptor about all exceptions, since onSend is called before anything else in this method
        this.interceptors.onSendError(record, tp, e);
        throw e;
    }
}
The metadata-preparation phase:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
    // add topic to metadata topic list if it is not there already and reset expiry
    metadata.add(topic);
    // fetch the current cluster view without blocking
    Cluster cluster = metadata.fetch();
    // number of partitions for the topic
    Integer partitionsCount = cluster.partitionCountForTopic(topic);
    // Return cached metadata if we have it, and if the record's partition is either undefined
    // or within the known partition range
    if (partitionsCount != null && (partition == null || partition < partitionsCount))
        return new ClusterAndWaitTime(cluster, 0);

    // start time
    long begin = time.milliseconds();
    // remaining time we are allowed to wait
    long remainingWaitMs = maxWaitMs;
    long elapsed;
    // Issue metadata requests until we have metadata for the topic or maxWaitTimeMs is exceeded.
    // In case we already have cached metadata for the topic, but the requested partition is greater
    // than expected, issue an update request only once. This is necessary in case the metadata
    // is stale and the number of partitions for this topic has increased in the meantime.
    do {
        log.trace("Requesting metadata update for topic {}.", topic);
        // add the topic
        metadata.add(topic);
        // request an update and remember the current version
        int version = metadata.requestUpdate();
        // wake up the sender
        sender.wakeup();
        try {
            // wait for the update
            metadata.awaitUpdate(version, remainingWaitMs);
        } catch (TimeoutException ex) {
            // Rethrow with original maxWaitMs to prevent logging exception with remainingWaitMs
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        }
        // fetch the cluster view again
        cluster = metadata.fetch();
        // compute the elapsed time
        elapsed = time.milliseconds() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        if (cluster.unauthorizedTopics().contains(topic))
            throw new TopicAuthorizationException(topic);
        remainingWaitMs = maxWaitMs - elapsed;
        partitionsCount = cluster.partitionCountForTopic(topic);
    } while (partitionsCount == null);

    if (partition != null && partition >= partitionsCount) {
        throw new KafkaException(
                String.format("Invalid partition given with record: %d is not in the range [0...%d).", partition, partitionsCount));
    }
    return new ClusterAndWaitTime(cluster, elapsed);
}
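This is exactly where an "asynchronous" send can still block the calling thread: waitOnMetadata (and later the accumulator, if its buffer is full) may wait up to maxBlockTimeMs. That limit comes from the max.block.ms producer setting. A minimal sketch of bounding it follows; the broker address and the 3000 ms value are only illustrative.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BoundedBlockingProducer {
    public static KafkaProducer<Integer, String> newProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // cap how long send() may block waiting for metadata or for buffer space;
        // once exceeded, a TimeoutException is raised instead of blocking indefinitely
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 3000);
        return new KafkaProducer<>(props);
    }
}
```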
The cluster-and-wait-time holder:
// the cluster plus the time spent waiting on metadata
private static class ClusterAndWaitTime {
    // the cluster
    final Cluster cluster;
    // how long we waited on metadata, in ms
    final long waitedOnMetadataMs;
    ClusterAndWaitTime(Cluster cluster, long waitedOnMetadataMs) {
        this.cluster = cluster;
        this.waitedOnMetadataMs = waitedOnMetadataMs;
    }
}
The Sender is the next thing worth our attention.
Its fields and constructor:
public class Sender implements Runnable {
    private final Logger log;

    /* the state of each nodes connection */
    // the Kafka client, which tracks the connection state of each node
    private final KafkaClient client;

    /* the record accumulator that batches records */
    private final RecordAccumulator accumulator;

    /* the metadata for the client */
    private final Metadata metadata;

    /* the flag indicating whether the producer should guarantee the message order on the broker or not. */
    private final boolean guaranteeMessageOrder;

    /* the maximum request size to attempt to send to the server */
    private final int maxRequestSize;

    /* the number of acknowledgements to request from the server */
    private final short acks;

    /* the number of times to retry a failed request before giving up */
    private final int retries;

    /* the clock instance used for getting the time */
    private final Time time;

    /* true while the sender thread is still running */
    private volatile boolean running;

    /* true when the caller wants to ignore all unsent/inflight messages and force close. */
    private volatile boolean forceClose;

    /* metrics */
    // metrics for the sender
    private final SenderMetrics sensors;

    /* the max time to wait for the server to respond to the request */
    private final int requestTimeout;

    /* The max time to wait before retrying a request which has failed */
    private final long retryBackoffMs;

    /* current request API versions supported by the known brokers */
    private final ApiVersions apiVersions;

    /* all the state related to transactions, in particular the producer id, producer epoch, and sequence numbers */
    private final TransactionManager transactionManager;

    // constructor
    public Sender(LogContext logContext,
                  KafkaClient client,
                  Metadata metadata,
                  RecordAccumulator accumulator,
                  boolean guaranteeMessageOrder,
                  int maxRequestSize,
                  short acks,
                  int retries,
                  SenderMetricsRegistry metricsRegistry,
                  Time time,
                  int requestTimeout,
                  long retryBackoffMs,
                  TransactionManager transactionManager,
                  ApiVersions apiVersions) {
        // obtain a logger from the log context
        this.log = logContext.logger(Sender.class);
        this.client = client;
        this.accumulator = accumulator;
        this.metadata = metadata;
        this.guaranteeMessageOrder = guaranteeMessageOrder;
        this.maxRequestSize = maxRequestSize;
        this.running = true;
        this.acks = acks;
        this.retries = retries;
        this.time = time;
        this.sensors = new SenderMetrics(metricsRegistry);
        this.requestTimeout = requestTimeout;
        this.retryBackoffMs = retryBackoffMs;
        this.apiVersions = apiVersions;
        this.transactionManager = transactionManager;
    }
}
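Note that the Sender is a Runnable, not a Thread: the KafkaProducer constructor starts it on a dedicated daemon I/O thread. A hedged sketch of that wiring, simplified from the 1.1.x constructor (names such as ioThreadName and NETWORK_THREAD_PREFIX may differ slightly):

```java
// inside the KafkaProducer constructor (simplified sketch):
// the Sender receives the collaborators listed in its constructor above, then
// is handed to a daemon I/O thread and started.
String ioThreadName = NETWORK_THREAD_PREFIX + " | " + clientId;
// KafkaThread is a thin Thread subclass; the 'true' flag marks it as a daemon thread
this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
this.ioThread.start();
```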
The run method:
/**
 * The main run loop for the sender thread
 */
public void run() {
    log.debug("Starting Kafka producer I/O thread.");
    // main loop, runs until close is called
    while (running) {
        try {
            // the per-iteration run method; this is the key part
            run(time.milliseconds());
        } catch (Exception e) {
            log.error("Uncaught error in kafka producer I/O thread: ", e);
        }
    }
    log.debug("Beginning shutdown of Kafka producer I/O thread, sending remaining records.");
    // okay we stopped accepting requests but there may still be
    // requests in the accumulator or waiting for acknowledgment,
    // wait until these are completed.
    while (!forceClose && (this.accumulator.hasUndrained() || this.client.inFlightRequestCount() > 0)) {
        try {
            // keep draining whatever is left
            run(time.milliseconds());
        } catch (Exception e) {
            log.error("Uncaught error in kafka producer I/O thread: ", e);
        }
    }
    if (forceClose) {
        // We need to fail all the incomplete batches and wake up the threads waiting on
        // the futures.
        log.debug("Aborting incomplete batches due to forced shutdown");
        this.accumulator.abortIncompleteBatches();
    }
    try {
        this.client.close();
    } catch (Exception e) {
        log.error("Failed to close network client", e);
    }
    log.debug("Shutdown of Kafka producer I/O thread has completed.");
}
Performing one round of sending data:
/**
 * Run a single iteration of sending
 *
 * @param now The current POSIX time in milliseconds
 */
void run(long now) {
    // if there is a transaction manager, deal with transactional / idempotent state first
    if (transactionManager != null) {
        try {
            if (transactionManager.shouldResetProducerStateAfterResolvingSequences())
                // Check if the previous run expired batches which requires a reset of the producer state.
                transactionManager.resetProducerId();
            if (!transactionManager.isTransactional()) {
                // this is an idempotent producer, so make sure we have a producer id
                maybeWaitForProducerId();
            } else if (transactionManager.hasUnresolvedSequences() && !transactionManager.hasFatalError()) {
                transactionManager.transitionToFatalError(new KafkaException("The client hasn't received acknowledgment for " +
                        "some previously sent messages and can no longer retry them. It isn't safe to continue."));
            } else if (transactionManager.hasInFlightTransactionalRequest() || maybeSendTransactionalRequest(now)) {
                // as long as there are outstanding transactional requests, we simply wait for them to return
                client.poll(retryBackoffMs, now);
                return;
            }
            // do not continue sending if the transaction manager is in a failed state or if there
            // is no producer id (for the idempotent case).
            if (transactionManager.hasFatalError() || !transactionManager.hasProducerId()) {
                RuntimeException lastError = transactionManager.lastError();
                if (lastError != null)
                    maybeAbortBatches(lastError);
                client.poll(retryBackoffMs, now);
                return;
            } else if (transactionManager.hasAbortableError()) {
                accumulator.abortUndrainedBatches(transactionManager.lastError());
            }
        } catch (AuthenticationException e) {
            // This is already logged as error, but propagated here to perform any clean ups.
            log.trace("Authentication exception while processing transactional request: {}", e);
            transactionManager.authenticationFailed(e);
        }
    }
    // send the producer data; this is the key step
    long pollTimeout = sendProducerData(now);
    // poll the network client to perform the actual reads and writes
    client.poll(pollTimeout, now);
}
At this point we reach the two important methods, sendProducerData() and poll().
Summary
This first part covered building the Kafka source, the example producer, the send()/doSend() path with its interceptors and metadata preparation, and the Sender's run loop; sendProducerData() and poll() are left for the next part.