What exceptions do the client and the broker each raise when a Kafka producer sends a message larger than the broker's limit?


Please support the author's books 《深入理解Kafka:核心設計與實踐原理》 and 《RabbitMQ實戰指南》, and feel free to follow the author's WeChat public account, 朱小廝的博客.


Original post: https://honeypps.com/mq/what-if-message-size-beyond-kafka-limit/

A few days ago I ran into a bug. The send log showed java.io.IOException: Broken pipe, and after some digging I found that this exception is raised whenever the Kafka producer sends a message larger than the broker's configured limit. A single oversized send does not raise it; it only appears on consecutive sends.

This post records what exceptions the client and the broker each raise when a message exceeding the broker's size limit is sent.

Kafka's broker configuration includes the parameter message.max.bytes, which caps the size of a single message the broker will accept (about 1 MB by default; 1000012 bytes on the broker used in this test, as its log below shows).
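For a concrete picture, here is a minimal broker-side sketch of the settings involved; the values are illustrative assumptions, not taken from the test cluster:

# server.properties (illustrative values, not from the test cluster)
# Largest message, including its framing, that the broker will accept.
message.max.bytes=1000012
# Should be at least message.max.bytes, or followers cannot replicate large messages.
replica.fetch.max.bytes=1048576

If you raise message.max.bytes, raise replica.fetch.max.bytes on every broker to match (and, for the old consumer, fetch.message.max.bytes), otherwise oversized messages may be accepted by the leader yet never replicated or consumed.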

What exactly happens on the producer side and on the broker side when the producer sends a message larger than this threshold?
Producer-side test code (using the old Scala producer API):

package com.kafka;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class Producer {
    public static final String brokerList = "xx.xx.197.59:9092";
    public static final String topic = "versionTopic";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("metadata.broker.list", brokerList);
        ProducerConfig config = new ProducerConfig(properties);
        kafka.javaapi.producer.Producer<Integer, String> producer =
                new kafka.javaapi.producer.Producer<Integer, String>(config);
        // A 1 MiB payload, larger than the broker's message.max.bytes.
        String message = getMessage(1 * 1024 * 1024);
        // Send it three times: the failure only surfaces on consecutive sends.
        for (int i = 0; i < 3; i++) {
            KeyedMessage<Integer, String> keyedMessage =
                    new KeyedMessage<Integer, String>(topic, message);
            producer.send(keyedMessage);
            System.out.println("=============================");
        }
        try {
            TimeUnit.SECONDS.sleep(50);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static String getMessage(int msgSize) {
        StringBuilder stringBuilder = new StringBuilder();
        for (int i = 0; i < msgSize; i++) {
            stringBuilder.append("x");
        }
        return stringBuilder.toString();
    }
}

Producer-side output:

2017-02-28 16:19:31 -[INFO] - [Verifying properties] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Property metadata.broker.list is overridden to xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Property serializer.class is overridden to kafka.serializer.StringEncoder] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Fetching metadata from broker id:0,host:xx.xx.197.59,port:9092 with correlation id 0 for 1 topic(s) Set(versionTopic)] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
=============================
2017-02-28 16:19:34 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[WARN] - [Failed to send producer request with correlation id 4 to broker 0 with data for partitions [versionTopic,0]] - [kafka.utils.Logging$class:89]
java.io.IOException: 你的主機中的軟件中止了一個已建立的連接。 (note: on a non-Chinese-locale JVM this reads "java.io.IOException: Broken pipe")
    at sun.nio.ch.SocketDispatcher.writev0(Native Method)
    at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:55)
    at sun.nio.ch.IOUtil.write(IOUtil.java:148)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
    at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
    at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
    at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
    at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
    at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
    at kafka.producer.Producer.send(Producer.scala:77)
    at kafka.javaapi.producer.Producer.send(Producer.scala:33)
    at com.kafka.Producer.main(Producer.java:30)
2017-02-28 16:19:34 -[INFO] - [Back off for 100 ms before retrying send. Remaining retries = 3] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Fetching metadata from broker id:0,host:xx.xx.197.59,port:9092 with correlation id 5 for 1 topic(s) Set(versionTopic)] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
=============================
2017-02-28 16:19:38 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[WARN] - [Failed to send producer request with correlation id 9 to broker 0 with data for partitions [versionTopic,0]] - [kafka.utils.Logging$class:89]
java.io.IOException: 你的主機中的軟件中止了一個已建立的連接。
    (stack trace identical to the one above)
2017-02-28 16:19:38 -[INFO] - [Back off for 100 ms before retrying send. Remaining retries = 3] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[INFO] - [Fetching metadata from broker id:0,host:xx.xx.197.59,port:9092 with correlation id 10 for 1 topic(s) Set(versionTopic)] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:38 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
=============================

Note the line java.io.IOException: 你的主機中的軟件中止了一個已建立的連接。, the Chinese-locale JVM rendering of "An established connection was aborted by the software in your host machine"; on other locales it reads java.io.IOException: Broken pipe. The old producer here sends with acks=0, so the broker cannot return an error response; as the broker log below shows, it simply closes the connection. The first oversized send therefore appears to succeed, and the exception only surfaces when the producer writes to the already-closed socket on the next send, which is why consecutive sends are needed to reproduce it. A client-side size check avoids these doomed round trips entirely; a minimal sketch follows.
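As a sketch of such a guard: measure the serialized payload before handing it to the producer. MessageSizeGuard and fitsBrokerLimit are names made up for this example, and the 1000012-byte constant is an assumption mirroring this test broker's message.max.bytes; the headroom constant covers Kafka's per-message framing (26 bytes here, see the broker log analysis below):

import java.nio.charset.StandardCharsets;

public class MessageSizeGuard {
    // Assumption: mirrors the broker's message.max.bytes (1000012 on this test broker).
    private static final int BROKER_MAX_MESSAGE_BYTES = 1000012;
    // Headroom for Kafka's per-message framing (26 bytes in the old format); be generous.
    private static final int FRAMING_HEADROOM = 64;

    // Returns true if the UTF-8 encoded payload should fit under the broker limit.
    public static boolean fitsBrokerLimit(String payload) {
        int bytes = payload.getBytes(StandardCharsets.UTF_8).length;
        return bytes + FRAMING_HEADROOM <= BROKER_MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        String oversized = new String(new char[1024 * 1024]).replace('\0', 'x'); // 1 MiB of 'x'
        System.out.println(fitsBrokerLimit("hello"));   // true
        System.out.println(fitsBrokerLimit(oversized)); // false: the broker would reject it
    }
}

Dropping or splitting the message client-side is usually cheaper than burning retry cycles against a broker that is guaranteed to reject it.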

Meanwhile, the broker reports:

[2017-02-28 16:04:03,384] INFO Closing socket connection to /xx.xx.48.240. (kafka.network.Processor)
[2017-02-28 16:04:06,466] ERROR [KafkaApi-0] Error processing ProducerRequest with correlation id 2 from client on partition [versionTopic,0] (kafka.server.KafkaApis)
kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012.
    at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:378)
    at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:361)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
    at kafka.log.Log.analyzeAndValidateMessageSet(Log.scala:361)
    at kafka.log.Log.append(Log.scala:257)
    at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:379)
    at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:365)
    at kafka.utils.Utils$.inLock(Utils.scala:535)
    at kafka.utils.Utils$.inReadLock(Utils.scala:541)
    at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:365)
    at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:291)
    at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:282)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at kafka.server.KafkaApis.appendToLocalLog(KafkaApis.scala:282)
    at kafka.server.KafkaApis.handleProducerOrOffsetCommitRequest(KafkaApis.scala:204)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
    at java.lang.Thread.run(Thread.java:745)
[2017-02-28 16:04:06,467] INFO [KafkaApi-0] Send the close connection response due to error handling produce request [clientId = , correlationId = 2, topicAndPartition = [versionTopic,0]] with Ack=0 (kafka.server.KafkaApis)
[2017-02-28 16:04:06,629] INFO Closing socket connection to /xx.xx.48.240. (kafka.network.Processor)
[2017-02-28 16:04:09,921] ERROR [KafkaApi-0] Error processing ProducerRequest with correlation id 7 from client on partition [versionTopic,0] (kafka.server.KafkaApis)
kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012.
    (stack trace identical to the one above)
[2017-02-28 16:04:09,922] INFO [KafkaApi-0] Send the close connection response due to error handling produce request [clientId = , correlationId = 7, topicAndPartition = [versionTopic,0]] with Ack=0 (kafka.server.KafkaApis)
[2017-02-28 16:04:10,096] INFO Closing socket connection to /xx.xx.48.240. (kafka.network.Processor)
[2017-02-28 16:04:13,374] ERROR [KafkaApi-0] Error processing ProducerRequest with correlation id 12 from client on partition [versionTopic,0] (kafka.server.KafkaApis)
kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012.
    (stack trace identical to the one above)
[2017-02-28 16:04:13,375] INFO [KafkaApi-0] Send the close connection response due to error handling produce request [clientId = , correlationId = 12, topicAndPartition = [versionTopic,0]] with Ack=0 (kafka.server.KafkaApis)

Note the line kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012. The reported 1048602 bytes is the 1 MiB payload (1048576 bytes) plus 26 bytes of per-message framing in the old message format (8-byte offset, 4-byte size, plus CRC, magic, attributes, and key/value length fields): the broker validates the message as it would be stored in the log, not the raw payload alone.
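As an aside that goes beyond the original test: the newer Java producer (org.apache.kafka.clients.producer.KafkaProducer) validates record size on the client against its max.request.size setting, so an oversized record fails fast with a RecordTooLargeException and never produces the broken-pipe symptom above. A sketch under that assumption, reusing the broker address and topic from this test:

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class NewProducerSizeDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "xx.xx.197.59:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.request.size", "1048576"); // client-side ceiling for a single request

        String oversized = new String(new char[2 * 1024 * 1024]).replace('\0', 'x'); // 2 MiB
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                RecordMetadata meta =
                        producer.send(new ProducerRecord<>("versionTopic", oversized)).get();
                System.out.println("sent to offset " + meta.offset());
            } catch (ExecutionException | InterruptedException | RuntimeException e) {
                // RecordTooLargeException shows up here instead of a broken pipe.
                System.err.println("send failed: " + e);
            }
        }
    }
}

Depending on the client version, the exception is thrown from send() itself or surfaced through the returned Future, hence the broad catch.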


Note: even when Kafka is perfectly healthy, the producer prints INFO lines like these on startup:

2017-03-07 20:06:03 -[INFO] - [Verifying properties] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Property metadata.broker.list is overridden to xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Property serializer.class is overridden to kafka.serializer.StringEncoder] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Fetching metadata from broker id:0,host:xx.xx.197.59,port:9092 with correlation id 0 for 1 topic(s) Set(testTopic)] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Disconnecting from xx.xx.197.59:9092] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Connected to xx.xx.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
(the producer then sends data)

Looking at the last three lines, it seems at first glance that something went wrong, but these are in fact normal INFO messages. The connect/disconnect/connect sequence is most likely the producer opening a short-lived connection just to fetch topic metadata and closing it before opening the real connection to the partition leader for producing; I will confirm this against the Kafka source and explain it in a later post.


