
Kafka exception: ERROR Failed to clean up log for __consumer_offsets-30 in dir /tmp/kafka-logs due to IOException

Published 2023/12/20 on 生活随笔 · by 豆豆

Problem overview

The Kafka process dies intermittently with "ERROR Failed to clean up log for __consumer_offsets-30 in dir /tmp/kafka-logs due to IOException (kafka.server.LogDirFailureChannel)". The full error looks like this:

[2020-12-07 16:12:36,803] ERROR Failed to clean up log for __consumer_offsets-7 in dir /tmp/kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.NoSuchFileException: /tmp/kafka-logs/__consumer_offsets-7/00000000000000000000.log
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
	at java.nio.file.Files.move(Files.java:1395)
	at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:913)
	at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:227)
	at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:495)
	at kafka.log.Log.$anonfun$deleteSegmentFiles$1(Log.scala:2230)
	at kafka.log.Log.$anonfun$deleteSegmentFiles$1$adapted(Log.scala:2230)
	at scala.collection.immutable.List.foreach(List.scala:333)
	at kafka.log.Log.deleteSegmentFiles(Log.scala:2230)
	at kafka.log.Log.$anonfun$replaceSegments$6(Log.scala:2300)
	at kafka.log.Log.$anonfun$replaceSegments$6$adapted(Log.scala:2295)
	at scala.collection.immutable.List.foreach(List.scala:333)
	at kafka.log.Log.replaceSegments(Log.scala:2295)
	at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:606)
	at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:531)
	at kafka.log.Cleaner.doClean(LogCleaner.scala:530)
	at kafka.log.Cleaner.clean(LogCleaner.scala:504)
	at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:373)
	at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:345)
	at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:325)
	at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:314)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
	Suppressed: java.nio.file.NoSuchFileException: /tmp/kafka-logs/__consumer_offsets-7/00000000000000000000.log -> /tmp/kafka-logs/__consumer_offsets-7/00000000000000000000.log.deleted
		at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
		at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
		at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
		at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
		at java.nio.file.Files.move(Files.java:1395)
		at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:910)
		... 19 more

Problem analysis

The error says a file could not be found. Linux periodically cleans old files out of /tmp, and my Kafka log directory was /tmp/kafka-logs, so the segment files were deleted out from under the broker; Kafka then fails when it tries to read or rename them. You can confirm where the logs live:

$ grep log.dirs /opt/kafka_2.12-2.3.0/config/server.properties
log.dirs=/tmp/kafka-logs

Solution

Option 1: change the log directory to somewhere outside /tmp, then restart Kafka:

log.dirs=/opt/kafka_2.12-2.3.0/kafka-logs/
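As a concrete sketch of option 1 (paths taken from this article's install; the scratch file is only for demonstration — on a real host you would edit /opt/kafka_2.12-2.3.0/config/server.properties directly):

```shell
# Option 1 sketch: repoint log.dirs away from /tmp.
# Demonstrated on a throwaway copy of server.properties.
CONF=$(mktemp)
echo 'log.dirs=/tmp/kafka-logs' > "$CONF"

# Rewrite the log.dirs line to a directory the OS never auto-cleans.
sed -i 's|^log.dirs=.*|log.dirs=/opt/kafka_2.12-2.3.0/kafka-logs/|' "$CONF"
grep '^log.dirs=' "$CONF"

# On the real broker: create the new directory, then restart Kafka with the
# stock scripts shipped in the Kafka distribution so the change takes effect:
#   mkdir -p /opt/kafka_2.12-2.3.0/kafka-logs/
#   bin/kafka-server-stop.sh
#   bin/kafka-server-start.sh -daemon config/server.properties
```

If you need to keep existing topic data, copy the contents of /tmp/kafka-logs into the new directory before starting the broker.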

Option 2: whitelist the Kafka log directory so the cleanup job skips it

CentOS 7: cleanup of /tmp is handled by systemd (systemd-tmpfiles); the related configuration files live under /usr/lib/tmpfiles.d. Edit tmp.conf and add the Kafka log directory:

# prevent Kafka log files from being deleted
X /tmp/kafka-logs
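The edit can be sketched as follows. The default "v /tmp ... 10d" rule in the scratch file is an assumption about what a stock tmp.conf contains; on a real host you would append to /usr/lib/tmpfiles.d/tmp.conf as root:

```shell
# Option 2 (CentOS 7) sketch: add an 'X' exclude rule for the Kafka logs.
# Shown on a scratch file; the real file is /usr/lib/tmpfiles.d/tmp.conf.
TMPCONF=$(mktemp)
echo 'v /tmp 1777 root root 10d' > "$TMPCONF"   # typical age-based rule (assumed)

# 'X <path>' tells systemd-tmpfiles to ignore the path and its contents
# during age-based cleanup.
printf '# prevent Kafka log files from being deleted\nX /tmp/kafka-logs\n' >> "$TMPCONF"
cat "$TMPCONF"
```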

CentOS 6: /tmp is cleaned by tmpwatch, which is driven by cron's daily schedule; the relevant script is /etc/cron.daily/tmpwatch.

# prevent Kafka log files from being deleted: tmpwatch takes exclusions as
# command-line options, so add -x /tmp/kafka-logs to the tmpwatch invocation
# inside /etc/cron.daily/tmpwatch
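A sketch of the CentOS 6 variant. The tmpwatch line in the scratch script is a simplified stand-in for the stock cron script (flags and the age argument vary by distro build), but -x/--exclude is a documented tmpwatch option:

```shell
# Option 2 (CentOS 6) sketch: insert -x /tmp/kafka-logs into the tmpwatch
# call. Shown on a scratch copy of /etc/cron.daily/tmpwatch.
CRON=$(mktemp)
echo '/usr/sbin/tmpwatch 10d /tmp' > "$CRON"   # simplified stand-in (assumed)

# Add the exclude option ahead of the age argument.
sed -i 's|tmpwatch |tmpwatch -x /tmp/kafka-logs |' "$CRON"
cat "$CRON"
```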


Summary

The above covers the Kafka error "ERROR Failed to clean up log for __consumer_offsets-30 in dir /tmp/kafka-logs due to IOException" and two ways to fix it; hopefully it helps you resolve the same problem.