
Kafka (kafka_2.11-1.1.0) stops automatically about 30 seconds after startup on Windows, reporting "The process cannot access the file because it is being used by another process"

Date: 2018-06-05

Environment: kafka_2.11-1.1.0, Windows 7 64-bit, Java 8

Symptom:
The broker stops automatically about 30 seconds after startup, reporting "The process cannot access the file because it is being used by another process" (另一个程序正在使用此文件,进程无法访问):

[2018-06-06 14:32:46,784] INFO [Log partition=myTopic-0, dir=D:\kafka_2.11-1.1.0\kafka-logs] Scheduling log segment [baseOffset 0, size 1599] for deletion. (kafka.log.Log)
[2018-06-06 14:32:46,800] ERROR Error while deleting segments for myTopic-0 in dir D:\kafka_2.11-1.1.0\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: D:\kafka_2.11-1.1.0\kafka-logs\myTopic-0\00000000000000000000.log -> D:\kafka_2.11-1.1.0\kafka-logs\myTopic-0\00000000000000000000.log.deleted: 另一个程序正在使用此文件,进程无法访问。
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
    at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
    at java.nio.file.Files.move(Files.java:1395)
    at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
    at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
    at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
    at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
    at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
    at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
    at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
    at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
    at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
    at kafka.log.Log.deleteSegments(Log.scala:1161)
    at kafka.log.Log.deleteOldSegments(Log.scala:1156)
    at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
    at kafka.log.Log.deleteOldSegments(Log.scala:1222)
    at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
    at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
    at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
    at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: java.nio.file.FileSystemException: D:\kafka_2.11-1.1.0\kafka-logs\myTopic-0\00000000000000000000.log -> D:\kafka_2.11-1.1.0\kafka-logs\myTopic-0\00000000000000000000.log.deleted: 另一个程序正在使用此文件,进程无法访问。
        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
        ... 32 more

The log shows that Kafka's log-cleanup thread hit an error while renaming a log segment file (.log to .log.deleted): the file was still held open by another thread or handle, and Windows refuses to rename a file that is in use.

Setting log.cleaner.enable=false in server.properties does not help either. (Is there really no way to turn off log cleanup?) The reason is that log.cleaner.enable only controls the log cleaner used for compaction; the deletion in the stack trace comes from the retention task (LogManager.cleanupLogs -> deleteRetentionMsBreachedSegments), which is governed by the log.retention.* settings instead.

One workaround is to stop the broker and manually delete the files under kafka-logs, but all previously stored messages are lost; a sketch of the steps follows.
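A minimal sketch of that manual cleanup on Windows, assuming the installation path shown in the log above (adjust to your environment; this wipes all existing data, and the kafka-server-stop.bat script is not always reliable on Windows, so you may need to kill the Java process by hand):

    rem Stop the broker first (run from the Kafka installation directory).
    bin\windows\kafka-server-stop.bat
    rem Remove the whole log directory; Kafka recreates it on the next start.
    rmdir /s /q D:\kafka_2.11-1.1.0\kafka-logs
    rem Start the broker again.
    bin\windows\kafka-server-start.bat config\server.properties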

Another workaround is to raise log.retention.hours to a very large value, e.g. 168000 (the default is 168, i.e. 7 days), so that segments never reach the retention limit and the failing rename is never attempted.
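A minimal server.properties sketch of that workaround (the values are illustrative; log.retention.check.interval.ms is an optional extra knob that merely spaces out the retention checks and is not required by the article):

    # Keep segments for roughly 19 years so retention-based deletion never kicks in.
    log.retention.hours=168000
    # Optional: check for expired segments less often (default is 300000 ms, i.e. 5 minutes).
    log.retention.check.interval.ms=3600000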

Original article: https://yq.aliyun.com/articles/623712