Posted to users@kafka.apache.org by SenthilKumar K <se...@gmail.com> on 2019/09/04 11:53:46 UTC

Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Hello Experts, we have deployed a 10-node Kafka cluster in production.
Recently two of the nodes went down due to a network problem, and we
brought them back up after 24 hours. While bootstrapping the Kafka service
on the failed nodes, we saw the error below and the brokers failed to come up.

Kafka Version : kafka_2.11-2.2.0

JVM Options :
/a/java64/jdk1.8.0/bin/java -Xmx15G -Xms10G -server -XX:+UseG1GC
-XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35
-XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true
-Xloggc:/a/opt/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M
-Davoid_insecure_jmxremote


[2019-09-03 10:54:10,630] ERROR Error while deleting the clean shutdown
file in dir /tmp/data (kafka.server.LogDirFailureChannel)
java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
    at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
    at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:53)
    at kafka.log.LogSegment$.open(LogSegment.scala:632)
    at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:467)
    at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:454)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at kafka.log.Log.kafka$log$Log$$loadSegmentFiles(Log.scala:454)
    at kafka.log.Log$$anonfun$loadSegments$1.apply$mcV$sp(Log.scala:565)
    at kafka.log.Log$$anonfun$loadSegments$1.apply(Log.scala:559)
    at kafka.log.Log$$anonfun$loadSegments$1.apply(Log.scala:559)
    at kafka.log.Log.retryOnOffsetOverflow(Log.scala:2024)
    at kafka.log.Log.loadSegments(Log.scala:559)
    at kafka.log.Log.<init>(Log.scala:292)
    at kafka.log.Log$.apply(Log.scala:2157)
    at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:265)
    at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:345)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
    ... 25 more

Any hints to solve this problem? Thanks in advance!

--Senthil

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Posted by SenthilKumar K <se...@gmail.com>.
Thanks Karolis.

On Wed, 4 Sep, 2019, 5:57 PM Karolis Pocius,
<ka...@sentiance.com.invalid> wrote:

> I had the same issue which was solved by increasing max_map_count
> https://stackoverflow.com/a/43675621

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Posted by SenthilKumar K <se...@gmail.com>.
Thanks

On Wed, 4 Sep, 2019, 6:24 PM Jonathan Santilli, <jo...@gmail.com>
wrote:

> Hello Senthil,
>
> I would recommend not keeping the data in /tmp/data.
> Also, as a recommendation, set the -Xmx and -Xms parameters to equal values.
>
>
> Cheers!
> --
> Jonathan

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Posted by SenthilKumar K <se...@gmail.com>.
Hi Jonathan, thanks for the inputs. Actually we are not storing data in
the /tmp directory :-).
Sure, I'll update the heap settings as advised.

--Senthil

On Wed, 4 Sep, 2019, 6:24 PM Jonathan Santilli, <jo...@gmail.com>
wrote:

> Hello Senthil,
>
> I would recommend not keeping the data in /tmp/data.
> Also, as a recommendation, set the -Xmx and -Xms parameters to equal values.
>
>
> Cheers!
> --
> Jonathan

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Posted by Jonathan Santilli <jo...@gmail.com>.
Hello Senthil,

I would recommend not keeping the data in /tmp/data.
Also, as a recommendation, set the -Xmx and -Xms parameters to equal values.
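Concretely, that would look something like the following. The /var/lib/kafka/data path is only an example (any persistent disk works); KAFKA_HEAP_OPTS is the environment variable that kafka-server-start.sh passes through to the JVM:

```shell
# server.properties -- keep log segments on a persistent disk, not /tmp,
# which many distros wipe on reboot or via periodic tmp cleaners:
#   log.dirs=/var/lib/kafka/data

# Heap -- same value for -Xms and -Xmx, exported before running
# kafka-server-start.sh (15G matches the -Xmx you already use):
export KAFKA_HEAP_OPTS="-Xms15G -Xmx15G"
```

Equal -Xms/-Xmx avoids heap resizing pauses and surprises at startup.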


Cheers!
--
Jonathan






-- 
Santilli Jonathan

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

Posted by Karolis Pocius <ka...@sentiance.com.INVALID>.
I had the same issue which was solved by increasing max_map_count
https://stackoverflow.com/a/43675621
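
In case it helps, this is roughly how to check and raise it on a Linux
broker host. The 262144 value is just a commonly used setting, not an
official Kafka recommendation; each log segment mmaps its index files, so
size the limit to your partition/segment count with headroom:

```shell
# Current kernel limit on memory-mapped areas per process (Linux only)
cat /proc/sys/vm/max_map_count

# How many mappings a process holds right now; replace $$ with the
# broker's PID ($$ is the current shell, used here only to illustrate)
wc -l < /proc/$$/maps

# To raise the limit, as root:
#   sysctl -w vm.max_map_count=262144
# To persist it across reboots, add to /etc/sysctl.conf:
#   vm.max_map_count=262144
```

If the broker's map count is near the limit, log loading at startup will
fail with exactly the "OutOfMemoryError: Map failed" you saw.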

