Posted to users@kafka.apache.org by Sa Li <sa...@gmail.com> on 2015/01/06 20:58:53 UTC

no space left error

Hi, All

I am doing a performance test on our new Kafka production server, but after
sending some messages (even fake messages generated with bin/kafka-run-class.sh
org.apache.kafka.clients.tools.ProducerPerformance) the connection errors out
and the brokers shut down. After that, I see errors such as:

conf-su: cannot create temp file for here-document: No space left on device

How can I fix this? I am concerned the same thing will happen when we start to
publish real messages to Kafka. Should I create a cron job to regularly clean
certain directories?

thanks

-- 

Alec Li

Re: no space left error

Posted by Otis Gospodnetic <ot...@gmail.com>.
Hi,

Your disk is full.  You should probably have something that checks/monitors
disk space and alerts you when it's full.

Maybe you can point Kafka to a different, larger disk or partition.
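A minimal sketch of such a check (the directory, threshold, and alert action are all assumptions; substitute your own log.dirs path and real alerting):

```shell
#!/bin/sh
# Minimal disk-usage check: warn when the filesystem holding the Kafka data
# directory crosses a threshold. Directory, threshold, and the alert action
# are assumptions -- substitute your log.dirs path and real alerting.
LOG_DIR="/tmp"        # substitute the directory from log.dirs, e.g. /srv/kafka-data
THRESHOLD=80          # warn above this use%

usage_pct() {
  # Print the use% (digits only) of the filesystem holding $1.
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

pct=$(usage_pct "$LOG_DIR")
if [ "$pct" -gt "$THRESHOLD" ]; then
  echo "ALERT: $LOG_DIR filesystem at ${pct}% (threshold ${THRESHOLD}%)"
  # hook real alerting here, e.g. mail/pagerduty/nagios
fi
```

Run from cron every few minutes; it only prints when the threshold is exceeded.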

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Tue, Jan 6, 2015 at 3:02 PM, Sa Li <sa...@gmail.com> wrote:

> Continuing this issue: when I restart the server with
> bin/kafka-server-start.sh config/server.properties
>
> it fails to start, with:
>
> [2015-01-06 20:00:55,441] FATAL Fatal error during KafkaServerStable
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.lang.InternalError: a fault occurred in a recent unsafe memory access
> operation in compiled Java code
>         at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>         at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>         at
> kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:188)
>         at
> kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:165)
>         at
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
>         at kafka.log.LogSegment.recover(LogSegment.scala:165)
>         at kafka.log.Log.recoverLog(Log.scala:179)
>         at kafka.log.Log.loadSegments(Log.scala:155)
>         at kafka.log.Log.<init>(Log.scala:64)
>         at
>
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
>         at
>
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
>         at
>
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at
> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
>         at
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
>         at
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
>         at
>
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>         at kafka.log.LogManager.loadLogs(LogManager.scala:105)
>         at kafka.log.LogManager.<init>(LogManager.scala:57)
>         at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
>         at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
>         at
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
>         at kafka.Kafka$.main(Kafka.scala:46)
>         at kafka.Kafka.main(Kafka.scala)
> [2015-01-06 20:00:55,443] INFO [Kafka Server 100], shutting down
> (kafka.server.KafkaServer)
> [2015-01-06 20:00:55,444] INFO Terminate ZkClient event thread.
> (org.I0Itec.zkclient.ZkEventThread)
> [2015-01-06 20:00:55,446] INFO Session: 0x684a5ed9da3a1a0f closed
> (org.apache.zookeeper.ZooKeeper)
> [2015-01-06 20:00:55,446] INFO EventThread shut down
> (org.apache.zookeeper.ClientCnxn)
> [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shut down completed
> (kafka.server.KafkaServer)
> [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shutting down
> (kafka.server.KafkaServer)
>
> Any ideas?
>
> On Tue, Jan 6, 2015 at 12:00 PM, Sa Li <sa...@gmail.com> wrote:
>
> > the complete error message:
> >
> > -su: cannot create temp file for here-document: No space left on device
> > OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory
> > file:
> >    /tmp/hsperfdata_root/19721
> > Try using the -Djava.io.tmpdir= option to select an alternate temp
> > location.
> > [2015-01-06 19:50:49,244] FATAL  (kafka.Kafka$)
> > java.io.FileNotFoundException: conf (No such file or directory)
> >         at java.io.FileInputStream.open(Native Method)
> >         at java.io.FileInputStream.<init>(FileInputStream.java:146)
> >         at java.io.FileInputStream.<init>(FileInputStream.java:101)
> >         at kafka.utils.Utils$.loadProps(Utils.scala:144)
> >         at kafka.Kafka$.main(Kafka.scala:34)
> >         at kafka.Kafka.main(Kafka.scala)
> >

Re: no space left error

Posted by Joe Stein <jo...@stealth.ly>.
There are two parts to this:

1) How to prevent Kafka from filling up disks, which
https://issues.apache.org/jira/browse/KAFKA-1489 is trying to deal with (I
set the ticket to unassigned just now since I don't think anyone is working
on it and it was assigned by default; I could be wrong, so assign it back to
yourself if I am). I don't know the solution off the top of my head, but I
think it is something we should strive for in 0.8.3 (worst case 0.9.0), as
it happens frequently enough that we need a solution.

2) Until then, what to do when it does happen.

For #2, I have been pulled into this situation a number of times, and
honestly the solution has been a bit different each time; I am not sure any
one "tool" or guideline will work, since it is not always systematic. We
could collect guidelines and experiences from different folks; if someone
already has this written up, that would be great, otherwise I can carve out
some time in the near future to do it (though honestly I would rather the
effort go to #1; it is a balance, for sure).

For Sa's problem (where this thread started):

OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory file:
   /tmp/hsperfdata_root/19721

I don't think this is Kafka related, so what we are discussing about
partitions and retention, while important, does not apply here.
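For that /tmp symptom specifically, one workaround is to point the JVM's temp directory at a partition with free space before starting the broker. A sketch; the target path is an assumption (in Sa's case it should live on the large /srv drive), and it relies on the stock bin/kafka-run-class.sh picking up KAFKA_OPTS:

```shell
#!/bin/sh
# Move the JVM temp dir (hsperfdata files and other java.io.tmpdir users)
# off the full partition before starting the broker. The target path is an
# assumption; pick a directory on a drive with space.
KAFKA_TMP="/srv/kafka-tmp"
mkdir -p "$KAFKA_TMP" 2>/dev/null || KAFKA_TMP="$(mktemp -d)"  # fallback just for illustration
export KAFKA_OPTS="-Djava.io.tmpdir=$KAFKA_TMP"
# bin/kafka-server-start.sh config/server.properties
```

Note this only relocates JVM temp files; the shell's here-document error in the original report goes away once the root filesystem itself has space again.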

/*******************************************
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
********************************************/

On Tue, Jan 6, 2015 at 3:10 PM, David Birdsong <da...@gmail.com>
wrote:

> I'm keen to hear how to work one's way out of a filled partition, since
> I've run into this many times after having tuned retention bytes or
> retention time incorrectly. The proper path to resolving this isn't
> obvious based on my many harried searches through the documentation.
>
> I often end up stopping the particular broker, picking an unlucky
> topic/partition, deleting it, modifying any topics that consumed too much
> space by lowering their retention bytes, and restarting.
>
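David's workaround can also be scripted with the stock tooling rather than deleting segment files by hand. A sketch only: the topic name, ZooKeeper address, and size cap are placeholders, and the exact flags vary across Kafka versions (newer releases use kafka-configs.sh for topic-level overrides):

```shell
#!/bin/sh
# Sketch of lowering a topic's size retention so the broker reclaims space
# itself. TOPIC, ZK, and the 50 GB cap are placeholders; flag names differ
# between Kafka versions (e.g. kafka-configs.sh in newer releases).
ZK="localhost:2181"
TOPIC="big-topic"
CAP=$((50 * 1024 * 1024 * 1024))   # ~50 GB per partition

if [ -x bin/kafka-topics.sh ]; then
  # Apply a topic-level override; the log cleaner then deletes old segments.
  bin/kafka-topics.sh --zookeeper "$ZK" --alter --topic "$TOPIC" \
    --config retention.bytes="$CAP"
  # Later, remove the override to fall back to the broker default:
  # bin/kafka-topics.sh --zookeeper "$ZK" --alter --topic "$TOPIC" \
  #   --deleteConfig retention.bytes
fi
```

The advantage over manual deletion is that the broker removes whole segments cleanly instead of being restarted against a partially deleted log.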

Re: no space left error

Posted by Jun Rao <ju...@confluent.io>.
Yes, that would be useful.

Thanks,

Jun

On Tue, Jan 6, 2015 at 2:07 PM, Sa Li <sa...@gmail.com> wrote:

> BTW, I found that the logs under /kafka/logs, such as controller.log and
> state-change.log, also keep getting bigger and bigger. Should I launch a
> cron job to clean them up regularly, or is there a built-in way to rotate
> or delete them?
>
> thanks
>
> AL
>

Re: no space left error

Posted by Sa Li <sa...@gmail.com>.
BTW, I found that the logs under /kafka/logs, such as controller.log and
state-change.log, also keep getting bigger and bigger. Should I launch a
cron job to clean them up regularly, or is there a built-in way to rotate or
delete them?
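(These are Kafka's log4j application logs, not the data logs, so broker retention settings do not touch them. A cron-able sketch; the path and age are assumptions, and it deletes only rotated files, never the live ones:)

```shell
#!/bin/sh
# Delete rotated application-log files (controller.log.2015-01-05 etc.)
# older than N days, leaving live files like controller.log alone.
# Directory and age are assumptions; adjust to your log4j.properties.
clean_old_logs() {
  dir="$1"; days="$2"
  # '*.log.*' matches only rolled-over files, not the current '*.log'.
  find "$dir" -name '*.log.*' -type f -mtime +"$days" -delete
}

clean_old_logs /kafka/logs 7 2>/dev/null || true
# crontab entry (assumed script path):  0 3 * * *  /usr/local/bin/clean-kafka-logs.sh
```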

thanks

AL

On Tue, Jan 6, 2015 at 2:01 PM, Sa Li <sa...@gmail.com> wrote:

> Hi, All
>
> We fixed the problem, and I would like to share what it was in case
> someone comes across a similar issue. We added a data drive, /dev/sdb1, on
> each node, but specified the wrong path in server.properties, so the data
> was written to the wrong drive, /dev/sda2, which quickly ate up all the
> space on sda2. We have now changed the path. sdb1 has 15 TB, which lets us
> store data for a while; data will be deleted after 1-2 weeks, as
> configured.
>
> But I am kinda curious about David's comment, "... after having tuned
> retention bytes or retention (time?) incorrectly ...". How do you set
> log.retention.bytes? I set log.retention.hours=336 (2 weeks); should I
> leave log.retention.bytes at the default of -1, or set some other amount?
>
> thanks
>
> AL
>



-- 

Alec Li

Re: no space left error

Posted by Sa Li <sa...@gmail.com>.
Hi, All

We fixed the problem, and I would like to share what it was in case someone
comes across a similar issue. We added a data drive, /dev/sdb1, on each
node, but specified the wrong path in server.properties, so the data was
written to the wrong drive, /dev/sda2, which quickly ate up all the space on
sda2. We have now changed the path. sdb1 has 15 TB, which lets us store data
for a while; data will be deleted after 1-2 weeks, as configured.

But I am kinda curious about David's comment, "... after having tuned
retention bytes or retention (time?) incorrectly ...". How do you set
log.retention.bytes? I set log.retention.hours=336 (2 weeks); should I
leave log.retention.bytes at the default of -1, or set some other amount?
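On that question, one way to budget it (a sketch; all numbers are assumptions). Note that log.retention.bytes is enforced per partition, not per topic or broker, and time- and size-based retention both apply, whichever limit is hit first:

```shell
#!/bin/sh
# Back-of-envelope sizing for log.retention.bytes. Disk size, partition
# count, and headroom are assumptions; plug in your own numbers.
DISK_BYTES=$((14 * 1024 * 1024 * 1024 * 1024))  # ~14 TB usable on the data drive
PARTITIONS=100                                  # partitions hosted by this broker
SAFETY_PCT=80                                   # keep 20% headroom

# retention.bytes applies per partition, so divide the budget out.
RETENTION_BYTES=$(( DISK_BYTES * SAFETY_PCT / 100 / PARTITIONS ))
echo "log.retention.bytes=$RETENTION_BYTES"
# Keeping log.retention.hours=336 alongside is fine: whichever limit is
# reached first triggers segment deletion.
```

The printed value is what would go into server.properties; leaving the default of -1 means only the time limit applies, which is what allowed the disk to fill here.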

thanks

AL

On Tue, Jan 6, 2015 at 12:43 PM, Sa Li <sa...@gmail.com> wrote:

> Thanks for the reply; the disk is not full:
>
> root@exemplary-birds:~# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda2       133G  3.4G  123G   3% /
> none            4.0K     0  4.0K   0% /sys/fs/cgroup
> udev             32G  4.0K   32G   1% /dev
> tmpfs           6.3G  764K  6.3G   1% /run
> none            5.0M     0  5.0M   0% /run/lock
> none             32G     0   32G   0% /run/shm
> none            100M     0  100M   0% /run/user
> /dev/sdb1        14T   15G   14T   1% /srv
>
> Nor is the memory:
>
> root@exemplary-birds:~# free
>              total       used       free     shared    buffers     cached
> Mem:      65963372    9698380   56264992        776     170668    7863812
> -/+ buffers/cache:    1663900   64299472
> Swap:       997372          0     997372
>
> thanks
>
>
>> >         at kafka.Kafka$.main(Kafka.scala:46)
>> >         at kafka.Kafka.main(Kafka.scala)
>> > [2015-01-06 20:00:55,443] INFO [Kafka Server 100], shutting down
>> > (kafka.server.KafkaServer)
>> > [2015-01-06 20:00:55,444] INFO Terminate ZkClient event thread.
>> > (org.I0Itec.zkclient.ZkEventThread)
>> > [2015-01-06 20:00:55,446] INFO Session: 0x684a5ed9da3a1a0f closed
>> > (org.apache.zookeeper.ZooKeeper)
>> > [2015-01-06 20:00:55,446] INFO EventThread shut down
>> > (org.apache.zookeeper.ClientCnxn)
>> > [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shut down completed
>> > (kafka.server.KafkaServer)
>> > [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shutting down
>> > (kafka.server.KafkaServer)
>> >
>> > Any ideas
>> >
>> > On Tue, Jan 6, 2015 at 12:00 PM, Sa Li <sa...@gmail.com> wrote:
>> >
>> > > the complete error message:
>> > >
>> > > -su: cannot create temp file for here-document: No space left on
>> device
>> > > OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory
>> > > file:
>> > >    /tmp/hsperfdata_root/19721
>> > > Try using the -Djava.io.tmpdir= option to select an alternate temp
>> > > location.
>> > > [2015-01-06 19:50:49,244] FATAL  (kafka.Kafka$)
>> > > java.io.FileNotFoundException: conf (No such file or directory)
>> > >         at java.io.FileInputStream.open(Native Method)
>> > >         at java.io.FileInputStream.<init>(FileInputStream.java:146)
>> > >         at java.io.FileInputStream.<init>(FileInputStream.java:101)
>> > >         at kafka.utils.Utils$.loadProps(Utils.scala:144)
>> > >         at kafka.Kafka$.main(Kafka.scala:34)
>> > >         at kafka.Kafka.main(Kafka.scala)
>> > >
>> > > On Tue, Jan 6, 2015 at 11:58 AM, Sa Li <sa...@gmail.com> wrote:
>> > >
>> > >>
>> > >> Hi, All
>> > >>
>> > >> I am doing performance test on our new kafka production server, but
>> > after
>> > >> sending some messages (even faked message by using
>> > bin/kafka-run-class.sh
>> > >> org.apache.kafka.clients.tools.ProducerPerformance), it comes out the
>> > error
>> > >> of connection, and shut down the brokers, after that, I see such
>> errors,
>> > >>
>> > >> conf-su: cannot create temp file for here-document: No space left on
>> > >> device
>> > >>
>> > >> How can I fix it, I am concerning that will happen when we start to
>> > >> publish real messages in kafka, and should I create some cron to
>> > regularly
>> > >> clean certain directories?
>> > >>
>> > >> thanks
>> > >>
>> > >> --
>> > >>
>> > >> Alec Li
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > >
>> > > Alec Li
>> > >
>> >
>> >
>> >
>> > --
>> >
>> > Alec Li
>> >
>>
>
>
>
> --
>
> Alec Li
>



-- 

Alec Li

Re: no space left error

Posted by Sa Li <sa...@gmail.com>.
Thanks for the reply; the disk is not full:

root@exemplary-birds:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       133G  3.4G  123G   3% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev             32G  4.0K   32G   1% /dev
tmpfs           6.3G  764K  6.3G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none             32G     0   32G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdb1        14T   15G   14T   1% /srv

Nor is memory an issue:

root@exemplary-birds:~# free
             total       used       free     shared    buffers     cached
Mem:      65963372    9698380   56264992        776     170668    7863812
-/+ buffers/cache:    1663900   64299472
Swap:       997372          0     997372
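One caveat worth noting: "No space left on device" can also mean the
filesystem is out of inodes rather than bytes, which df -h does not show.
A quick sketch of the check (standard Linux tools; mount points assumed):

```shell
# df -h reports block usage only; an exhausted inode table raises
# ENOSPC ("No space left on device") even with free bytes remaining.
# -P forces POSIX single-line output, -i switches to inode counts.
inode_use=$(df -Pi / | awk 'NR==2 {print $5}')
echo "inode usage on /: ${inode_use}"
# /tmp is where the shell's here-documents and the JVM's hsperfdata
# files land, so check it separately if it is its own filesystem.
df -Pi /tmp | awk 'NR==2 {print "inode usage on /tmp: " $5}'
```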

thanks





-- 

Alec Li

Re: no space left error

Posted by David Birdsong <da...@gmail.com>.
I'm keen to hear about how to work one's way out of a filled partition
since I've run into this many times after having tuned retention bytes or
retention (time?) incorrectly. The proper path to resolving this isn't
obvious based on my many harried searches through documentation.

I often end up stopping the affected broker, picking an unlucky
topic/partition and deleting it, lowering retention bytes on any topics that
consumed too much space, and restarting.
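A more surgical alternative to deleting partition directories by hand: in
the 0.8.x tooling, per-topic retention can be lowered at runtime with
kafka-topics.sh --alter, and the broker's log cleaner then removes old
segments itself. A sketch (topic name and ZooKeeper address are
placeholders; the command is only echoed here, not executed):

```shell
# Placeholder names; substitute your own topic and ZooKeeper quorum.
TOPIC="my-big-topic"
ZK="localhost:2181"
# Cap each partition of the topic at ~1 GiB. Retention is enforced
# per partition, so total space for the topic is cap * partitions.
CMD="bin/kafka-topics.sh --zookeeper ${ZK} --alter --topic ${TOPIC} --config retention.bytes=1073741824"
echo "${CMD}"
```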


Re: no space left error

Posted by Sa Li <sa...@gmail.com>.
Continuing this issue: when I restart the server with
bin/kafka-server-start.sh config/server.properties

it fails to start:
[2015-01-06 20:00:55,441] FATAL Fatal error during KafkaServerStable
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access
operation in compiled Java code
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
        at
kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:188)
        at
kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:165)
        at
kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
        at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
        at kafka.log.LogSegment.recover(LogSegment.scala:165)
        at kafka.log.Log.recoverLog(Log.scala:179)
        at kafka.log.Log.loadSegments(Log.scala:155)
        at kafka.log.Log.<init>(Log.scala:64)
        at
kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
        at
kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
        at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at
scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
        at
kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
        at
kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
        at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at
scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at kafka.log.LogManager.loadLogs(LogManager.scala:105)
        at kafka.log.LogManager.<init>(LogManager.scala:57)
        at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
        at
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
        at kafka.Kafka$.main(Kafka.scala:46)
        at kafka.Kafka.main(Kafka.scala)
[2015-01-06 20:00:55,443] INFO [Kafka Server 100], shutting down
(kafka.server.KafkaServer)
[2015-01-06 20:00:55,444] INFO Terminate ZkClient event thread.
(org.I0Itec.zkclient.ZkEventThread)
[2015-01-06 20:00:55,446] INFO Session: 0x684a5ed9da3a1a0f closed
(org.apache.zookeeper.ZooKeeper)
[2015-01-06 20:00:55,446] INFO EventThread shut down
(org.apache.zookeeper.ClientCnxn)
[2015-01-06 20:00:55,447] INFO [Kafka Server 100], shut down completed
(kafka.server.KafkaServer)
[2015-01-06 20:00:55,447] INFO [Kafka Server 100], shutting down
(kafka.server.KafkaServer)

Any ideas?




-- 

Alec Li

Re: no space left error

Posted by Sa Li <sa...@gmail.com>.
The complete error message:

-su: cannot create temp file for here-document: No space left on device
OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory file:
   /tmp/hsperfdata_root/19721
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
[2015-01-06 19:50:49,244] FATAL  (kafka.Kafka$)
java.io.FileNotFoundException: conf (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:146)
        at java.io.FileInputStream.<init>(FileInputStream.java:101)
        at kafka.utils.Utils$.loadProps(Utils.scala:144)
        at kafka.Kafka$.main(Kafka.scala:34)
        at kafka.Kafka.main(Kafka.scala)
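On the "-Djava.io.tmpdir=" hint in that output: kafka-run-class.sh passes
$KAFKA_OPTS through to the JVM, so the temp directory can be pointed at a
filesystem with free space (the path below is an example). A sketch that
only builds the setting, without starting a broker:

```shell
# Example path on a roomy filesystem; adjust to your layout.
KAFKA_TMP="/srv/kafka-tmp"
# The startup scripts append $KAFKA_OPTS to the java command line,
# which applies exactly the workaround the JVM warning suggests.
export KAFKA_OPTS="-Djava.io.tmpdir=${KAFKA_TMP}"
echo "${KAFKA_OPTS}"
# To use it for real:
#   mkdir -p "$KAFKA_TMP"
#   bin/kafka-server-start.sh config/server.properties
```

Note the shell's own "cannot create temp file for here-document" failure is
separate: bash writes here-documents under $TMPDIR (default /tmp), so the
underlying fix is still freeing space on /tmp.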




-- 

Alec Li