Posted to users@kafka.apache.org by "McKoy, Nick" <Ni...@washpost.com> on 2016/04/18 21:41:45 UTC

Out of memory - Java Heap space

Hey all,

I have a Kafka cluster of 5 nodes that’s working really hard; CPU is only around 40% idle daily.

I looked at the file descriptor note on this documentation page http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap and decided to give it a shot on one instance in the cluster just to see how it performed. I increased this number to 1048576.
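
For reference, the limit bump itself was along these lines (treat it as a sketch; the exact mechanism depends on how the broker is started, and the "kafka" user name is just an example):

    # one-off, in the shell that starts the broker
    ulimit -n 1048576

    # or persistently, in /etc/security/limits.conf, assuming the broker runs as a "kafka" user
    kafka  soft  nofile  1048576
    kafka  hard  nofile  1048576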

I kept getting this error from the kafka logs:
ERROR [ReplicaFetcherThread--1-6], Error due to (kafka.server.ReplicaFetcherThread) java.lang.OutOfMemoryError: Java heap space

I increased the heap to see if that would help, but I kept seeing these errors. Could the file descriptor change be related to this?
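
For context, the heap bump was done roughly like this, via the environment variable the standard start script reads; the size shown is only illustrative (the script defaults to 1 GB if it is unset):

    export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"   # illustrative size, not a recommendation
    bin/kafka-server-start.sh config/server.properties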



—
Nicholas McKoy
Engineering – Big Data and Personalization
Washington Post Media

One Franklin Square, Washington, DC 20001
Email: nicholas.mckoy@washpost.com


Re: Out of memory - Java Heap space

Posted by Spico Florin <sp...@gmail.com>.
Hi!
  You can set up your Kafka process to dump the heap in case of an OOM by
providing the following flags (see
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html):

   - -XX:HeapDumpPath=path

     This option can be used to specify a location for the heap dump; see
     The -XX:HeapDumpOnOutOfMemoryError Option
     <https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html#CHDFDIJI>.

   - -XX:MaxPermSize=size

     This option can be used to specify the size of the permanent generation
     memory; see Exception in thread thread_name: java.lang.OutOfMemoryError:
     GC Overhead limit exceeded
     <https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html#tahiti1150092>.
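
For example, one way to pass these to the broker JVM is through the KAFKA_OPTS environment variable, which the standard start scripts append to the java command line (the dump path below is only an illustration):

    export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka/heapdump.hprof"
    bin/kafka-server-start.sh config/server.properties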


What Java version are you using?
I suggest working with JDK 8 and the G1 garbage collector; I've heard some
Kafka engineers promote this advice.
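
For example, something along these lines for the broker JVM; the flag values are only illustrative and should be tuned for your workload:

    export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
    bin/kafka-server-start.sh config/server.properties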

I hope this helps.
  Regards,
  Florin



On Wed, Apr 27, 2016 at 12:02 PM, Jaikiran Pai <ja...@gmail.com>
wrote:

> Have you tried getting the memory usage output using tool like jmap and
> seeing what's consuming the memory? Also, what are you heap sizes for the
> process?
>
> -Jaikiran
>
>
> On Tuesday 19 April 2016 02:31 AM, McKoy, Nick wrote:
>
>> To follow up with my last email, I have been looking into
>> socket.receive.buffer.byte as well as socket.send.buffer.bytes. Would it
>> help to increase the buffer for OOM issue?
>>
>> All help is appreciated!
>>
>> Thanks!
>>
>>
>> -nick
>>
>>
>> From: "McKoy, Nick" <Nicholas.McKoy@washpost.com<mailto:
>> Nicholas.McKoy@washpost.com>>
>> Date: Monday, April 18, 2016 at 3:41 PM
>> To: "users@kafka.apache.org<ma...@kafka.apache.org>" <
>> users@kafka.apache.org<ma...@kafka.apache.org>>
>> Subject: Out of memory - Java Heap space
>>
>> Hey all,
>>
>> I have a kafka cluster of 5 nodes that’s working really hard. CPU is
>> around 40% idle daily.
>>
>> I looked at the file descriptor note on this documentation page
>> http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap
>> and decided to give it a shot on one instance in the cluster just to see
>> how it performed. I increased this number to 1048576.
>>
>> I kept getting this error from the kafka logs:
>> ERROR [ReplicaFetcherThread--1-6], Error due to
>> (kafka.server.ReplicaFetcherThread) java.lang.OutOfMemoryError: Java heap
>> space
>>
>> I increased heap to see if that would help and I kept seeing these
>> errors. Could the file descriptor change have something related to this?
>>
>>
>>
>> —
>> Nicholas McKoy
>> Engineering – Big Data and Personalization
>> Washington Post Media
>>
>> One Franklin Square, Washington, DC 20001
>> Email: nicholas.mckoy@washpost.com<ma...@washpost.com>
>>
>>
>

Re: Out of memory - Java Heap space

Posted by Jaikiran Pai <ja...@gmail.com>.
Have you tried getting the memory usage output using a tool like jmap and
seeing what's consuming the memory? Also, what are your heap sizes for
the process?
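
For example, with the JDK tools on the broker host (replace <broker-pid> with the broker's process id):

    jmap -heap <broker-pid>            # heap configuration and current usage
    jmap -histo:live <broker-pid>      # live object counts and sizes per class
    jmap -dump:live,format=b,file=/tmp/kafka-heap.hprof <broker-pid>   # full dump for offline analysis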

-Jaikiran

On Tuesday 19 April 2016 02:31 AM, McKoy, Nick wrote:
> To follow up with my last email, I have been looking into socket.receive.buffer.byte as well as socket.send.buffer.bytes. Would it help to increase the buffer for OOM issue?
>
> All help is appreciated!
>
> Thanks!
>
>
> -nick
>
>
> From: "McKoy, Nick" <Ni...@washpost.com>>
> Date: Monday, April 18, 2016 at 3:41 PM
> To: "users@kafka.apache.org<ma...@kafka.apache.org>" <us...@kafka.apache.org>>
> Subject: Out of memory - Java Heap space
>
> Hey all,
>
> I have a kafka cluster of 5 nodes that’s working really hard. CPU is around 40% idle daily.
>
> I looked at the file descriptor note on this documentation page http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap and decided to give it a shot on one instance in the cluster just to see how it performed. I increased this number to 1048576.
>
> I kept getting this error from the kafka logs:
> ERROR [ReplicaFetcherThread--1-6], Error due to (kafka.server.ReplicaFetcherThread) java.lang.OutOfMemoryError: Java heap space
>
> I increased heap to see if that would help and I kept seeing these errors. Could the file descriptor change have something related to this?
>
>
>
> —
> Nicholas McKoy
> Engineering – Big Data and Personalization
> Washington Post Media
>
> One Franklin Square, Washington, DC 20001
> Email: nicholas.mckoy@washpost.com<ma...@washpost.com>
>


Re: Out of memory - Java Heap space

Posted by "McKoy, Nick" <Ni...@washpost.com>.
To follow up on my last email, I have been looking into socket.receive.buffer.bytes as well as socket.send.buffer.bytes. Would it help to increase these buffers for the OOM issue?
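
Concretely, I was thinking of something along these lines in server.properties (the values are just for illustration, not what we run):

    # server.properties
    socket.send.buffer.bytes=1048576
    socket.receive.buffer.bytes=1048576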

All help is appreciated!

Thanks!


-nick


From: "McKoy, Nick" <Ni...@washpost.com>>
Date: Monday, April 18, 2016 at 3:41 PM
To: "users@kafka.apache.org<ma...@kafka.apache.org>" <us...@kafka.apache.org>>
Subject: Out of memory - Java Heap space

Hey all,

I have a kafka cluster of 5 nodes that’s working really hard. CPU is around 40% idle daily.

I looked at the file descriptor note on this documentation page http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap and decided to give it a shot on one instance in the cluster just to see how it performed. I increased this number to 1048576.

I kept getting this error from the kafka logs:
ERROR [ReplicaFetcherThread--1-6], Error due to (kafka.server.ReplicaFetcherThread) java.lang.OutOfMemoryError: Java heap space

I increased heap to see if that would help and I kept seeing these errors. Could the file descriptor change have something related to this?



—
Nicholas McKoy
Engineering – Big Data and Personalization
Washington Post Media

One Franklin Square, Washington, DC 20001
Email: nicholas.mckoy@washpost.com<ma...@washpost.com>