Posted to user@ignite.apache.org by Amol Zambare <am...@gmail.com> on 2018/07/27 18:53:50 UTC

Need help for setting offheap memory

Hi,

We are using Ignite to share in-memory data across Spark jobs.

I am using the configuration below to set Ignite off-heap memory. I would
like to set it to 100 GB.

However, when I print the node statistics using Visor, it shows the off-heap
maximum memory as 1 GB.

Please suggest.

Apache Ignite version 2.3

<property name="dataStorageConfiguration">
          <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <!-- Redefining the default region's settings -->
                <property name="defaultDataRegionConfiguration">
                  <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="Default_Region"/>
                        <!-- Setting the size of the default region to 100GB. -->
                        <property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>
                  </bean>
                </property>
          </bean>
   </property>
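
For reference, a minimal sketch of what I believe is the equivalent
programmatic configuration, assuming a node started from code rather than
from this XML (property names taken from the snippet above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        // Default data region limited to 100 GB of off-heap memory.
        DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
        defaultRegion.setName("Default_Region");
        defaultRegion.setMaxSize(100L * 1024 * 1024 * 1024);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}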

Thanks,
Amol

Re: Need help for setting offheap memory

Posted by Denis Mekhanikov <dm...@gmail.com>.
So, Amol,

Did you look at the heap dump?

Denis

Mon, 6 Aug 2018 at 18:46, Amol Zambare <am...@gmail.com>:

> Hi Alex,
>
> Here is the full stack trace
>
> [INFO][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Finished serving
> remote node connection
> [INFO][tcp-disco-sock-reader-#653][TcpDiscoverySpi] Started serving remote
> node connection
> [SEVERE][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Runtime error caught
> during grid runnable execution: Socket reader [id=313,
> name=tcp-disco-sock-reader-#130,
> nodeId=35a7ca47-3245-4f9f-8114-9b65c6d5e9bf]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOf(Arrays.java:3332)
>         at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>         at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
>         at java.lang.StringBuilder.append(StringBuilder.java:190)
>         at java.io.ObjectInputStream$BlockDataInputStream.readUTFSpan(ObjectInputStream.java:3450)
>         at java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3358)
>         at java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:3170)
>         at java.io.ObjectInputStream.readString(ObjectInputStream.java:1850)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1527)
>         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
>         at org.apache.ignite.internal.util.IgniteUtils.readMap(IgniteUtils.java:5146)
>         at org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode.readExternal(TcpDiscoveryNode.java:617)
>         at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2063)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2012)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
>         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
>         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
>         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
>         at java.util.ArrayList.readObject(ArrayList.java:791)
>         at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2123)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
>         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
>         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
> thread for connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-sock-reader-#654][TcpDiscoverySpi] Started serving remote
> node connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
> thread for connection
>
> Thanks,
> Amol
>
>
>
> On Sat, Aug 4, 2018 at 3:28 AM, Alex Plehanov <pl...@gmail.com>
> wrote:
>
>> Offheap and heap memory regions are used for different purposes and can't
>> replace each other. You can't get rid of OOME in heap by increasing offheap
>> memory.
>> Can you provide full exception stack trace?
>>
>> 2018-08-03 20:55 GMT+03:00 Amol Zambare <am...@gmail.com>:
>>
>>> Thanks Alex and Denis
>>>
>>> We have configured off-heap memory to 100 GB and we have a 10-node Ignite
>>> cluster. However, when we run the Spark job we see the following error in
>>> the Ignite logs. While the Spark job runs, heap utilization on most of the
>>> Ignite nodes increases significantly, even though we are using off-heap
>>> storage. We have set the JVM heap size on each Ignite node to 50 GB. Please
>>> suggest.
>>>
>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>              at java.util.Arrays.copyOf(Arrays.java:3332)
>>>              at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>>>
>>>
>>> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <pl...@gmail.com>
>>> wrote:
>>>
>>>> The "Non-heap memory ..." metrics in Visor have nothing to do with the
>>>> off-heap memory allocated for data regions. The "Non-heap memory" values
>>>> returned by Visor describe JVM-managed memory regions other than the heap,
>>>> used for internal JVM purposes (JIT compiler, etc.; see [1]). Memory
>>>> allocated off-heap by Ignite for data regions (via "unsafe") is not
>>>> included in these metrics. Data-region-related metrics were added to Visor
>>>> in Ignite 2.4.
>>>>
>>>> [1]
>>>> https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>>>>
>>>
>>>
>>
>

Re: Need help for setting offheap memory

Posted by Amol Zambare <am...@gmail.com>.
Hi Alex,

Here is the full stack trace

[INFO][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Finished serving remote
node connection
[INFO][tcp-disco-sock-reader-#653][TcpDiscoverySpi] Started serving remote
node connection
[SEVERE][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Runtime error caught
during grid runnable execution: Socket reader [id=313,
name=tcp-disco-sock-reader-#130,
nodeId=35a7ca47-3245-4f9f-8114-9b65c6d5e9bf]
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
        at java.lang.StringBuilder.append(StringBuilder.java:190)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTFSpan(ObjectInputStream.java:3450)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3358)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:3170)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1850)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1527)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
        at org.apache.ignite.internal.util.IgniteUtils.readMap(IgniteUtils.java:5146)
        at org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode.readExternal(TcpDiscoveryNode.java:617)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2063)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2012)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
        at java.util.ArrayList.readObject(ArrayList.java:791)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2123)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
[INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
connection
[INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
connection
[INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
thread for connection
[INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
connection
[INFO][tcp-disco-sock-reader-#654][TcpDiscoverySpi] Started serving remote
node connection
[INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
thread for connection

Thanks,
Amol



On Sat, Aug 4, 2018 at 3:28 AM, Alex Plehanov <pl...@gmail.com>
wrote:

> Offheap and heap memory regions are used for different purposes and can't
> replace each other. You can't get rid of OOME in heap by increasing offheap
> memory.
> Can you provide full exception stack trace?
>
> 2018-08-03 20:55 GMT+03:00 Amol Zambare <am...@gmail.com>:
>
>> Thanks Alex and Denis
>>
>> We have configured off-heap memory to 100 GB and we have a 10-node Ignite
>> cluster. However, when we run the Spark job we see the following error in
>> the Ignite logs. While the Spark job runs, heap utilization on most of the
>> Ignite nodes increases significantly, even though we are using off-heap
>> storage. We have set the JVM heap size on each Ignite node to 50 GB. Please
>> suggest.
>>
>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>              at java.util.Arrays.copyOf(Arrays.java:3332)
>>              at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>>
>>
>> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <pl...@gmail.com>
>> wrote:
>>
>>> The "Non-heap memory ..." metrics in Visor have nothing to do with the
>>> off-heap memory allocated for data regions. The "Non-heap memory" values
>>> returned by Visor describe JVM-managed memory regions other than the heap,
>>> used for internal JVM purposes (JIT compiler, etc.; see [1]). Memory
>>> allocated off-heap by Ignite for data regions (via "unsafe") is not
>>> included in these metrics. Data-region-related metrics were added to Visor
>>> in Ignite 2.4.
>>>
>>> [1] https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>>>
>>
>>
>

Re: Need help for setting offheap memory

Posted by Denis Mekhanikov <dm...@gmail.com>.
Amol,

Data is pulled onto the heap every time you use it.
So, if your Spark jobs operate over a large amount of data, then heap memory
utilization will be high.
Take a heap dump next time you encounter OutOfMemoryError.
You can make Java take a heap dump every time it fails with OOME:
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html

You'll be able to tell what causes the failure by analyzing the heap dump.
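
For example, the standard HotSpot options for this are the following (the
dump path is just an illustration, adjust it for your nodes):

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/tmp/ignite-heap-dumps

Add them to the JVM options your Ignite server nodes are started with, and a
dump will be written automatically the next time an OOME occurs.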

Denis

Sat, 4 Aug 2018 at 11:29, Alex Plehanov <pl...@gmail.com>:

> Offheap and heap memory regions are used for different purposes and can't
> replace each other. You can't get rid of OOME in heap by increasing offheap
> memory.
> Can you provide full exception stack trace?
>
> 2018-08-03 20:55 GMT+03:00 Amol Zambare <am...@gmail.com>:
>
>> Thanks Alex and Denis
>>
>> We have configured off-heap memory to 100 GB and we have a 10-node Ignite
>> cluster. However, when we run the Spark job we see the following error in
>> the Ignite logs. While the Spark job runs, heap utilization on most of the
>> Ignite nodes increases significantly, even though we are using off-heap
>> storage. We have set the JVM heap size on each Ignite node to 50 GB. Please
>> suggest.
>>
>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>              at java.util.Arrays.copyOf(Arrays.java:3332)
>>              at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>>
>>
>> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <pl...@gmail.com>
>> wrote:
>>
>>> The "Non-heap memory ..." metrics in Visor have nothing to do with the
>>> off-heap memory allocated for data regions. The "Non-heap memory" values
>>> returned by Visor describe JVM-managed memory regions other than the heap,
>>> used for internal JVM purposes (JIT compiler, etc.; see [1]). Memory
>>> allocated off-heap by Ignite for data regions (via "unsafe") is not
>>> included in these metrics. Data-region-related metrics were added to Visor
>>> in Ignite 2.4.
>>>
>>> [1]
>>> https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>>>
>>
>>
>

Re: Need help for setting offheap memory

Posted by Alex Plehanov <pl...@gmail.com>.
Offheap and heap memory regions are used for different purposes and can't
replace each other. You can't get rid of OOME in heap by increasing offheap
memory.
Can you provide full exception stack trace?
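
To illustrate the distinction: the heap limit comes from the JVM options a
node is started with, while the off-heap limit comes from the data region
configuration. A rough sketch (JVM_OPTS is assumed to be the variable picked
up by ignite.sh in a standard distribution, and the sizes are just examples):

# On-heap limit, controlled by the JVM:
export JVM_OPTS="-Xms10g -Xmx50g"

# Off-heap limit, controlled by Ignite (see the XML earlier in the thread):
<property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>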

2018-08-03 20:55 GMT+03:00 Amol Zambare <am...@gmail.com>:

> Thanks Alex and Denis
>
> We have configured off-heap memory to 100 GB and we have a 10-node Ignite
> cluster. However, when we run the Spark job we see the following error in
> the Ignite logs. While the Spark job runs, heap utilization on most of the
> Ignite nodes increases significantly, even though we are using off-heap
> storage. We have set the JVM heap size on each Ignite node to 50 GB. Please
> suggest.
>
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>              at java.util.Arrays.copyOf(Arrays.java:3332)
>              at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>
>
> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <pl...@gmail.com>
> wrote:
>
>> The "Non-heap memory ..." metrics in Visor have nothing to do with the
>> off-heap memory allocated for data regions. The "Non-heap memory" values
>> returned by Visor describe JVM-managed memory regions other than the heap,
>> used for internal JVM purposes (JIT compiler, etc.; see [1]). Memory
>> allocated off-heap by Ignite for data regions (via "unsafe") is not
>> included in these metrics. Data-region-related metrics were added to Visor
>> in Ignite 2.4.
>>
>> [1] https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>>
>
>

Re: Need help for setting offheap memory

Posted by Amol Zambare <am...@gmail.com>.
Thanks Alex and Denis

We have configured off-heap memory to 100 GB and we have a 10-node Ignite
cluster. However, when we run the Spark job we see the following error in
the Ignite logs. While the Spark job runs, heap utilization on most of the
Ignite nodes increases significantly, even though we are using off-heap
storage. We have set the JVM heap size on each Ignite node to 50 GB. Please
suggest.

java.lang.OutOfMemoryError: GC overhead limit exceeded
             at java.util.Arrays.copyOf(Arrays.java:3332)
             at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)


On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <pl...@gmail.com>
wrote:

> The "Non-heap memory ..." metrics in Visor have nothing to do with the
> off-heap memory allocated for data regions. The "Non-heap memory" values
> returned by Visor describe JVM-managed memory regions other than the heap,
> used for internal JVM purposes (JIT compiler, etc.; see [1]). Memory
> allocated off-heap by Ignite for data regions (via "unsafe") is not
> included in these metrics. Data-region-related metrics were added to Visor
> in Ignite 2.4.
>
> [1] https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>

Re: Need help for setting offheap memory

Posted by Alex Plehanov <pl...@gmail.com>.
  "Non-heap memory ..." metrics in visor have nothing to do with offheap
memory allocated for data regions. "Non-heap memory" returned by visor it's
JVM managed memory regions other then heap used for internal JVM purposes
(JIT compiler, etc., see [1]). Memory allocated in offheap by Ignite for
data regions (via "unsafe") not included into this metrics. Some data
region related metrics in visor were implemented in Ignite 2.4.

[1]
https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
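
For reference, a minimal Java sketch of what that Visor metric reflects
(plain java.lang.management API, nothing Ignite-specific):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class NonHeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();

        // "Non-heap" here means JVM-internal areas (metaspace, code cache, ...),
        // not the off-heap memory Ignite allocates for data regions.
        MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage();

        System.out.println("Non-heap used: " + nonHeap.getUsed());
        System.out.println("Non-heap max:  " + nonHeap.getMax());
    }
}

Starting with Ignite 2.4, data region memory can also be inspected
programmatically (Ignite#dataRegionMetrics()), which is presumably what the
last sentence above refers to.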

Re: Need help for setting offheap memory

Posted by Amol Zambare <am...@gmail.com>.
Hi Denis.

I am using the node command in Visor and referring to the "Non-heap memory
maximum" metric:

| Non-heap memory initialized | 2mb
| Non-heap memory used        | 64mb
| Non-heap memory committed   | 66mb
|* Non-heap memory maximum     | 1gb  *

Thanks,
Amol

On Tue, Jul 31, 2018 at 12:21 PM, Denis Mekhanikov <dm...@gmail.com>
wrote:

> Amol,
>
> The configuration looks correct, at least the piece that you provided. Do
> you start the server nodes with this config?
> Which Visor metric do you use to verify the off-heap size?
>
> Denis
>
>
> On Fri, Jul 27, 2018, 21:53 Amol Zambare <am...@gmail.com> wrote:
>
>> Hi,
>>
>> We are using Ignite to share in-memory data across Spark jobs.
>>
>> I am using the configuration below to set Ignite off-heap memory. I would
>> like to set it to 100 GB.
>>
>> However, when I print the node statistics using Visor, it shows the
>> off-heap maximum memory as 1 GB.
>>
>> Please suggest.
>>
>> Apache Ignite version 2.3
>>
>> <property name="dataStorageConfiguration">
>>           <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>                 <!-- Redefining the default region's settings -->
>>                 <property name="defaultDataRegionConfiguration">
>>                   <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>                         <property name="name" value="Default_Region"/>
>>                         <!-- Setting the size of the default region to 100GB. -->
>>                         <property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>
>>                   </bean>
>>                 </property>
>>           </bean>
>>    </property>
>>
>> Thanks,
>> Amol
>>
>

Re: Need help for setting offheap memory

Posted by Denis Mekhanikov <dm...@gmail.com>.
Amol,

The configuration looks correct, at least the piece that you provided. Do
you start the server nodes with this config?
Which Visor metric do you use to verify the off-heap size?

Denis

On Fri, Jul 27, 2018, 21:53 Amol Zambare <am...@gmail.com> wrote:

> Hi,
>
> We are using Ignite to share in-memory data across Spark jobs.
>
> I am using the configuration below to set Ignite off-heap memory. I would
> like to set it to 100 GB.
>
> However, when I print the node statistics using Visor, it shows the
> off-heap maximum memory as 1 GB.
>
> Please suggest.
>
> Apache Ignite version 2.3
>
> <property name="dataStorageConfiguration">
>           <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>                 <!-- Redefining the default region's settings -->
>                 <property name="defaultDataRegionConfiguration">
>                   <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                         <property name="name" value="Default_Region"/>
>                         <!-- Setting the size of the default region to 100GB. -->
>                         <property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>
>                   </bean>
>                 </property>
>           </bean>
>    </property>
>
> Thanks,
> Amol
>