Posted to user@ignite.apache.org by Swetad90 <sw...@gmail.com> on 2017/05/02 18:22:03 UTC

Re: Combined off heap size?

Hi,

I have a physical 8x32 (8-core, 32 GB RAM) server and I am running 4 data nodes in one cluster group (G1).

I have the below config:
<property name="memoryMode" value="ONHEAP_TIERED"/> 

<property name="cacheMode" value="REPLICATED"/>

<property name="offHeapMaxMemory" value="#{10 * 1024L * 1024L * 1024L}"/>

<property name="evictionPolicy">
 <bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">

<property name="maxSize" value="10000"/>
</bean>
</property>

So basically I am assigning 40 GB of off-heap memory in total (4 nodes x 10 GB each). I know that this is just an upper bound; the nodes will use that memory as and when data comes in. Based on this scenario:

1. What will happen once I have exhausted the 32 GB of physical memory? I don't have swap space configured. How will Ignite behave?

2. Suppose there is a separate cluster group of another 3 nodes (G2) on the same server, used by another application. Will it also be affected when the current 4 nodes (in G1) use up all the memory of the server? Is there any workaround or best practice for handling such multi-tenancy in Ignite?

3. When it comes to this memory allocation, does the OS have anything to do with it, or is everything handled by the JVM/Ignite instance alone?

Thanks.



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Combined-off-heap-size-tp7137p12353.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Combined off heap size?

Posted by Andrey Mashenkov <an...@gmail.com>.
Hi,

1. Ignite will throw GridOffHeapOutOfMemoryException if you try to allocate more than 10 GB, and it will evict entries from the cache once their number exceeds 10,000 (according to your eviction policy).
If you want Ignite to evict entries when they occupy more than 10 GB, you need to configure the eviction policy's maxMemorySize limit instead.
Also, offHeapMaxMemory is a limit on cache data only; Ignite can allocate a little more memory for its system structures.
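
For example, your eviction policy above could be changed like this (a minimal sketch, assuming the Ignite 1.x eviction API, where the policies expose a maxMemorySize property):

<property name="evictionPolicy">
  <bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">
    <!-- Evict entries once they occupy more than 10 GB, regardless of their count. -->
    <property name="maxMemorySize" value="#{10 * 1024L * 1024L * 1024L}"/>
  </bean>
</property>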

2. Are you sure it is even possible to run out of 10 GB with a 10,000-entry limit in your case?
In the 1.x versions each cache has its own off-heap memory pool; this limitation was removed in the 2.0 release.
However, there is no built-in way to handle such multi-tenancy in Ignite, as the nodes run in separate processes with their own virtual memory.

It seems LruEvictionPolicy with a maxMemorySize limit could fit your needs.
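
For example (again a sketch, assuming the same maxMemorySize property on the LRU policy):

<property name="evictionPolicy">
  <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
    <!-- Evict least-recently-used entries once they occupy more than 10 GB. -->
    <property name="maxMemorySize" value="#{10 * 1024L * 1024L * 1024L}"/>
  </bean>
</property>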

3. The OS will start swapping if it can; otherwise, the JVM process will fail with an allocation error.





-- 
Best regards,
Andrey V. Mashenkov