Posted to user@directmemory.apache.org by Rahul Thakur <ra...@gmail.com> on 2011/11/23 04:55:57 UTC

Why is Cache.init(...) limited by the amount of memory available to VM

Greetings everyone!

It's good to see DirectMemory as a member project at Apache \o/. I am very
keen to see it graduate to a top-level project, so please keep up the good
work!

I have been consuming snapshots of DirectMemory for a data-intensive app
requiring off-heap storage. I have hit a couple of blockers:

(a)  If the idea behind off-heap storage is not to be limited by the RAM
available to the VM, why can't the buffer size provided to Cache.init(...)
be larger than the RAM available to the VM?

(b)  My application requires a concurrent Cache that can handle ~100 hits.
How can I achieve that with the DirectMemory Cache?

Sorry about cross-posting this - just wanted to make sure this doesn't drop
off the radar :)

I look forward to your responses.

Many thanks,

Rahul

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
Thanks to you for your attention.

Ciao,
    R
On 26 Nov 2011 at 14:12, "Rahul Thakur" <ra...@gmail.com>
wrote:

> Thanks for the responses, guys!
>
> I will explore other options you have mentioned.
>
> Cheers,
>
> Rahul
>
>
>
> On Fri, Nov 25, 2011 at 10:55 AM, Ashish <pa...@gmail.com> wrote:
>
>> @rahul - IMHO DirectMemory shall never leverage Virtual Memory. The
>> moment you start hitting disk, the latencies shoots up and the whole
>> purpose of DirectMemory is defeated. As Raffaele has already indicated
>> that DirectMemory intends to use memory outside JVM Heap (RAM) to
>> avoid potential GC issues, and other cache systems(JCS/Ehcache)
>> already have Disk stores.
>>
>> HTH !
>>
>> On Fri, Nov 25, 2011 at 1:05 AM, Raffaele P. Guidi
>> <ra...@gmail.com> wrote:
>> > I'm not sure I get it... if you need more memory than your ram you
>> should
>> > use JCS (or ehcache) with the overflow to disk option. DirectMemory's
>> only
>> > goal is to improve the way your RAM is used, not to "enlarge" it
>> (although
>> > "enlarge your memory" is a nicer payoff than "an off-heap cache harness
>> for
>> > the JVM"! :D).
>> > Ciao,
>> >    Raffaele
>> >
>> > On Thu, Nov 24, 2011 at 1:50 PM, Rahul Thakur <
>> rahul.thakur.xdev@gmail.com>
>> > wrote:
>> >>
>> >> Thanks, guys - this makes sense.
>> >> Are there any plans to make DirectMemory leverage Virtual memory? My
>> >> experience with NIO is fairly limited. Having said, is it possible to
>> use
>> >> Memory Mapped Files?
>> >>
>> >> Again, thanks heaps for the prompt responses.
>> >> Cheers,
>> >>
>> >> Rahul
>> >>
>> >> On Wed, Nov 23, 2011 at 11:40 PM, Raffaele P. Guidi
>> >> <ra...@gmail.com> wrote:
>> >>>
>> >>> Mir is totally right. The size of the direct memory should be set to
>> >>> something less than physical memory - o/s required memory - heap
>> size, if
>> >>> you try to use more the o/s goes swapping on disk degrading
>> performance
>> >>> _dramatically_ - simply don't do it. Also for MappedByteBuffers -
>> yes, they
>> >>> are into the heap, and can be useful but not in this case :)
>> >>> Ciao,
>> >>>     R
>> >>>
>> >>> On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain
>> >>> <mi...@gmail.com> wrote:
>> >>>>
>> >>>> Hi Rahul,
>> >>>>
>> >>>> a: for slowness I am guessing that you are hitting the swap since you
>> >>>> are trying to use more physical ram than what you have installed on
>> your
>> >>>> machine.
>> >>>>
>> >>>> b:  I think DirectMemory is trying to optimize for physical RAM. As a
>> >>>> result, I don't think MappedByteBuffer would be helpful since doing
>> so would
>> >>>> be using the file system.
>> >>>> DirectMemory as it stands right now is only a off heap cache. JCS,
>> >>>> another apache project, is a heap cache. I am working on to bridge
>> JCS and
>> >>>> DirectMemory so that JCS can be used as L1 cache, and DirectMemory
>> as L2.
>> >>>> After that, you can use combination of JCS and DirectMemory for your
>> >>>> purpose.
>> >>>> This are just my opinions. Raffaele can shed more light on it.
>> >>>> Thanks,
>> >>>> Mir
>> >>>> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur
>> >>>> <ra...@gmail.com> wrote:
>> >>>>>
>> >>>>> Thanks Raffaele.
>> >>>>> I tried your suggestions, I have couple of more queries on which I'd
>> >>>>> really appreciate your inputs:
>> >>>>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has
>> max
>> >>>>> 4Gb of RAM, the machine goes dog slow when I try to run tests with
>> any value
>> >>>>> larger than the physical RAM available. It seems like the JVM
>> initialization
>> >>>>> takes a relatively long time with that flag value set; the actual
>> JUnit
>> >>>>> tests seem to run quickly though.
>> >>>>> (b)  I noticed in OffHeapMemoryBuffer implementation,
>> >>>>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would
>> using a
>> >>>>> MappedByteBuffer help rather than ByteBuffer?
>> >>>>> The application is expected to use data which can be hundreds
>> (possibly
>> >>>>> thousands) of Gigabytes and it would be preferable to keep only a
>> subset of
>> >>>>> data loaded in the JVM heap.
>> >>>>> Look forward to your suggestions.
>> >>>>> Thanks,
>> >>>>> Rahul
>> >>>>>
>> >>>>>
>> >>>>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi
>> >>>>> <ra...@gmail.com> wrote:
>> >>>>>>
>> >>>>>> Well regarding your first point - it is not, you just have to
>> specify
>> >>>>>> the -XXDirectMemorySize jvm option to whatever you need (the
>> default is the
>> >>>>>> heap size, just increase it) and split it in <2gb chunks.
>> Regarding the
>> >>>>>> second one there should be no problem even we try to keep locks at
>> the
>> >>>>>> minimum, should you have trouble there can be some tuning to put
>> in place,
>> >>>>>> let us know.
>> >>>>>>
>> >>>>>> Ciao,
>> >>>>>>     R
>> >>>>>>
>> >>>>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur"
>> >>>>>> <ra...@gmail.com> ha scritto:
>> >>>>>>>
>> >>>>>>> Greetings everyone!
>> >>>>>>> Its good to see Direct Memory a member project at Apache \o/. I am
>> >>>>>>> very keen to see it graduate to top-level project, so please keep
>> up the
>> >>>>>>> good work!
>> >>>>>>> I have been consuming snapshots of DirectMemory for a data
>> intensive
>> >>>>>>> app requiring off-heap storage. I have hit a couple of blockers:
>> >>>>>>> (a)  If the idea behind off-heap storage is not to be limited by
>> the
>> >>>>>>> RAM available to VM, then why can't the buffer size provided to
>> >>>>>>> Cache.init(...) be more than the RAM available to VM.
>> >>>>>>> (b)  Application requirement in my case is to have a concurrent
>> Cache
>> >>>>>>> that can handle ~100 hits. How can I achieve that with
>> DirectMemory Cache?
>> >>>>>>> Sorry about cross-posting this - just wanted to make sure this
>> >>>>>>> doesn't drop off the radar :)
>> >>>>>>> Look forward to you responses.
>> >>>>>>> Many thanks,
>> >>>>>>>
>> >>>>>>> Rahul
>> >>>>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>> >
>>
>>
>>
>> --
>> thanks
>> ashish
>>
>> Blog: http://www.ashishpaliwal.com/blog
>> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>>
>
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by Rahul Thakur <ra...@gmail.com>.
Thanks for the responses, guys!

I will explore other options you have mentioned.

Cheers,

Rahul



On Fri, Nov 25, 2011 at 10:55 AM, Ashish <pa...@gmail.com> wrote:

> @rahul - IMHO DirectMemory shall never leverage Virtual Memory. The
> moment you start hitting disk, the latencies shoots up and the whole
> purpose of DirectMemory is defeated. As Raffaele has already indicated
> that DirectMemory intends to use memory outside JVM Heap (RAM) to
> avoid potential GC issues, and other cache systems(JCS/Ehcache)
> already have Disk stores.
>
> HTH !
>
> On Fri, Nov 25, 2011 at 1:05 AM, Raffaele P. Guidi
> <ra...@gmail.com> wrote:
> > I'm not sure I get it... if you need more memory than your ram you should
> > use JCS (or ehcache) with the overflow to disk option. DirectMemory's
> only
> > goal is to improve the way your RAM is used, not to "enlarge" it
> (although
> > "enlarge your memory" is a nicer payoff than "an off-heap cache harness
> for
> > the JVM"! :D).
> > Ciao,
> >    Raffaele
> >
> > On Thu, Nov 24, 2011 at 1:50 PM, Rahul Thakur <
> rahul.thakur.xdev@gmail.com>
> > wrote:
> >>
> >> Thanks, guys - this makes sense.
> >> Are there any plans to make DirectMemory leverage Virtual memory? My
> >> experience with NIO is fairly limited. Having said, is it possible to
> use
> >> Memory Mapped Files?
> >>
> >> Again, thanks heaps for the prompt responses.
> >> Cheers,
> >>
> >> Rahul
> >>
> >> On Wed, Nov 23, 2011 at 11:40 PM, Raffaele P. Guidi
> >> <ra...@gmail.com> wrote:
> >>>
> >>> Mir is totally right. The size of the direct memory should be set to
> >>> something less than physical memory - o/s required memory - heap size,
> if
> >>> you try to use more the o/s goes swapping on disk degrading performance
> >>> _dramatically_ - simply don't do it. Also for MappedByteBuffers - yes,
> they
> >>> are into the heap, and can be useful but not in this case :)
> >>> Ciao,
> >>>     R
> >>>
> >>> On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain
> >>> <mi...@gmail.com> wrote:
> >>>>
> >>>> Hi Rahul,
> >>>>
> >>>> a: for slowness I am guessing that you are hitting the swap since you
> >>>> are trying to use more physical ram than what you have installed on
> your
> >>>> machine.
> >>>>
> >>>> b:  I think DirectMemory is trying to optimize for physical RAM. As a
> >>>> result, I don't think MappedByteBuffer would be helpful since doing
> so would
> >>>> be using the file system.
> >>>> DirectMemory as it stands right now is only a off heap cache. JCS,
> >>>> another apache project, is a heap cache. I am working on to bridge
> JCS and
> >>>> DirectMemory so that JCS can be used as L1 cache, and DirectMemory as
> L2.
> >>>> After that, you can use combination of JCS and DirectMemory for your
> >>>> purpose.
> >>>> This are just my opinions. Raffaele can shed more light on it.
> >>>> Thanks,
> >>>> Mir
> >>>> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur
> >>>> <ra...@gmail.com> wrote:
> >>>>>
> >>>>> Thanks Raffaele.
> >>>>> I tried your suggestions, I have couple of more queries on which I'd
> >>>>> really appreciate your inputs:
> >>>>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has max
> >>>>> 4Gb of RAM, the machine goes dog slow when I try to run tests with
> any value
> >>>>> larger than the physical RAM available. It seems like the JVM
> initialization
> >>>>> takes a relatively long time with that flag value set; the actual
> JUnit
> >>>>> tests seem to run quickly though.
> >>>>> (b)  I noticed in OffHeapMemoryBuffer implementation,
> >>>>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would
> using a
> >>>>> MappedByteBuffer help rather than ByteBuffer?
> >>>>> The application is expected to use data which can be hundreds
> (possibly
> >>>>> thousands) of Gigabytes and it would be preferable to keep only a
> subset of
> >>>>> data loaded in the JVM heap.
> >>>>> Look forward to your suggestions.
> >>>>> Thanks,
> >>>>> Rahul
> >>>>>
> >>>>>
> >>>>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi
> >>>>> <ra...@gmail.com> wrote:
> >>>>>>
> >>>>>> Well regarding your first point - it is not, you just have to
> specify
> >>>>>> the -XXDirectMemorySize jvm option to whatever you need (the
> default is the
> >>>>>> heap size, just increase it) and split it in <2gb chunks. Regarding
> the
> >>>>>> second one there should be no problem even we try to keep locks at
> the
> >>>>>> minimum, should you have trouble there can be some tuning to put in
> place,
> >>>>>> let us know.
> >>>>>>
> >>>>>> Ciao,
> >>>>>>     R
> >>>>>>
> >>>>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur"
> >>>>>> <ra...@gmail.com> ha scritto:
> >>>>>>>
> >>>>>>> Greetings everyone!
> >>>>>>> Its good to see Direct Memory a member project at Apache \o/. I am
> >>>>>>> very keen to see it graduate to top-level project, so please keep
> up the
> >>>>>>> good work!
> >>>>>>> I have been consuming snapshots of DirectMemory for a data
> intensive
> >>>>>>> app requiring off-heap storage. I have hit a couple of blockers:
> >>>>>>> (a)  If the idea behind off-heap storage is not to be limited by
> the
> >>>>>>> RAM available to VM, then why can't the buffer size provided to
> >>>>>>> Cache.init(...) be more than the RAM available to VM.
> >>>>>>> (b)  Application requirement in my case is to have a concurrent
> Cache
> >>>>>>> that can handle ~100 hits. How can I achieve that with
> DirectMemory Cache?
> >>>>>>> Sorry about cross-posting this - just wanted to make sure this
> >>>>>>> doesn't drop off the radar :)
> >>>>>>> Look forward to you responses.
> >>>>>>> Many thanks,
> >>>>>>>
> >>>>>>> Rahul
> >>>>>>>
> >>>>>
> >>>>
> >>>
> >>
> >
> >
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by Ashish <pa...@gmail.com>.
@rahul - IMHO DirectMemory should never leverage virtual memory. The
moment you start hitting disk, latencies shoot up and the whole
purpose of DirectMemory is defeated. As Raffaele has already indicated,
DirectMemory intends to use memory outside the JVM heap (but still in RAM)
to avoid potential GC issues, and other cache systems (JCS/Ehcache)
already have disk stores.

HTH !

On Fri, Nov 25, 2011 at 1:05 AM, Raffaele P. Guidi
<ra...@gmail.com> wrote:
> I'm not sure I get it... if you need more memory than your ram you should
> use JCS (or ehcache) with the overflow to disk option. DirectMemory's only
> goal is to improve the way your RAM is used, not to "enlarge" it (although
> "enlarge your memory" is a nicer payoff than "an off-heap cache harness for
> the JVM"! :D).
> Ciao,
>    Raffaele
>
> On Thu, Nov 24, 2011 at 1:50 PM, Rahul Thakur <ra...@gmail.com>
> wrote:
>>
>> Thanks, guys - this makes sense.
>> Are there any plans to make DirectMemory leverage Virtual memory? My
>> experience with NIO is fairly limited. Having said, is it possible to use
>> Memory Mapped Files?
>>
>> Again, thanks heaps for the prompt responses.
>> Cheers,
>>
>> Rahul
>>
>> On Wed, Nov 23, 2011 at 11:40 PM, Raffaele P. Guidi
>> <ra...@gmail.com> wrote:
>>>
>>> Mir is totally right. The size of the direct memory should be set to
>>> something less than physical memory - o/s required memory - heap size, if
>>> you try to use more the o/s goes swapping on disk degrading performance
>>> _dramatically_ - simply don't do it. Also for MappedByteBuffers - yes, they
>>> are into the heap, and can be useful but not in this case :)
>>> Ciao,
>>>     R
>>>
>>> On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain
>>> <mi...@gmail.com> wrote:
>>>>
>>>> Hi Rahul,
>>>>
>>>> a: for slowness I am guessing that you are hitting the swap since you
>>>> are trying to use more physical ram than what you have installed on your
>>>> machine.
>>>>
>>>> b:  I think DirectMemory is trying to optimize for physical RAM. As a
>>>> result, I don't think MappedByteBuffer would be helpful since doing so would
>>>> be using the file system.
>>>> DirectMemory as it stands right now is only a off heap cache. JCS,
>>>> another apache project, is a heap cache. I am working on to bridge JCS and
>>>> DirectMemory so that JCS can be used as L1 cache, and DirectMemory as L2.
>>>> After that, you can use combination of JCS and DirectMemory for your
>>>> purpose.
>>>> This are just my opinions. Raffaele can shed more light on it.
>>>> Thanks,
>>>> Mir
>>>> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur
>>>> <ra...@gmail.com> wrote:
>>>>>
>>>>> Thanks Raffaele.
>>>>> I tried your suggestions, I have couple of more queries on which I'd
>>>>> really appreciate your inputs:
>>>>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has max
>>>>> 4Gb of RAM, the machine goes dog slow when I try to run tests with any value
>>>>> larger than the physical RAM available. It seems like the JVM initialization
>>>>> takes a relatively long time with that flag value set; the actual JUnit
>>>>> tests seem to run quickly though.
>>>>> (b)  I noticed in OffHeapMemoryBuffer implementation,
>>>>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
>>>>> MappedByteBuffer help rather than ByteBuffer?
>>>>> The application is expected to use data which can be hundreds (possibly
>>>>> thousands) of Gigabytes and it would be preferable to keep only a subset of
>>>>> data loaded in the JVM heap.
>>>>> Look forward to your suggestions.
>>>>> Thanks,
>>>>> Rahul
>>>>>
>>>>>
>>>>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi
>>>>> <ra...@gmail.com> wrote:
>>>>>>
>>>>>> Well regarding your first point - it is not, you just have to specify
>>>>>> the -XXDirectMemorySize jvm option to whatever you need (the default is the
>>>>>> heap size, just increase it) and split it in <2gb chunks. Regarding the
>>>>>> second one there should be no problem even we try to keep locks at the
>>>>>> minimum, should you have trouble there can be some tuning to put in place,
>>>>>> let us know.
>>>>>>
>>>>>> Ciao,
>>>>>>     R
>>>>>>
>>>>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur"
>>>>>> <ra...@gmail.com> ha scritto:
>>>>>>>
>>>>>>> Greetings everyone!
>>>>>>> Its good to see Direct Memory a member project at Apache \o/. I am
>>>>>>> very keen to see it graduate to top-level project, so please keep up the
>>>>>>> good work!
>>>>>>> I have been consuming snapshots of DirectMemory for a data intensive
>>>>>>> app requiring off-heap storage. I have hit a couple of blockers:
>>>>>>> (a)  If the idea behind off-heap storage is not to be limited by the
>>>>>>> RAM available to VM, then why can't the buffer size provided to
>>>>>>> Cache.init(...) be more than the RAM available to VM.
>>>>>>> (b)  Application requirement in my case is to have a concurrent Cache
>>>>>>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>>>>>> Sorry about cross-posting this - just wanted to make sure this
>>>>>>> doesn't drop off the radar :)
>>>>>>> Look forward to you responses.
>>>>>>> Many thanks,
>>>>>>>
>>>>>>> Rahul
>>>>>>>
>>>>>
>>>>
>>>
>>
>
>



-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
I'm not sure I get it... if you need more memory than your RAM, you should
use JCS (or Ehcache) with the overflow-to-disk option. DirectMemory's only
goal is to improve the way your RAM is used, not to "enlarge" it (although
"enlarge your memory" is a nicer tagline than "an off-heap cache harness for
the JVM"! :D).

Ciao,
   Raffaele
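
For reference, the overflow-to-disk setup mentioned above looks roughly like
this in an Ehcache 2.x `ehcache.xml`. The attribute names here are from
memory, so treat it as a sketch to check against the Ehcache docs rather
than a verified config:

```xml
<ehcache>
  <!-- Where overflowed entries are spooled; java.io.tmpdir is the usual default. -->
  <diskStore path="java.io.tmpdir"/>

  <cache name="bigDataCache"
         maxElementsInMemory="10000"
         overflowToDisk="true"
         eternal="false"
         timeToLiveSeconds="600"/>
</ehcache>
```

With `overflowToDisk="true"`, entries beyond the in-memory limit spill to the
disk store instead of being evicted outright - exactly the "more data than
RAM" case described above.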

On Thu, Nov 24, 2011 at 1:50 PM, Rahul Thakur
<ra...@gmail.com>wrote:

> Thanks, guys - this makes sense.
>
> Are there any plans to make DirectMemory leverage Virtual memory? My
> experience with NIO is fairly limited. Having said, is it possible to use
> Memory Mapped Files?
>
> Again, thanks heaps for the prompt responses.
>
> Cheers,
>
> Rahul
>
>
> On Wed, Nov 23, 2011 at 11:40 PM, Raffaele P. Guidi <
> raffaele.p.guidi@gmail.com> wrote:
>
>> Mir is totally right. The size of the direct memory should be set to
>> something less than physical memory - o/s required memory - heap size, if
>> you try to use more the o/s goes swapping on disk degrading performance
>> _dramatically_ - simply don't do it. Also for MappedByteBuffers - yes, they
>> are into the heap, and can be useful but not in this case :)
>>
>> Ciao,
>>     R
>>
>>
>> On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain <
>> mir.tanvir.hossain@gmail.com> wrote:
>>
>>> Hi Rahul,
>>>
>>> a: for slowness I am guessing that you are hitting the swap since you
>>> are trying to use more physical ram than what you have installed on your
>>> machine.
>>>
>>> b:  I think DirectMemory is trying to optimize for physical RAM. As a
>>> result, I don't think MappedByteBuffer would be helpful since doing so
>>> would be using the file system.
>>>
>>> DirectMemory as it stands right now is only a off heap cache. JCS,
>>> another apache project, is a heap cache. I am working on to bridge JCS and
>>> DirectMemory so that JCS can be used as L1 cache, and DirectMemory as L2.
>>> After that, you can use combination of JCS and DirectMemory for your
>>> purpose.
>>>
>>> This are just my opinions. Raffaele can shed more light on it.
>>>
>>> Thanks,
>>> Mir
>>>
>>> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur <
>>> rahul.thakur.xdev@gmail.com> wrote:
>>>
>>>> Thanks Raffaele.
>>>>
>>>> I tried your suggestions, I have couple of more queries on which I'd
>>>> really appreciate your inputs:
>>>>
>>>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has max
>>>> 4Gb of RAM, the machine goes dog slow when I try to run tests with any
>>>> value larger than the physical RAM available. It seems like the JVM
>>>> initialization takes a relatively long time with that flag value set; the
>>>> actual JUnit tests seem to run quickly though.
>>>>
>>>> (b)  I noticed in OffHeapMemoryBuffer implementation,
>>>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
>>>> MappedByteBuffer help rather than ByteBuffer?
>>>>
>>>> The application is expected to use data which can be hundreds (possibly
>>>> thousands) of Gigabytes and it would be preferable to keep only a subset of
>>>> data loaded in the JVM heap.
>>>>
>>>> Look forward to your suggestions.
>>>>
>>>> Thanks,
>>>>
>>>> Rahul
>>>>
>>>>
>>>>
>>>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi <
>>>> raffaele.p.guidi@gmail.com> wrote:
>>>>
>>>>> Well regarding your first point - it is not, you just have to specify
>>>>> the -XXDirectMemorySize jvm option to whatever you need (the default is the
>>>>> heap size, just increase it) and split it in <2gb chunks. Regarding the
>>>>> second one there should be no problem even we try to keep locks at the
>>>>> minimum, should you have trouble there can be some tuning to put in place,
>>>>> let us know.
>>>>>
>>>>> Ciao,
>>>>>     R
>>>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur" <
>>>>> rahul.thakur.xdev@gmail.com> ha scritto:
>>>>>
>>>>> Greetings everyone!
>>>>>>
>>>>>> Its good to see Direct Memory a member project at Apache \o/. I am
>>>>>> very keen to see it graduate to top-level project, so please keep up the
>>>>>> good work!
>>>>>>
>>>>>> I have been consuming snapshots of DirectMemory for a data intensive
>>>>>> app requiring off-heap storage. I have hit a couple of blockers:
>>>>>>
>>>>>> (a)  If the idea behind off-heap storage is not to be limited by the
>>>>>> RAM available to VM, then why can't the buffer size provided to
>>>>>> Cache.init(...) be more than the RAM available to VM.
>>>>>>
>>>>>> (b)  Application requirement in my case is to have a concurrent Cache
>>>>>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>>>>>
>>>>>> Sorry about cross-posting this - just wanted to make sure this
>>>>>> doesn't drop off the radar :)
>>>>>>
>>>>>> Look forward to you responses.
>>>>>>
>>>>>> Many thanks,
>>>>>>
>>>>>> Rahul
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>
>>
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by Rahul Thakur <ra...@gmail.com>.
Thanks, guys - this makes sense.

Are there any plans to make DirectMemory leverage virtual memory? My
experience with NIO is fairly limited. That said, is it possible to use
memory-mapped files?

Again, thanks heaps for the prompt responses.

Cheers,

Rahul


On Wed, Nov 23, 2011 at 11:40 PM, Raffaele P. Guidi <
raffaele.p.guidi@gmail.com> wrote:

> Mir is totally right. The size of the direct memory should be set to
> something less than physical memory - o/s required memory - heap size, if
> you try to use more the o/s goes swapping on disk degrading performance
> _dramatically_ - simply don't do it. Also for MappedByteBuffers - yes, they
> are into the heap, and can be useful but not in this case :)
>
> Ciao,
>     R
>
>
> On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain <
> mir.tanvir.hossain@gmail.com> wrote:
>
>> Hi Rahul,
>>
>> a: for slowness I am guessing that you are hitting the swap since you are
>> trying to use more physical ram than what you have installed on your
>> machine.
>>
>> b:  I think DirectMemory is trying to optimize for physical RAM. As a
>> result, I don't think MappedByteBuffer would be helpful since doing so
>> would be using the file system.
>>
>> DirectMemory as it stands right now is only a off heap cache. JCS,
>> another apache project, is a heap cache. I am working on to bridge JCS and
>> DirectMemory so that JCS can be used as L1 cache, and DirectMemory as L2.
>> After that, you can use combination of JCS and DirectMemory for your
>> purpose.
>>
>> This are just my opinions. Raffaele can shed more light on it.
>>
>> Thanks,
>> Mir
>>
>> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur <
>> rahul.thakur.xdev@gmail.com> wrote:
>>
>>> Thanks Raffaele.
>>>
>>> I tried your suggestions, I have couple of more queries on which I'd
>>> really appreciate your inputs:
>>>
>>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has max
>>> 4Gb of RAM, the machine goes dog slow when I try to run tests with any
>>> value larger than the physical RAM available. It seems like the JVM
>>> initialization takes a relatively long time with that flag value set; the
>>> actual JUnit tests seem to run quickly though.
>>>
>>> (b)  I noticed in OffHeapMemoryBuffer implementation,
>>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
>>> MappedByteBuffer help rather than ByteBuffer?
>>>
>>> The application is expected to use data which can be hundreds (possibly
>>> thousands) of Gigabytes and it would be preferable to keep only a subset of
>>> data loaded in the JVM heap.
>>>
>>> Look forward to your suggestions.
>>>
>>> Thanks,
>>>
>>> Rahul
>>>
>>>
>>>
>>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi <
>>> raffaele.p.guidi@gmail.com> wrote:
>>>
>>>> Well regarding your first point - it is not, you just have to specify
>>>> the -XXDirectMemorySize jvm option to whatever you need (the default is the
>>>> heap size, just increase it) and split it in <2gb chunks. Regarding the
>>>> second one there should be no problem even we try to keep locks at the
>>>> minimum, should you have trouble there can be some tuning to put in place,
>>>> let us know.
>>>>
>>>> Ciao,
>>>>     R
>>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur" <
>>>> rahul.thakur.xdev@gmail.com> ha scritto:
>>>>
>>>> Greetings everyone!
>>>>>
>>>>> Its good to see Direct Memory a member project at Apache \o/. I am
>>>>> very keen to see it graduate to top-level project, so please keep up the
>>>>> good work!
>>>>>
>>>>> I have been consuming snapshots of DirectMemory for a data intensive
>>>>> app requiring off-heap storage. I have hit a couple of blockers:
>>>>>
>>>>> (a)  If the idea behind off-heap storage is not to be limited by the
>>>>> RAM available to VM, then why can't the buffer size provided to
>>>>> Cache.init(...) be more than the RAM available to VM.
>>>>>
>>>>> (b)  Application requirement in my case is to have a concurrent Cache
>>>>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>>>>
>>>>> Sorry about cross-posting this - just wanted to make sure this doesn't
>>>>> drop off the radar :)
>>>>>
>>>>> Look forward to you responses.
>>>>>
>>>>> Many thanks,
>>>>>
>>>>> Rahul
>>>>>
>>>>>
>>>>>
>>>
>>
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
Mir is totally right. The direct memory size should be set to something less
than (physical memory - OS-required memory - heap size); if you try to use
more, the OS starts swapping to disk, degrading performance _dramatically_ -
simply don't do it. As for MappedByteBuffers - yes, they also live outside
the heap, and they can be useful, but not in this case :)

Ciao,
    R
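
In other words, keep heap plus direct memory under physical RAM. A toy budget
calculation (all figures are illustrative assumptions for a 4 GB machine like
the one in this thread, not measured values):

```java
// Sketch: budget direct memory as physical RAM minus an OS reserve minus the heap.
public class DirectMemoryBudget {
    public static void main(String[] args) {
        long physicalMb = 4096;  // assumed: a 4 GB machine, as in this thread
        long osReserveMb = 1024; // assumed: ~1 GB left for the OS and other processes
        long heapMb = 512;       // the -Xmx value you plan to run with
        long directMb = physicalMb - osReserveMb - heapMb;
        // Prints: -Xmx512m -XX:MaxDirectMemorySize=2560m
        System.out.println("-Xmx" + heapMb + "m -XX:MaxDirectMemorySize=" + directMb + "m");
    }
}
```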

On Wed, Nov 23, 2011 at 6:25 PM, Mir Tanvir Hossain <
mir.tanvir.hossain@gmail.com> wrote:

> Hi Rahul,
>
> a: for slowness I am guessing that you are hitting the swap since you are
> trying to use more physical ram than what you have installed on your
> machine.
>
> b:  I think DirectMemory is trying to optimize for physical RAM. As a
> result, I don't think MappedByteBuffer would be helpful since doing so
> would be using the file system.
>
> DirectMemory as it stands right now is only a off heap cache. JCS, another
> apache project, is a heap cache. I am working on to bridge JCS and
> DirectMemory so that JCS can be used as L1 cache, and DirectMemory as L2.
> After that, you can use combination of JCS and DirectMemory for your
> purpose.
>
> This are just my opinions. Raffaele can shed more light on it.
>
> Thanks,
> Mir
>
> On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur <rahul.thakur.xdev@gmail.com
> > wrote:
>
>> Thanks Raffaele.
>>
>> I tried your suggestions, I have couple of more queries on which I'd
>> really appreciate your inputs:
>>
>> (a)  I set the -XX:MaxDirectMemorySize=6g on my machine which has max 4Gb
>> of RAM, the machine goes dog slow when I try to run tests with any value
>> larger than the physical RAM available. It seems like the JVM
>> initialization takes a relatively long time with that flag value set; the
>> actual JUnit tests seem to run quickly though.
>>
>> (b)  I noticed in OffHeapMemoryBuffer implementation,
>> ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
>> MappedByteBuffer help rather than ByteBuffer?
>>
>> The application is expected to use data which can be hundreds (possibly
>> thousands) of Gigabytes and it would be preferable to keep only a subset of
>> data loaded in the JVM heap.
>>
>> Look forward to your suggestions.
>>
>> Thanks,
>>
>> Rahul
>>
>>
>>
>> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi <
>> raffaele.p.guidi@gmail.com> wrote:
>>
>>> Well regarding your first point - it is not, you just have to specify
>>> the -XXDirectMemorySize jvm option to whatever you need (the default is the
>>> heap size, just increase it) and split it in <2gb chunks. Regarding the
>>> second one there should be no problem even we try to keep locks at the
>>> minimum, should you have trouble there can be some tuning to put in place,
>>> let us know.
>>>
>>> Ciao,
>>>     R
>>> Il giorno 23/nov/2011 03:56, "Rahul Thakur" <ra...@gmail.com>
>>> ha scritto:
>>>
>>> Greetings everyone!
>>>>
>>>> Its good to see Direct Memory a member project at Apache \o/. I am very
>>>> keen to see it graduate to top-level project, so please keep up the good
>>>> work!
>>>>
>>>> I have been consuming snapshots of DirectMemory for a data intensive
>>>> app requiring off-heap storage. I have hit a couple of blockers:
>>>>
>>>> (a)  If the idea behind off-heap storage is not to be limited by the
>>>> RAM available to VM, then why can't the buffer size provided to
>>>> Cache.init(...) be more than the RAM available to VM.
>>>>
>>>> (b)  Application requirement in my case is to have a concurrent Cache
>>>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>>>
>>>> Sorry about cross-posting this - just wanted to make sure this doesn't
>>>> drop off the radar :)
>>>>
>>>> Look forward to you responses.
>>>>
>>>> Many thanks,
>>>>
>>>> Rahul
>>>>
>>>>
>>>>
>>
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by Mir Tanvir Hossain <mi...@gmail.com>.
Hi Rahul,

a: For the slowness, I am guessing that you are hitting swap, since you are
trying to use more RAM than what you have physically installed on your
machine.

b:  I think DirectMemory is trying to optimize for physical RAM. As a
result, I don't think a MappedByteBuffer would be helpful, since using one
means going through the file system.

DirectMemory, as it stands right now, is only an off-heap cache. JCS, another
Apache project, is an on-heap cache. I am working on bridging JCS and
DirectMemory so that JCS can be used as the L1 cache and DirectMemory as the
L2. After that, you can use a combination of JCS and DirectMemory for your
purpose.
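
The L1/L2 idea can be sketched roughly as below. This is an illustrative
sketch only, not the JCS or DirectMemory API; the class and method names are
made up for the example. Lookups check a small on-heap map first and fall
back to an off-heap store, promoting hits back to L1:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a tiered cache: on-heap L1 in front of an
// off-heap L2 (values held in direct ByteBuffers outside the JVM heap).
public class TieredCache {
    private final Map<String, String> l1 = new HashMap<>();     // on-heap L1
    private final Map<String, ByteBuffer> l2 = new HashMap<>(); // off-heap L2 payloads

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length); // payload goes off-heap
        buf.put(bytes).flip();
        l2.put(key, buf);
    }

    public String get(String key) {
        String hit = l1.get(key);
        if (hit != null) return hit;          // L1 hit: no deserialization
        ByteBuffer buf = l2.get(key);
        if (buf == null) return null;         // full miss
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes);           // copy out of the off-heap buffer
        String value = new String(bytes, StandardCharsets.UTF_8);
        l1.put(key, value);                   // promote to L1 for next time
        return value;
    }
}
```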

These are just my opinions. Raffaele can shed more light on this.

Thanks,
Mir
On Wed, Nov 23, 2011 at 9:10 AM, Rahul Thakur
<ra...@gmail.com> wrote:

> Thanks Raffaele.
>
> I tried your suggestions. I have a couple more queries on which I'd
> really appreciate your inputs:
>
> (a)  I set -XX:MaxDirectMemorySize=6g on my machine, which has at most 4GB
> of RAM; the machine goes dog slow when I try to run tests with any value
> larger than the physical RAM available. It seems like the JVM
> initialization takes a relatively long time with that flag value set; the
> actual JUnit tests seem to run quickly though.
>
> (b)  I noticed that in the OffHeapMemoryBuffer implementation,
> ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
> MappedByteBuffer help rather than ByteBuffer?
>
> The application is expected to use data which can be hundreds (possibly
> thousands) of gigabytes and it would be preferable to keep only a subset of
> data loaded in the JVM heap.
>
> Look forward to your suggestions.
>
> Thanks,
>
> Rahul
>
>
>
> On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi <
> raffaele.p.guidi@gmail.com> wrote:
>
>> Well, regarding your first point - it is not; you just have to set the
>> -XX:MaxDirectMemorySize JVM option to whatever you need (the default is the
>> heap size, just increase it) and split it into <2GB chunks. Regarding the
>> second one, there should be no problem; we try to keep locks to a minimum.
>> Should you have trouble, there can be some tuning to put in place -
>> let us know.
>>
>> Ciao,
>>     R
>> Il giorno 23/nov/2011 03:56, "Rahul Thakur" <ra...@gmail.com>
>> ha scritto:
>>
>> Greetings everyone!
>>>
>>> It's good to see DirectMemory as a member project at Apache \o/. I am very
>>> keen to see it graduate to a top-level project, so please keep up the good
>>> work!
>>>
>>> I have been consuming snapshots of DirectMemory for a data intensive app
>>> requiring off-heap storage. I have hit a couple of blockers:
>>>
>>> (a)  If the idea behind off-heap storage is not to be limited by the RAM
>>> available to the VM, then why can't the buffer size provided to Cache.init(...)
>>> be more than the RAM available to the VM?
>>>
>>> (b)  The application requirement in my case is a concurrent Cache
>>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>>
>>> Sorry about cross-posting this - just wanted to make sure this doesn't
>>> drop off the radar :)
>>>
>>> Look forward to your responses.
>>>
>>> Many thanks,
>>>
>>> Rahul
>>>
>>>
>>>
>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by Rahul Thakur <ra...@gmail.com>.
Thanks Raffaele.

I tried your suggestions. I have a couple more queries on which I'd really
appreciate your inputs:

(a)  I set -XX:MaxDirectMemorySize=6g on my machine, which has at most 4GB
of RAM; the machine goes dog slow when I try to run tests with any value
larger than the physical RAM available. It seems like the JVM
initialization takes a relatively long time with that flag value set; the
actual JUnit tests seem to run quickly though.

(b)  I noticed that in the OffHeapMemoryBuffer implementation,
ByteBuffer.allocateDirect(int) is used to allocate memory. Would using a
MappedByteBuffer help rather than ByteBuffer?
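
For reference, the difference between the two buffer kinds can be seen in a
small standalone sketch (this is plain JDK code, not DirectMemory code):
ByteBuffer.allocateDirect reserves RAM outside the heap and is counted
against -XX:MaxDirectMemorySize, while a MappedByteBuffer is backed by a file
through the OS page cache, which is why it can exceed physical RAM but may
touch disk on access:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class BufferKinds {
    public static void main(String[] args) throws Exception {
        // RAM-backed, off-heap; counted against -XX:MaxDirectMemorySize.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);
        direct.putInt(0, 42);

        // File-backed via the OS page cache; may page out to disk.
        File f = File.createTempFile("demo", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            mapped.putInt(0, 42);
            System.out.println(direct.getInt(0) == mapped.getInt(0)); // true
        }
    }
}
```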

The application is expected to use data which can be hundreds (possibly
thousands) of gigabytes and it would be preferable to keep only a subset of
data loaded in the JVM heap.

Look forward to your suggestions.

Thanks,

Rahul



On Wed, Nov 23, 2011 at 12:36 PM, Raffaele P. Guidi <
raffaele.p.guidi@gmail.com> wrote:

> Well, regarding your first point - it is not; you just have to set the
> -XX:MaxDirectMemorySize JVM option to whatever you need (the default is the
> heap size, just increase it) and split it into <2GB chunks. Regarding the
> second one, there should be no problem; we try to keep locks to a minimum.
> Should you have trouble, there can be some tuning to put in place -
> let us know.
>
> Ciao,
>     R
> Il giorno 23/nov/2011 03:56, "Rahul Thakur" <ra...@gmail.com>
> ha scritto:
>
> Greetings everyone!
>>
>> It's good to see DirectMemory as a member project at Apache \o/. I am very
>> keen to see it graduate to a top-level project, so please keep up the good
>> work!
>>
>> I have been consuming snapshots of DirectMemory for a data intensive app
>> requiring off-heap storage. I have hit a couple of blockers:
>>
>> (a)  If the idea behind off-heap storage is not to be limited by the RAM
>> available to the VM, then why can't the buffer size provided to Cache.init(...)
>> be more than the RAM available to the VM?
>>
>> (b)  The application requirement in my case is a concurrent Cache
>> that can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>>
>> Sorry about cross-posting this - just wanted to make sure this doesn't
>> drop off the radar :)
>>
>> Look forward to your responses.
>>
>> Many thanks,
>>
>> Rahul
>>
>>
>>

Re: Why is Cache.init(...) limited by the amount of memory available to VM

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
Well, regarding your first point - it is not; you just have to set the
-XX:MaxDirectMemorySize JVM option to whatever you need (the default is the
heap size, just increase it) and split it into <2GB chunks. Regarding the
second one, there should be no problem; we try to keep locks to a minimum.
Should you have trouble, there can be some tuning to put in place -
let us know.
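
The chunking idea can be sketched like this (illustrative only, not
DirectMemory's actual allocator, and with tiny sizes so it runs anywhere).
Each direct buffer must stay below 2GB because ByteBuffer is indexed by an
int; a larger off-heap area is therefore built from several chunks:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class ChunkedDirectAlloc {
    // Allocate a logical off-heap area of totalBytes as a list of direct
    // buffers of at most chunkBytes each. Every allocateDirect call counts
    // against the -XX:MaxDirectMemorySize budget.
    static List<ByteBuffer> allocate(long totalBytes, int chunkBytes) {
        List<ByteBuffer> chunks = new ArrayList<>();
        long remaining = totalBytes;
        while (remaining > 0) {
            int size = (int) Math.min(remaining, chunkBytes);
            chunks.add(ByteBuffer.allocateDirect(size));
            remaining -= size;
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Small sizes for demonstration; a real cache would use chunks
        // just under the 2GB limit.
        List<ByteBuffer> chunks = allocate(10L * 1024 * 1024, 4 * 1024 * 1024);
        System.out.println(chunks.size()); // 3 chunks: 4MB + 4MB + 2MB
    }
}
```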

Ciao,
    R
Il giorno 23/nov/2011 03:56, "Rahul Thakur" <ra...@gmail.com>
ha scritto:

> Greetings everyone!
>
> It's good to see DirectMemory as a member project at Apache \o/. I am very
> keen to see it graduate to a top-level project, so please keep up the good
> work!
>
> I have been consuming snapshots of DirectMemory for a data intensive app
> requiring off-heap storage. I have hit a couple of blockers:
>
> (a)  If the idea behind off-heap storage is not to be limited by the RAM
> available to the VM, then why can't the buffer size provided to Cache.init(...)
> be more than the RAM available to the VM?
>
> (b)  The application requirement in my case is a concurrent Cache that
> can handle ~100 hits. How can I achieve that with DirectMemory Cache?
>
> Sorry about cross-posting this - just wanted to make sure this doesn't
> drop off the radar :)
>
> Look forward to your responses.
>
> Many thanks,
>
> Rahul
>
>
>