Posted to dev@directmemory.apache.org by Akash Ashok <th...@gmail.com> on 2011/10/20 09:31:26 UTC

Memory Deallocation Issue

I currently see that we have

Cache.java
public static void clear() {
    for (OffHeapMemoryBuffer buffer : buffers) {
        buffer.clear();
    }
    activeBuffer = buffers.get(0);
}

OffHeapMemoryBuffer.java

public void clear() {
    allocationErrors = 0;
    pointers.clear();
    createAndAddFirstPointer();
    buffer.clear();
    used.set(0);
}

buffer.clear() doesn't actually deallocate the memory. I assume we currently
don't have the ability to de-allocate memory, which means that we can't resize
the cache once it is instantiated, even if we want to. I feel we should have
this feature, as it would give deeper control over the Cache.
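
For illustration, a minimal standalone snippet of the behaviour described above
(the 64 MB figure is arbitrary):

import java.nio.ByteBuffer;

public class ClearDoesNotFree {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
        buf.putLong(42L);
        buf.clear(); // resets position, limit and mark only -- no native memory is released
        // The 64 MB stay reserved until the DirectByteBuffer object itself is
        // garbage collected, so clear() alone cannot shrink or free the cache.
        System.out.println("capacity after clear(): " + buf.capacity());
    }
}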



Cheers,
Akash A

Re: Memory Deallocation Issue

Posted by Akash Ashok <th...@gmail.com>.
On Thu, Oct 20, 2011 at 4:54 PM, Tim Williams <wi...@gmail.com> wrote:

> On Thu, Oct 20, 2011 at 4:08 AM, Akash Ashok <th...@gmail.com>
> wrote:
> > There is a way to free this memory. We will need to extend ByteBuffer to
> do
> > this. finalize() method actually deallocates memory. But its a protected
> > member.
> >
> http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java
> >
> > So we might have to allocate a new Buffer and terminate the old one. This
> is
> > gonna be expensive but since this is not done too often it should be ok.
> >
> > Have opened a JIRA for this.
> > https://issues.apache.org/jira/browse/DIRECTMEMORY-24
> >
> >  Also could you please give me developer access and add me as a
> contributor
> > so that I can assign tickets and submit patches ?
>
> Hi Akash,
> I just added you as a contributor in JIRA...
>
Thanks a lot, Tim.

Re: Memory Deallocation Issue

Posted by Tim Williams <wi...@gmail.com>.
On Thu, Oct 20, 2011 at 4:08 AM, Akash Ashok <th...@gmail.com> wrote:
> There is a way to free this memory. We will need to extend ByteBuffer to do
> this. finalize() method actually deallocates memory. But its a protected
> member.
> http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java
>
> So we might have to allocate a new Buffer and terminate the old one. This is
> gonna be expensive but since this is not done too often it should be ok.
>
> Have opened a JIRA for this.
> https://issues.apache.org/jira/browse/DIRECTMEMORY-24
>
>  Also could you please give me developer access and add me as a contributor
> so that I can assign tickets and submit patches ?

Hi Akash,
I just added you as a contributor in JIRA...

Thanks,
--tim

Re: Memory Deallocation Issue

Posted by Akash Ashok <th...@gmail.com>.
Just came across a better way to deallocate memory other than finalization:
the Cleaner class. It uses phantom references and is invoked directly by the
reference handler thread, so it is super lightweight. I shall use this for
the deallocation.

For more information
http://www.docjar.org/html/api/sun/misc/Cleaner.java.html
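
For illustration, a minimal sketch of that approach (it relies on internal
Sun/Oracle JVM APIs, so it is not portable, and the helper class name here is
made up):

import java.nio.ByteBuffer;

import sun.misc.Cleaner;        // internal Sun/Oracle API
import sun.nio.ch.DirectBuffer; // internal interface implemented by DirectByteBuffer

public final class DirectBufferUtils {

    private DirectBufferUtils() {
    }

    // Frees the native memory behind a direct buffer immediately, instead of
    // waiting for the DirectByteBuffer object to be garbage collected.
    public static void free(ByteBuffer buffer) {
        if (buffer == null || !buffer.isDirect()) {
            return;
        }
        Cleaner cleaner = ((DirectBuffer) buffer).cleaner();
        if (cleaner != null) {  // duplicates and slices have no cleaner of their own
            cleaner.clean();    // runs the deallocator and returns the memory to the O/S
        }
    }
}

The buffer must not be touched after the call, since any further read or write
would hit freed memory.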

Cheers,
Akash A

On Thu, Oct 20, 2011 at 10:46 PM, Akash Ashok <th...@gmail.com> wrote:

>
>
> On Thu, Oct 20, 2011 at 3:13 PM, Ashish <pa...@gmail.com> wrote:
>
>> On Thu, Oct 20, 2011 at 2:32 PM, Akash Ashok <th...@gmail.com>
>> wrote:
>> > On Thu, Oct 20, 2011 at 2:13 PM, Ashish <pa...@gmail.com>
>> wrote:
>> >
>> >> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
>> >> <ra...@gmail.com> wrote:
>> >> > Gooood news! As with every not-so-well documented piece of software I
>> >> should
>> >> > have read the code before taking wrong assumptions (or at least take
>> a
>> >> look
>> >> > at stackoverflow ;) ). I think we should ask our mentors to assign
>> >> developer
>> >> > rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie
>> ;)
>> >> >
>> >> > Thanks,
>> >> >    Raffaele
>> >>
>> >> I again have to disagree on this feature. Why would you need to have
>> >> to deallocate memory? You should know how much you need.
>> >> Its always better to have a contiguous memory allocated. It works
>> >> well. Dynamically resizing will pose challenges and add to performance
>> >> issues.
>> >>
>> >> From cache perspective, we are anyways clearing the element, making
>> >> way for new element.
>> >>
>> >> IMHO, I see offheap as a big chunk of memory which is pre-allocated
>> >> and the MemoryManager we have written should deal with all the object
>> >> management. We can manage memory as chunks or realize maps in native
>> >> memory, is all upto the design we choose.
>> >>
>> >> This is very much an importand feature I believe. Assume in production
>> you
>> > made the
>> > wrong decision of how much memory was pre-allocated, then you shouldn't
>> be
>> > charged
>> > with the penalty of being unable to use that memory right ? Even though
>> this
>> > is very expensive
>> > the feature should be available but documented well enough, warning
>> against
>> > its use.
>>
>> Well as far as I handle Ops, this is not how things work.
>> Ops goes through a detailed capacity planning. Even before Ops, things
>> are tested in staging environment.
>>
>> So when I take caches to production, I always calculate
>> 1. How many elements do I need to store
>> 2. What's the average size of each element and extrapolate to how much
>> memory is needed
>> 3. What level to set the eviction and how much to evict
>> 4. Do I need expiry
>> 5. For read-only caches, don't want eviction to happen, so tune them
>> accordingly
>>
>> Need less to mention the GC tuning part.
>>
>> Again this is not hard written rule. We all have our preferences &
>> experiences.
>>
>> From an end user perspective, I want to use DirectMemory so
>> a. need a stable release
>> b. need some benchmark numbers
>>
>  +1 on these recommendations. We should never sacrifice on stability at any
> cost.
>
>
>>
>> And I am not against this feature, so please go ahead and implement it :)
>> I feel we can take an approach of benchmarking these and write
>> recommendations to the wiki. wdyt?
>>
>> Sounds Cool
> 1. Benchmarking and Recommendations
> 2. Examples and How to
> Quite crucial.
>
>  >
>> > If you are concerned about memory fragmentation, It wouldn't lead to a
>> lot
>> > of fragmentaion
>> > if we deallocate and re allocate contiguous blocks right. I am under the
>> >  assumption that
>> >  allocateDirect allocates contiguous blocks of memory.
>>
>> It tried to allocate. Say if I ask for 64G of direct memory, it shall
>> try to allocate contiguous memory.
>> Now if we allocate and deallocate, it may not be contiguous. As OS
>> can't predict when next allotment request shall come in.
>>
>> Hmmmm. Something to think about :)
>
> Cheers,
> Akash A
>

Re: Memory Deallocation Issue

Posted by Akash Ashok <th...@gmail.com>.
On Thu, Oct 20, 2011 at 3:13 PM, Ashish <pa...@gmail.com> wrote:

> On Thu, Oct 20, 2011 at 2:32 PM, Akash Ashok <th...@gmail.com>
> wrote:
> > On Thu, Oct 20, 2011 at 2:13 PM, Ashish <pa...@gmail.com> wrote:
> >
> >> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
> >> <ra...@gmail.com> wrote:
> >> > Gooood news! As with every not-so-well documented piece of software I
> >> should
> >> > have read the code before taking wrong assumptions (or at least take a
> >> look
> >> > at stackoverflow ;) ). I think we should ask our mentors to assign
> >> developer
> >> > rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie
> ;)
> >> >
> >> > Thanks,
> >> >    Raffaele
> >>
> >> I again have to disagree on this feature. Why would you need to have
> >> to deallocate memory? You should know how much you need.
> >> Its always better to have a contiguous memory allocated. It works
> >> well. Dynamically resizing will pose challenges and add to performance
> >> issues.
> >>
> >> From cache perspective, we are anyways clearing the element, making
> >> way for new element.
> >>
> >> IMHO, I see offheap as a big chunk of memory which is pre-allocated
> >> and the MemoryManager we have written should deal with all the object
> >> management. We can manage memory as chunks or realize maps in native
> >> memory, is all upto the design we choose.
> >>
> >> This is very much an importand feature I believe. Assume in production
> you
> > made the
> > wrong decision of how much memory was pre-allocated, then you shouldn't
> be
> > charged
> > with the penalty of being unable to use that memory right ? Even though
> this
> > is very expensive
> > the feature should be available but documented well enough, warning
> against
> > its use.
>
> Well as far as I handle Ops, this is not how things work.
> Ops goes through a detailed capacity planning. Even before Ops, things
> are tested in staging environment.
>
> So when I take caches to production, I always calculate
> 1. How many elements do I need to store
> 2. What's the average size of each element and extrapolate to how much
> memory is needed
> 3. What level to set the eviction and how much to evict
> 4. Do I need expiry
> 5. For read-only caches, don't want eviction to happen, so tune them
> accordingly
>
> Need less to mention the GC tuning part.
>
> Again this is not hard written rule. We all have our preferences &
> experiences.
>
> From an end user perspective, I want to use DirectMemory so
> a. need a stable release
> b. need some benchmark numbers
>
 +1 on these recommendations. We should never sacrifice stability, at any
cost.


>
> And I am not against this feature, so please go ahead and implement it :)
> I feel we can take an approach of benchmarking these and write
> recommendations to the wiki. wdyt?
>
Sounds cool!
1. Benchmarking and recommendations
2. Examples and how-tos
Both are quite crucial.

>
> > If you are concerned about memory fragmentation, It wouldn't lead to a
> lot
> > of fragmentaion
> > if we deallocate and re allocate contiguous blocks right. I am under the
> >  assumption that
> >  allocateDirect allocates contiguous blocks of memory.
>
> It tried to allocate. Say if I ask for 64G of direct memory, it shall
> try to allocate contiguous memory.
> Now if we allocate and deallocate, it may not be contiguous. As OS
> can't predict when next allotment request shall come in.
>
Hmmmm. Something to think about :)

Cheers,
Akash A

Re: Memory Deallocation Issue

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
Guys, you are both right - the current allocation strategy performs far better
than per-item allocation - but not being able to de-allocate memory is of
course bad in real life (and in an always-on/hot-deployment environment such
as OSGi it is even worse). This change (great job finding it) will allow us to
add a dispose() or finalize() method that gives memory back to the O/S between
deploys, making DirectMemory more stable and safe to use in enterprise
setups :)

Ciao,
    R

On Thursday, October 20, 2011, Ashish <pa...@gmail.com> wrote:
> On Thu, Oct 20, 2011 at 2:32 PM, Akash Ashok <th...@gmail.com>
wrote:
>> On Thu, Oct 20, 2011 at 2:13 PM, Ashish <pa...@gmail.com> wrote:
>>
>>> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
>>> <ra...@gmail.com> wrote:
>>> > Gooood news! As with every not-so-well documented piece of software I
>>> should
>>> > have read the code before taking wrong assumptions (or at least take a
>>> look
>>> > at stackoverflow ;) ). I think we should ask our mentors to assign
>>> developer
>>> > rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie
;)
>>> >
>>> > Thanks,
>>> >    Raffaele
>>>
>>> I again have to disagree on this feature. Why would you need to have
>>> to deallocate memory? You should know how much you need.
>>> Its always better to have a contiguous memory allocated. It works
>>> well. Dynamically resizing will pose challenges and add to performance
>>> issues.
>>>
>>> From cache perspective, we are anyways clearing the element, making
>>> way for new element.
>>>
>>> IMHO, I see offheap as a big chunk of memory which is pre-allocated
>>> and the MemoryManager we have written should deal with all the object
>>> management. We can manage memory as chunks or realize maps in native
>>> memory, is all upto the design we choose.
>>>
>>> This is very much an importand feature I believe. Assume in production
you
>> made the
>> wrong decision of how much memory was pre-allocated, then you shouldn't
be
>> charged
>> with the penalty of being unable to use that memory right ? Even though
this
>> is very expensive
>> the feature should be available but documented well enough, warning
against
>> its use.
>
> Well as far as I handle Ops, this is not how things work.
> Ops goes through a detailed capacity planning. Even before Ops, things
> are tested in staging environment.
>
> So when I take caches to production, I always calculate
> 1. How many elements do I need to store
> 2. What's the average size of each element and extrapolate to how much
> memory is needed
> 3. What level to set the eviction and how much to evict
> 4. Do I need expiry
> 5. For read-only caches, don't want eviction to happen, so tune them
accordingly
>
> Need less to mention the GC tuning part.
>
> Again this is not hard written rule. We all have our preferences &
> experiences.
>
> From an end user perspective, I want to use DirectMemory so
> a. need a stable release
> b. need some benchmark numbers
>
> And I am not against this feature, so please go ahead and implement it :)
> I feel we can take an approach of benchmarking these and write
> recommendations to the wiki. wdyt?
>
>>
>> If you are concerned about memory fragmentation, It wouldn't lead to a
lot
>> of fragmentaion
>> if we deallocate and re allocate contiguous blocks right. I am under the
>>  assumption that
>>  allocateDirect allocates contiguous blocks of memory.
>
> It tried to allocate. Say if I ask for 64G of direct memory, it shall
> try to allocate contiguous memory.
> Now if we allocate and deallocate, it may not be contiguous. As OS
> can't predict when next allotment request shall come in.
>
>>
>>
>> @ashok - You don't need dev access to submit patches. Create a JIRA
>>> account and submit patches. New committers @ASF are voted by PMC based
>>> on their contribution :)
>>>
>>> I never asked to add me as a committer :) I asked for contributor access
>> where in
>> I have the permission to assign a ticket to myself and SubmitPatch option
on
>> JIRA.
>> On HBase only after I was added as a contributor I got these options.
This
>> was the
>> rationale behind my request.
>>
>
> Ahh.. my mistake.. Sorry ..
>
> cheers
> ashish
>

Re: Memory Deallocation Issue

Posted by Ashish <pa...@gmail.com>.
On Thu, Oct 20, 2011 at 2:32 PM, Akash Ashok <th...@gmail.com> wrote:
> On Thu, Oct 20, 2011 at 2:13 PM, Ashish <pa...@gmail.com> wrote:
>
>> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
>> <ra...@gmail.com> wrote:
>> > Gooood news! As with every not-so-well documented piece of software I
>> should
>> > have read the code before taking wrong assumptions (or at least take a
>> look
>> > at stackoverflow ;) ). I think we should ask our mentors to assign
>> developer
>> > rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie ;)
>> >
>> > Thanks,
>> >    Raffaele
>>
>> I again have to disagree on this feature. Why would you need to have
>> to deallocate memory? You should know how much you need.
>> Its always better to have a contiguous memory allocated. It works
>> well. Dynamically resizing will pose challenges and add to performance
>> issues.
>>
>> From cache perspective, we are anyways clearing the element, making
>> way for new element.
>>
>> IMHO, I see offheap as a big chunk of memory which is pre-allocated
>> and the MemoryManager we have written should deal with all the object
>> management. We can manage memory as chunks or realize maps in native
>> memory, is all upto the design we choose.
>>
>> This is very much an importand feature I believe. Assume in production you
> made the
> wrong decision of how much memory was pre-allocated, then you shouldn't be
> charged
> with the penalty of being unable to use that memory right ? Even though this
> is very expensive
> the feature should be available but documented well enough, warning against
> its use.

Well, as far as I handle Ops, this is not how things work.
Ops goes through detailed capacity planning, and even before Ops, things
are tested in a staging environment.

So when I take caches to production, I always work out:
1. How many elements do I need to store?
2. What's the average size of each element? Extrapolate from that to how much
memory is needed (see the back-of-the-envelope sketch after this list).
3. What level do I set eviction at, and how much do I evict?
4. Do I need expiry?
5. For read-only caches, I don't want eviction to happen, so I tune them accordingly.
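
As a back-of-the-envelope example for step 2 (the numbers are purely
illustrative):

long entries = 10000000L;                         // 10 million elements to cache
long avgEntryBytes = 2 * 1024L;                   // average serialized size of 2 KB
long payloadBytes = entries * avgEntryBytes;      // roughly 20 GB of raw payload
long toPreAllocate = (long) (payloadBytes * 1.2); // plus ~20% headroom for pointers and overhead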

Needless to mention the GC tuning part.

Again, this is not a hard and fast rule. We all have our preferences and
experiences.

From an end-user perspective, I want to use DirectMemory, so I:
a. need a stable release
b. need some benchmark numbers

And I am not against this feature, so please go ahead and implement it :)
I feel we can take the approach of benchmarking these and writing the
recommendations to the wiki. wdyt?

>
> If you are concerned about memory fragmentation, It wouldn't lead to a lot
> of fragmentaion
> if we deallocate and re allocate contiguous blocks right. I am under the
>  assumption that
>  allocateDirect allocates contiguous blocks of memory.

It tries to. Say I ask for 64G of direct memory: it shall try to allocate
contiguous memory. But if we allocate and deallocate, the result may not be
contiguous, as the OS can't predict when the next allocation request will
come in.
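
One way to soften this, in line with the list of OffHeapMemoryBuffers that
Cache already keeps, is to carve the pre-allocated memory into several smaller
slabs, so that freeing and later re-acquiring one slab only needs a modestly
sized contiguous region (the sizes below are illustrative):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class SlabAllocationSketch {
    public static void main(String[] args) {
        int slabSize = 64 * 1024 * 1024;                   // 64 MB per slab
        List<ByteBuffer> slabs = new ArrayList<ByteBuffer>();
        for (int i = 0; i < 8; i++) {                      // 8 slabs, ~512 MB in total
            slabs.add(ByteBuffer.allocateDirect(slabSize));
        }
        // Shrinking the cache later means releasing individual slabs; growing it
        // means asking the OS for another 64 MB contiguous block, not one huge region.
    }
}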

>
>
> @ashok - You don't need dev access to submit patches. Create a JIRA
>> account and submit patches. New committers @ASF are voted by PMC based
>> on their contribution :)
>>
>> I never asked to add me as a committer :) I asked for contributor access
> where in
> I have the permission to assign a ticket to myself and SubmitPatch option on
> JIRA.
> On HBase only after I was added as a contributor I got these options. This
> was the
> rationale behind my request.
>

Ahh.. my mistake.. Sorry ..

cheers
ashish

Re: Memory Deallocation Issue

Posted by Akash Ashok <th...@gmail.com>.
On Thu, Oct 20, 2011 at 2:13 PM, Ashish <pa...@gmail.com> wrote:

> On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
> <ra...@gmail.com> wrote:
> > Gooood news! As with every not-so-well documented piece of software I
> should
> > have read the code before taking wrong assumptions (or at least take a
> look
> > at stackoverflow ;) ). I think we should ask our mentors to assign
> developer
> > rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie ;)
> >
> > Thanks,
> >    Raffaele
>
> I again have to disagree on this feature. Why would you need to have
> to deallocate memory? You should know how much you need.
> Its always better to have a contiguous memory allocated. It works
> well. Dynamically resizing will pose challenges and add to performance
> issues.
>
> From cache perspective, we are anyways clearing the element, making
> way for new element.
>
> IMHO, I see offheap as a big chunk of memory which is pre-allocated
> and the MemoryManager we have written should deal with all the object
> management. We can manage memory as chunks or realize maps in native
> memory, is all upto the design we choose.
>
This is a very important feature, I believe. Assume that in production you
made the wrong decision about how much memory to pre-allocate; you shouldn't
then be charged with the penalty of being unable to use that memory, right?
Even though this is very expensive, the feature should be available, just
documented well enough, warning against its use.

If you are concerned about memory fragmentation, it wouldn't lead to a lot of
fragmentation if we deallocate and re-allocate contiguous blocks, right? I am
under the assumption that allocateDirect allocates contiguous blocks of
memory.


> @ashok - You don't need dev access to submit patches. Create a JIRA
> account and submit patches. New committers @ASF are voted by PMC based
> on their contribution :)
>
I never asked to be added as a committer :) I asked for contributor access,
wherein I have permission to assign a ticket to myself and get the Submit
Patch option on JIRA. On HBase, only after I was added as a contributor did I
get these options. That was the rationale behind my request.

Cheers,
Akash A


> thanks
> ashish
>

Re: Memory Deallocation Issue

Posted by Ashish <pa...@gmail.com>.
On Thu, Oct 20, 2011 at 1:57 PM, Raffaele P. Guidi
<ra...@gmail.com> wrote:
> Gooood news! As with every not-so-well documented piece of software I should
> have read the code before taking wrong assumptions (or at least take a look
> at stackoverflow ;) ). I think we should ask our mentors to assign developer
> rights. Or is it to be filed to INFRA? Sorry, I'm still an ASF rookie ;)
>
> Thanks,
>    Raffaele

I again have to disagree on this feature. Why would you need to deallocate
memory? You should know how much you need. It's always better to have
contiguous memory allocated; it works well. Dynamic resizing will pose
challenges and add performance issues.

From a cache perspective, we are anyway clearing the element, making way
for a new element.

IMHO, I see off-heap as a big chunk of memory which is pre-allocated, and the
MemoryManager we have written should deal with all the object management.
Whether we manage memory as chunks or realize maps in native memory is all
up to the design we choose.

@ashok - You don't need dev access to submit patches. Create a JIRA account
and submit patches. New committers @ASF are voted in by the PMC based on
their contributions :)

thanks
ashish

Re: Memory Deallocation Issue

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
Gooood news! As with every not-so-well-documented piece of software, I should
have read the code before making wrong assumptions (or at least taken a look
at stackoverflow ;) ). I think we should ask our mentors to assign developer
rights. Or is it to be filed with INFRA? Sorry, I'm still an ASF rookie ;)

Thanks,
    Raffaele

On Thu, Oct 20, 2011 at 10:08 AM, Akash Ashok <th...@gmail.com> wrote:

> There is a way to free this memory. We will need to extend ByteBuffer to do
> this. finalize() method actually deallocates memory. But its a protected
> member.
>
> http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java
>
> So we might have to allocate a new Buffer and terminate the old one. This
> is
> gonna be expensive but since this is not done too often it should be ok.
>
> Have opened a JIRA for this.
> https://issues.apache.org/jira/browse/DIRECTMEMORY-24
>
>  Also could you please give me developer access and add me as a contributor
> so that I can assign tickets and submit patches ?
>
> Cheers,
> Akash A
>
> On Thu, Oct 20, 2011 at 1:26 PM, Raffaele P. Guidi <
> raffaele.p.guidi@gmail.com> wrote:
>
> > It is not casual. I didn't find a way to free a DirectBuffer except than
> > stopping the jvm. Please double check it, ten talented programmers are
> > better than a dumb one ;-)
> >
> > On Thursday, October 20, 2011, Akash Ashok <th...@gmail.com>
> wrote:
> > > I currently see that we have
> > >
> > > Cache.java
> > > public static void clear() {
> > > for (OffHeapMemoryBuffer buffer : buffers) {
> > > buffer.clear();
> > > }
> > > activeBuffer = buffers.get(0);
> > > }
> > >
> > > OffHeapMemoryBuffer.java
> > >
> > > public void clear()  {
> > > allocationErrors = 0;
> > > pointers.clear();
> > > createAndAddFirstPointer();
> > > buffer.clear();
> > > used.set(0);
> > > }
> > >
> > > buffer.clear() doesn't actually deallocate the memory. I assume
> currently
> > we
> > > dnt have the feature of de-allocating memory which means that we can't
> > > resize the cache once instantiated even if we want to. I feel we should
> > have
> > > this feature  which would give deeper control on the Cache.
> > >
> > >
> > >
> > > Cheers,
> > > Akash A
> > >
> >
>

Re: Memory Deallocation Issue

Posted by Akash Ashok <th...@gmail.com>.
There is a way to free this memory. We will need to extend ByteBuffer to do
this: the finalize() method actually deallocates the memory, but it's a
protected member.
http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java

So we might have to allocate a new Buffer and terminate the old one. This is
going to be expensive, but since it is not done too often it should be OK.
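
For illustration, a rough sketch of that idea as a method on
OffHeapMemoryBuffer (hypothetical; free() stands in for whatever deallocation
mechanism DIRECTMEMORY-24 ends up with):

public void resize(int newCapacityInBytes) {
    ByteBuffer old = buffer;
    buffer = ByteBuffer.allocateDirect(newCapacityInBytes); // allocate the replacement first
    clear();                                                // reset pointers/used against the new buffer
    free(old);                                              // then release the old allocation
}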

I have opened a JIRA for this.
https://issues.apache.org/jira/browse/DIRECTMEMORY-24

Also, could you please give me developer access and add me as a contributor,
so that I can assign tickets and submit patches?

Cheers,
Akash A

On Thu, Oct 20, 2011 at 1:26 PM, Raffaele P. Guidi <
raffaele.p.guidi@gmail.com> wrote:

> It is not casual. I didn't find a way to free a DirectBuffer except than
> stopping the jvm. Please double check it, ten talented programmers are
> better than a dumb one ;-)
>
> On Thursday, October 20, 2011, Akash Ashok <th...@gmail.com> wrote:
> > I currently see that we have
> >
> > Cache.java
> > public static void clear() {
> > for (OffHeapMemoryBuffer buffer : buffers) {
> > buffer.clear();
> > }
> > activeBuffer = buffers.get(0);
> > }
> >
> > OffHeapMemoryBuffer.java
> >
> > public void clear()  {
> > allocationErrors = 0;
> > pointers.clear();
> > createAndAddFirstPointer();
> > buffer.clear();
> > used.set(0);
> > }
> >
> > buffer.clear() doesn't actually deallocate the memory. I assume currently
> we
> > dnt have the feature of de-allocating memory which means that we can't
> > resize the cache once instantiated even if we want to. I feel we should
> have
> > this feature  which would give deeper control on the Cache.
> >
> >
> >
> > Cheers,
> > Akash A
> >
>

Re: Memory Deallocation Issue

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
It is not accidental. I didn't find a way to free a DirectBuffer except by
stopping the JVM. Please double-check it; ten talented programmers are better
than a dumb one ;-)

On Thursday, October 20, 2011, Akash Ashok <th...@gmail.com> wrote:
> I currently see that we have
>
> Cache.java
> public static void clear() {
> for (OffHeapMemoryBuffer buffer : buffers) {
> buffer.clear();
> }
> activeBuffer = buffers.get(0);
> }
>
> OffHeapMemoryBuffer.java
>
> public void clear()  {
> allocationErrors = 0;
> pointers.clear();
> createAndAddFirstPointer();
> buffer.clear();
> used.set(0);
> }
>
> buffer.clear() doesn't actually deallocate the memory. I assume currently
we
> dnt have the feature of de-allocating memory which means that we can't
> resize the cache once instantiated even if we want to. I feel we should
have
> this feature  which would give deeper control on the Cache.
>
>
>
> Cheers,
> Akash A
>