Posted to user@geode.apache.org by David <fi...@gmail.com> on 2016/04/20 03:53:14 UTC

Unsubscribe

On Apr 19, 2016 8:39 PM, "Eugene Strokin" <eu...@strokin.info> wrote:

> Dan, thanks for the response. Yes, you're right, 512 MB of course. My mistake.
> The idea is to use as much disk space as possible. I understand the
> downside of using high compaction threshold. I'll play with that, and see
> how bad it could be.
> But what about eviction? Would Geode remove objects from the overflow
> automatically once it reaches a certain size?
> Ideally, I'd like to set up Geode to start kicking LRU objects out once
> the free disk space drops to 1Gb. Is that possible? If so, please point me
> in the right direction.
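> For reference, eviction in Geode is declared per region in cache.xml. A
> minimal sketch (the region name, disk store name, and entry limit below are
> placeholders, not a tested configuration):
>
> ```xml
> <!-- Keep at most 100000 entries in memory; overflow the rest to disk. -->
> <region name="fileCache">
>   <region-attributes data-policy="partition" disk-store-name="fileCacheStore">
>     <eviction-attributes>
>       <lru-entry-count maximum="100000" action="overflow-to-disk"/>
>     </eviction-attributes>
>   </region-attributes>
> </region>
> ```
>
> As far as I can tell, this LRU limit governs entries held in memory;
> entries already overflowed to disk are not automatically destroyed when
> free disk space runs low, which is the open question here.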
>
> Thanks again,
> Eugene
>
>
> On Tue, Apr 19, 2016 at 8:25 PM, Dan Smith <ds...@pivotal.io> wrote:
>
>> I'm guessing you mean 512MB of RAM, not KB? Otherwise, you are definitely
>> going to have problems :)
>>
>> Regarding conserving disk space - I think only allowing for 1 GB free
>> space is probably going to run into issues. I think you would be better off
>> having fewer droplets with more space if that's possible. And only leaving
>> 5% disk space for compaction and as a buffer to avoid running out of disk
>> is probably not enough.
>>
>> By default, Geode will compact oplogs when they get to be 50% garbage,
>> which means you may need up to 2X the amount of actual disk space. You can
>> configure the compaction-threshold to something like 95%, but that means
>> Geode will be doing a lot of extra work cleaning up garbage on disk.
>> Regardless, you'll probably want to tune the max-oplog-size down to
>> something much smaller than 1GB.
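>> For example, a disk store with a higher compaction threshold and a
>> smaller oplog size could be declared like this in cache.xml (the store
>> name and directory are placeholders; treat the values as a starting
>> point, not a recommendation):
>>
>> ```xml
>> <!-- Compact an oplog once 95% of it is garbage; cap each oplog at 64MB. -->
>> <disk-store name="fileCacheStore"
>>             compaction-threshold="95"
>>             max-oplog-size="64">
>>   <disk-dirs>
>>     <disk-dir>/var/geode/disk</disk-dir>
>>   </disk-dirs>
>> </disk-store>
>> ```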
>>
>> -Dan
>>
>> On Tue, Apr 19, 2016 at 4:26 PM, Eugene Strokin <eu...@strokin.info>
>> wrote:
>>
>>> Hello, I'm seriously considering using Geode as the core of a distributed
>>> file cache system. But I have a few questions.
>>> But first, this is what needs to be done: a scalable file system with an LRU
>>> eviction policy, utilizing the disk space as much as possible. The idea is
>>> to have around 50 small Droplets from DigitalOcean, which provide 512Kb
>>> RAM and 20Gb storage. The client should call the cluster and get a byte
>>> array by a key. If needed, the cluster should be expanded. The origin of
>>> the byte arrays is files from AWS S3.
>>> Looks like everything could be done using Geode, but:
>>> - It looks like compaction requires a lot of free hard drive space.
>>> All I can allow is about 1Gb. Would this work in my case? How could it be
>>> done?
>>> - Would the objects be evicted automatically from overflow storage
>>> using an LRU policy?
>>>
>>> Thanks in advance for your answers, ideas, suggestions.
>>> Eugene
>>>
>>
>>
>