Posted to oak-dev@jackrabbit.apache.org by Andrei Dulceanu <an...@gmail.com> on 2017/08/21 07:03:21 UTC

Backporting cold standby chunking to 1.6.x?

Hi all,

With [0] and [1], blob chunking in cold standby was addressed in 1.8. I
think we now have a stable and robust solution which got rid of the 2.14
GB/blob limitation. As a positive side effect, the memory footprint needed
for a successful sync of a big blob was reduced considerably: while
previously 4 GB of heap memory were needed for syncing a 1 GB blob, now
only 512 MB are needed for the same operation.
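
To make the memory argument more concrete, here is a minimal, purely
illustrative sketch of the chunking idea (this is not the code from
[0]/[1], and all names are made up): the blob is streamed in fixed-size
chunks, so the heap footprint is bounded by the chunk size rather than
by the blob size.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Hypothetical helper, for illustration only; not Oak API.
    class ChunkedBlobCopy {
        // Illustrative chunk size; the real value is a configuration detail.
        static final int CHUNK_SIZE = 1024 * 1024; // 1 MB

        static void copy(InputStream blob, OutputStream target) throws IOException {
            byte[] chunk = new byte[CHUNK_SIZE];
            int read;
            while ((read = blob.read(chunk)) != -1) {
                // Send one bounded chunk at a time instead of buffering the whole blob.
                target.write(chunk, 0, read);
            }
        }
    }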

Considering all of the above, I was wondering whether it would make sense
to backport these fixes to 1.6.x. I know that traditionally we only
backport bug fixes, but depending on how you look at it, the limitation was
also kind of a bug :). I am only considering 1.6.x as a candidate branch
because the cold standby code in 1.8 and 1.6.x is 98% the same.

Thanks,

Andrei

[0] https://issues.apache.org/jira/browse/OAK-5902

[1] https://issues.apache.org/jira/browse/OAK-6565

Re: Backporting cold standby chunking to 1.6.x?

Posted by Michael Dürig <md...@apache.org>.
Same here. Let's wait for a concrete case. Hopefully by then the feature
will already have had a bit of "real world" coverage.

Michael


On 21.08.17 09:12, Francesco Mari wrote:
> I wouldn't backport unless strictly necessary. In my opinion, this is
> not a bug but an improvement.

Re: Backporting cold standby chunking to 1.6.x?

Posted by Francesco Mari <ma...@gmail.com>.
I wouldn't backport unless strictly necessary. In my opinion, this is
not a bug but an improvement.
