Posted to user@zookeeper.apache.org by li li <li...@gmail.com> on 2010/04/14 04:22:09 UTC

the scale of the data in the node

Dear developer,
     We are doing research with ZooKeeper in our experiments, and I am
currently studying ZooKeeper's performance.
    We would like to know whether the size of the data written to a znode
affects write speed.
    For example, suppose we have 1 MB of data to write to a znode. Which of
the two following cases is better? Case 1: we write the 1 MB of data in a
single set operation. Case 2: we break the 1 MB into many sections of 128
bytes each and write the sections from several clients. Do these two cases
perform differently on writes? Which one is better?
    Thank you for reading; I look forward to your reply.
    With best wishes!

Lily

Re: the scale of the data in the node

Posted by Ted Dunning <te...@gmail.com>.
Writing a large amount of data in really small pieces is going to be slower
than writing it in larger pieces.

This might reverse at very large sizes.

But you should test this if you really need to know the correct answer.
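One way to run that test is a small benchmark that times the two cases from the question: one set of the whole payload versus one write per 128-byte chunk (1 MB / 128 B = 8192 separate quorum round-trips). This is only a sketch using the kazoo Python client; the server address, znode paths, and function names below are illustrative assumptions, not something from this thread. Note also that ZooKeeper's default jute.maxbuffer setting caps znode data at about 1 MB, so the single-write case is already at the limit.

```python
# Benchmark sketch for the two write strategies discussed above.
# Assumptions (not from the thread): a ZooKeeper server on 127.0.0.1:2181,
# the `kazoo` client library, and scratch paths under /bench.
import time


def chunk(data, size=128):
    """Split `data` into pieces of at most `size` bytes each."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def bench(zk, payload):
    # Case 1: write the whole payload in one set() call
    # (a single quorum round-trip for the data).
    zk.ensure_path("/bench/whole")
    t0 = time.time()
    zk.set("/bench/whole", payload)
    whole_secs = time.time() - t0

    # Case 2: one znode per 128-byte piece
    # (one round-trip, and one proposal, per piece).
    pieces = chunk(payload)
    t0 = time.time()
    for i, piece in enumerate(pieces):
        zk.create("/bench/part%08d" % i, piece, makepath=True)
    pieces_secs = time.time() - t0
    return whole_secs, pieces_secs


if __name__ == "__main__":
    from kazoo.client import KazooClient
    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    whole, pieces = bench(zk, b"x" * (1024 * 1024))  # 1 MB -> 8192 pieces
    print("one write: %.3fs, 8192 small writes: %.3fs" % (whole, pieces))
    zk.stop()
```

Splitting across several clients (as in the question) would spread the small writes over more sessions, but every write still goes through the single leader, so the per-write coordination cost does not go away.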

On Tue, Apr 13, 2010 at 7:22 PM, li li <li...@gmail.com> wrote:

> [original message quoted above trimmed]