Posted to dev@zookeeper.apache.org by Ted Dunning <te...@gmail.com> on 2010/12/10 20:35:38 UTC

multiple updates again

I was wondering if we shouldn't raise the suggestion of limited multiple
updates again.

My suggestion is that we allow a batch of updates in the style suggested
some time ago by Henry (a list of znodes, a list of versions, and a list of
content values), with the additional constraint that the total size of all
of the new content be limited to the same value as a single update.
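To make the shape of the proposal concrete, here is a minimal sketch of what
such a batch request could look like. All names here are hypothetical, and the
1 MB cap is an assumption standing in for whatever limit a single update
currently has (ZooKeeper's default jute.maxbuffer is 1 MB):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical batch request: parallel lists of paths, expected versions,
// and new data, with the combined payload capped at the same limit that a
// single update has (1 MB assumed here).
class MultiUpdate {
    static final int MAX_TOTAL_BYTES = 1 << 20; // assumed single-update limit

    final List<String> paths = new ArrayList<>();
    final List<Integer> versions = new ArrayList<>();
    final List<byte[]> datas = new ArrayList<>();
    private int totalBytes = 0;

    // Add one update to the batch; reject it if the combined payload
    // would exceed the single-update size limit.
    void add(String path, int expectedVersion, byte[] data) {
        if (totalBytes + data.length > MAX_TOTAL_BYTES) {
            throw new IllegalArgumentException(
                "batch exceeds single-update size limit");
        }
        paths.add(path);
        versions.add(expectedVersion);
        datas.add(data);
        totalBytes += data.length;
    }

    int size() {
        return paths.size();
    }
}
```

Since the cap equals the existing per-update limit, a full batch is no larger
on the wire than one maximal update today.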

The semantics of this would be to succeed if all updates succeed and fail
otherwise.

This could be implemented fairly straightforwardly in the master by adding
all the updates to the queue at once, or failing the whole batch. This is
almost identical to the current logic, in which items are added to the queue
one at a time if they are not going to fail due to upstream transactions
already in the queue. It would be good, of course, to verify that the
multi-update itself isn't going to cause a failure before even trying it
(updating the same znode twice, with the second update requiring a version
that the first update makes impossible, is clearly going to fail).
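That internal consistency check could be a single forward pass over the
batch, tracking the version each znode would have after the earlier updates
in the same batch. This is only a sketch of the idea (names are made up, and
it assumes every update in the batch carries a required version, per the
list-of-versions proposal above):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical pre-check: walk the batch front to back and reject it up
// front if a later update's required version cannot possibly match,
// because an earlier update to the same znode in this batch bumps it.
class BatchPrecheck {
    // paths.get(i) and versions.get(i) describe the i-th update; every
    // update is assumed to carry a required version.
    static boolean isInternallyConsistent(List<String> paths,
                                          List<Integer> versions) {
        Map<String, Integer> afterBatch = new HashMap<>();
        for (int i = 0; i < paths.size(); i++) {
            String path = paths.get(i);
            int expected = versions.get(i);
            Integer pending = afterBatch.get(path);
            if (pending != null && expected != pending) {
                // An earlier update in this batch leaves the znode at a
                // version this update does not expect, so it must fail.
                return false;
            }
            // A successful update bumps the version by one.
            afterBatch.put(path, expected + 1);
        }
        return true;
    }
}
```

So a batch that updates the same znode twice expecting version 3 both times
is rejected immediately, while one whose second update expects version 4 can
go ahead.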

The novelty here is

a) I finally get that the suggestion is very, very limited

b) I have added a constraint on request size that is the same as we
currently have, so this should not change speed or cause stalls for updates.

Anybody have any thoughts?