Posted to commits@helix.apache.org by hu...@apache.org on 2019/08/15 01:32:44 UTC

[helix.wiki] branch master updated: Updated Storing Assignment Metadata in ZooKeeper (markdown)

This is an automated email from the ASF dual-hosted git repository.

hulee pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/helix.wiki.git


The following commit(s) were added to refs/heads/master by this push:
     new b8752a8  Updated Storing Assignment Metadata in ZooKeeper (markdown)
b8752a8 is described below

commit b8752a8be4240cc34bc0db1c916d2995660891a0
Author: Hunter Lee <na...@gmail.com>
AuthorDate: Wed Aug 14 18:32:43 2019 -0700

    Updated Storing Assignment Metadata in ZooKeeper (markdown)
---
 Storing-Assignment-Metadata-in-ZooKeeper.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Storing-Assignment-Metadata-in-ZooKeeper.md b/Storing-Assignment-Metadata-in-ZooKeeper.md
index 618bf90..c2d7040 100644
--- a/Storing-Assignment-Metadata-in-ZooKeeper.md
+++ b/Storing-Assignment-Metadata-in-ZooKeeper.md
@@ -93,11 +93,11 @@ Q: Why don't you just use ZooKeeper's Transaction class?
 
 A: This was actually the plan, but ZooKeeper PMC/API documentation has it that the total data size in a single multi() call cannot exceed the default size limit (1MB) even though you might be writing to multiple ZNodes. This means that we need to build an abstraction layer using multi() to enable reads and writes for large data as discussed above.
 
-####Reads and Writes
-#####Asynchronous Read and Writes with Callbacks
+###Reads and Writes
+####Asynchronous Read and Writes with Callbacks
 The actual write underneath the API calls will happen by way of asynchronous reads and writes. ZooKeeper provides these APIs, and failure scenarios will be dealt with by using callbacks to retry failed reads and writes.
 
-#####Alternative Design: Chained multi() reads and writes
+####Alternative Design: Chained multi() reads and writes
 
 Since we cannot fit all writes in a single multi() call, another approach is to chain multiple multi() calls in series. The advantage of this design is that we would know exactly which read/write failed, and it would be easy to deal with failures. The disadvantage of this design is that reads or writes do not happen in parallel, so the performance might be inferior to submitting all reads and writes at once using the asynchronous APIs.
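
For reference, a minimal sketch of the asynchronous write-with-callbacks approach described above, using ZooKeeper's Java client. The RetryingAsyncWriter class, the writeAsync method, and the retry-only-on-connection-loss policy are illustrative assumptions, not code from the Helix repository; a production version would also bound retries and handle session expiry.

import org.apache.zookeeper.AsyncCallback.StatCallback;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RetryingAsyncWriter {
    private final ZooKeeper zk;

    public RetryingAsyncWriter(ZooKeeper zk) {
        this.zk = zk;
    }

    // Submit an asynchronous setData; the callback retries the write on a
    // transient failure (connection loss) instead of blocking the caller.
    public void writeAsync(String path, byte[] data) {
        zk.setData(path, data, -1, new StatCallback() {
            @Override
            public void processResult(int rc, String p, Object ctx, Stat stat) {
                Code code = Code.get(rc);
                if (code == Code.OK) {
                    return;                  // write committed
                } else if (code == Code.CONNECTIONLOSS) {
                    writeAsync(p, data);     // transient: re-submit the same write
                } else {
                    System.err.println("setData failed for " + p + ": " + code);
                }
            }
        }, null);
    }
}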
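
And a similar sketch of the alternative chained multi() design: each batch of operations goes into its own multi() call, issued in series, so a failure pinpoints exactly which batch did not commit. The ChainedMultiWriter name and the pre-batched List<List<Op>> input are assumptions made for illustration; each batch is assumed to stay under the 1MB request limit.

import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.OpResult;
import org.apache.zookeeper.ZooKeeper;

public class ChainedMultiWriter {
    private final ZooKeeper zk;

    public ChainedMultiWriter(ZooKeeper zk) {
        this.zk = zk;
    }

    // Apply each batch in its own multi() call, one after another. Each batch
    // commits atomically on its own, but the overall sequence does not.
    public void writeInBatches(List<List<Op>> batches)
            throws KeeperException, InterruptedException {
        for (int i = 0; i < batches.size(); i++) {
            try {
                List<OpResult> results = zk.multi(batches.get(i));
                // all ops in batch i committed together
            } catch (KeeperException e) {
                // Batches 0..i-1 are already committed and later batches were never
                // sent, so the failure point is known exactly; the caller can retry
                // or compensate starting from batch i.
                System.err.println("multi() failed at batch " + i);
                throw e;
            }
        }
    }
}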