Posted to dev@zookeeper.apache.org by "Michael Han (JIRA)" <ji...@apache.org> on 2017/03/13 16:22:03 UTC

[jira] [Updated] (ZOOKEEPER-1162) consistent handling of jute.maxbuffer when attempting to read large zk "directories"

     [ https://issues.apache.org/jira/browse/ZOOKEEPER-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Han updated ZOOKEEPER-1162:
-----------------------------------
    Fix Version/s:     (was: 3.5.3)
                   3.5.4

> consistent handling of jute.maxbuffer when attempting to read large zk "directories"
> ------------------------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-1162
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>    Affects Versions: 3.3.3
>            Reporter: Jonathan Hsieh
>            Assignee: Michael Han
>            Priority: Critical
>             Fix For: 3.5.4, 3.6.0
>
>
> Recently we encountered a situation where a zk directory was successfully populated with 250k elements.  When our system attempted to read the znode dir, it failed because the contents of the dir exceeded the default 1mb jute.maxbuffer limit.  There were a few odd things:
> 1) It seems odd that we could populate the directory to a very large size but could not read the listing.
> 2) The workaround was bumping up jute.maxbuffer on the client side (a minimal sketch follows this quote).
> Would it make more sense to have the server reject new znodes once the serialized directory listing would exceed jute.maxbuffer?
> Alternatively, would it make sense to have the zk dir listing ignore the jute.maxbuffer setting?
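A minimal sketch of the client-side workaround described in 2), assuming a plain ZooKeeper Java client; the 4194304 (4 MB) value, the localhost:2181 connect string, the session timeout, and the /big-dir path are illustrative, not taken from the report:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class LargeDirList {
        public static void main(String[] args) throws Exception {
            // jute.maxbuffer is read as a JVM system property. Setting it
            // here only takes effect if no ZooKeeper class has initialized
            // yet; passing -Djute.maxbuffer=4194304 on the java command
            // line is the safer route. 4 MB is an illustrative value.
            System.setProperty("jute.maxbuffer", "4194304");

            // Illustrative connect string, session timeout, and no-op watcher.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
            try {
                // This is the call that fails against the default ~1 MB limit
                // when the directory holds 250k children.
                List<String> children = zk.getChildren("/big-dir", false);
                System.out.println("child count: " + children.size());
            } finally {
                zk.close();
            }
        }
    }

Consistent with the report, only the reading client needed the larger value here, since the limit trips when the client deserializes the oversized getChildren response.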



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)