Posted to commits@pinot.apache.org by GitBox <gi...@apache.org> on 2021/11/04 23:19:46 UTC

[GitHub] [pinot] mcvsubbu commented on issue #7704: Error with Data size larger than 1M, will not write to zk. Data (first 1k)

mcvsubbu commented on issue #7704:
URL: https://github.com/apache/pinot/issues/7704#issuecomment-961504359


   In the version of Helix that Pinot currently uses, data over 1 MB is automatically compressed by Helix before it is written to ZooKeeper. I think your data is exceeding this limit even after compression (Helix provides only a single limit in 0.9.x; we have requested two separate limits, which they will provide in 1.x).
   
   https://github.com/apache/helix/blob/master/zookeeper-api/src/main/java/org/apache/helix/zookeeper/datamodel/ZNRecord.java#L63:27
   
   The best fix is to remove some segments, or to increase your segment size so that fewer segments are created.
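
   As a rough way to check whether a payload would fit after compression, you can gzip it yourself and compare against the limit. This is only a sketch: the 1 MB constant and the gzip choice mirror the linked `ZNRecord` code approximately, and the payload below is a hypothetical stand-in for real segment metadata.

   ```java
   import java.io.ByteArrayOutputStream;
   import java.nio.charset.StandardCharsets;
   import java.util.zip.GZIPOutputStream;

   public class ZnodeSizeCheck {
       // Assumed write-size limit (~1 MB), matching the single limit
       // that Helix 0.9.x enforces per the linked ZNRecord source.
       static final int ZNODE_LIMIT_BYTES = 1024 * 1024;

       // Gzip the payload and return the compressed size, roughly
       // mimicking how Helix compresses large records before the write.
       static int compressedSize(byte[] payload) throws Exception {
           ByteArrayOutputStream bos = new ByteArrayOutputStream();
           try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
               gz.write(payload);
           }
           return bos.size();
       }

       public static void main(String[] args) throws Exception {
           // Hypothetical segment map repeated to simulate many segments.
           StringBuilder sb = new StringBuilder();
           for (int i = 0; i < 50_000; i++) {
               sb.append("{\"segment_").append(i).append("\":\"ONLINE\"}");
           }
           byte[] payload = sb.toString().getBytes(StandardCharsets.UTF_8);
           int size = compressedSize(payload);
           System.out.println("compressed=" + size
                   + " fits=" + (size < ZNODE_LIMIT_BYTES));
       }
   }
   ```

   If the compressed size is close to the limit, reducing the segment count (the advice above) is the only option on Helix 0.9.x, since the limit cannot be raised independently.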


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: commits-help@pinot.apache.org