Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2021/12/24 13:12:19 UTC

[GitHub] [druid] alborotogarcia edited a comment on issue #12087: Deep storage on kubernetes

alborotogarcia edited a comment on issue #12087:
URL: https://github.com/apache/druid/issues/12087#issuecomment-1000832314


   Alright, it finally got solved. Looking at the MiddleManager logs, it seems that druid-s3-extensions needs to be configured properly even if it is not used for deep storage (FWIW, I was missing the AWS region, as I was trying to use MinIO S3 buckets, even though I ultimately stuck with HDFS deep storage). Also mind the required write permissions on HDFS/S3.
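
   For reference, a minimal sketch of what the common runtime properties might look like in this situation: an S3 extension pointed at a MinIO endpoint with an explicit signing region, while deep storage itself stays on HDFS. Endpoint hostnames, paths, and credentials below are placeholders, not values from the original setup.

   ```properties
   # common.runtime.properties -- hypothetical values, adjust to your cluster
   druid.extensions.loadList=["druid-s3-extensions","druid-hdfs-storage","druid-kafka-indexing-service"]

   # druid-s3-extensions still needs a valid endpoint/region even when
   # deep storage is HDFS; MinIO typically requires path-style access.
   druid.s3.endpoint.url=http://minio.example.svc.cluster.local:9000
   druid.s3.endpoint.signingRegion=us-east-1
   druid.s3.enablePathStyleAccess=true
   druid.s3.accessKey=<access-key>
   druid.s3.secretKey=<secret-key>

   # HDFS deep storage (the Druid service account needs write access here)
   druid.storage.type=hdfs
   druid.storage.storageDirectory=hdfs://namenode:8020/druid/segments
   ```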
   
   Question @asdf2014: even though I can see all the segments listed on HDFS, not all of them remain available in Druid after a while. How can I keep segments available for a certain period (e.g. 24h) when ingesting from Kafka? Should I read them back from HDFS as another datasource?
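
   In case it's the Coordinator's retention rules dropping those segments, a sketch of a rule chain (set per datasource via the Coordinator console or API) that keeps the most recent 24 hours loaded and drops everything older; the tier name and replicant count are assumptions:

   ```json
   [
     {
       "type": "loadByPeriod",
       "period": "P1D",
       "includeFuture": true,
       "tieredReplicants": { "_default_tier": 2 }
     },
     { "type": "dropForever" }
   ]
   ```

   Note that dropped segments still exist in deep storage on HDFS; dropping only unloads them from Historicals, so they can be loaded again later by changing the rules.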


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: commits-help@druid.apache.org