Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/08/24 02:51:36 UTC

[GitHub] [druid] qphien commented on issue #10311: All ingestion task failed and druid failed to ingestion any data

qphien commented on issue #10311:
URL: https://github.com/apache/druid/issues/10311#issuecomment-678876275


   I start Druid with supervise. I first checked the middleManager log specified by log4j2.xml and found nothing, but in the supervise log I found this exception:
   ```
   INFO [forking-task-runner-3] org.apache.druid.indexing.overlord.ForkingTaskRunner - Exception caught during execution
   org.apache.hadoop.ipc.RemoteException: The directory item limit of /user/druid/indexing-logs is exceeded: limit=1048576 items=1048576
           at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:1147)
   ```
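
   For context, this limit is the NameNode's per-directory child cap, set by `dfs.namenode.fs-limits.max-directory-items` in hdfs-site.xml (default 1048576). Raising it is possible, as in the sketch below, but cleaning up old logs is the better fix since the cap exists to protect NameNode memory. The value shown is only an illustration:

   ```
   <!-- hdfs-site.xml: raise the per-directory item limit (example value) -->
   <property>
     <name>dfs.namenode.fs-limits.max-directory-items</name>
     <value>2097152</value>
   </property>
   ```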
   
   The number of indexing logs had reached the HDFS directory item limit. After deleting old indexing logs, everything works fine again.
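   The cleanup can be done with standard HDFS shell commands against the directory from the exception. This is a sketch to be run on the cluster, not a tested script; the cutoff date and batch size are my own choices, and `$6`/`$8` assume the usual `hdfs dfs -ls` column layout (modification date in column 6, path in column 8):

   ```shell
   # Check how close the directory is to the item limit:
   hdfs dfs -count /user/druid/indexing-logs

   # Delete log entries older than a cutoff date, skipping the trash
   # so the namespace entries are freed immediately:
   hdfs dfs -ls /user/druid/indexing-logs \
     | awk -v cutoff="2020-01-01" '$6 < cutoff {print $8}' \
     | xargs -r -n 100 hdfs dfs -rm -r -skipTrash
   ```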
   
   However, why is the exception printed only to the supervise log and not to the middleManager log specified by log4j2.xml?
   Is there a parameter that makes Druid delete indexing logs periodically?
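   
   From a look at the configuration reference, the `druid.indexer.logs.kill.*` properties appear to cover periodic log cleanup. A sketch of enabling it; the retention and scheduling values below are my own example choices, not defaults:
   
   ```
   # runtime.properties on the process that runs task-log cleanup
   druid.indexer.logs.kill.enabled=true
   # Delete task logs older than 30 days (value in milliseconds):
   druid.indexer.logs.kill.durationToRetain=2592000000
   # Wait 5 minutes after startup before the first cleanup pass:
   druid.indexer.logs.kill.initialDelay=300000
   # Then run the cleanup every 6 hours:
   druid.indexer.logs.kill.delay=21600000
   ```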
   
   Thanks


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org