Posted to dev@flume.apache.org by Roshan Naik <ro...@hortonworks.com> on 2015/03/24 22:49:29 UTC

hdfs.retryInterval - issues

hdfs.retryInterval seems to have some issues.


  *   The default is not 0 as documented.
  *   This statement does not seem to reflect the implementation: "If set to 0, the sink will try to close the file until the file is eventually closed". Setting it to 0 seems to be the same as setting it to 1, AFAICT.
  *   There seems to be no way to indicate unlimited tries (see the sketch after this list).
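
For context, here is a rough, hypothetical sketch (not the actual Flume HDFSEventSink code) of how a close-retry loop driven by a try-count setting and a retry-interval setting could behave; all class, method, and variable names below are placeholders.

// Hypothetical sketch only -- not the actual Flume HDFS sink implementation.
// It illustrates a close-retry loop bounded by a try count, with a fixed
// interval between attempts.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CloseRetrySketch {

    public static void main(String[] args) {
        final int closeTries = 3;          // placeholder for the configured try count
        final long retryIntervalSec = 180; // placeholder for the configured retry interval (seconds)

        final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        final AtomicInteger attempts = new AtomicInteger();

        Runnable closeTask = new Runnable() {
            @Override
            public void run() {
                boolean closed = tryCloseFile(); // stand-in for the real HDFS close call
                if (!closed && attempts.incrementAndGet() < closeTries) {
                    // Not closed yet and tries remain: schedule another attempt.
                    timer.schedule(this, retryIntervalSec, TimeUnit.SECONDS);
                } else {
                    timer.shutdown();
                }
            }
        };
        timer.schedule(closeTask, 0, TimeUnit.SECONDS);
    }

    private static boolean tryCloseFile() {
        // Stand-in: pretend the close succeeded on the first attempt.
        return true;
    }
}

Under a scheme like this, a try count of 0 would need to be special-cased to mean "unlimited", which is the behavior the rest of this thread ends up discussing.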

Re: hdfs.retryInterval - issues

Posted by Hari Shreedharan <hs...@cloudera.com>.
Can you file a jira? We should fix it in that case.




Thanks, Hari

On Tue, Mar 24, 2015 at 2:49 PM, Roshan Naik <ro...@hortonworks.com>
wrote:

> hdfs.retryInterval seems to have some issues.
>   *   The default is not 0 as documented.
>   *   This statement does not seem to reflect the implementation: "If set to 0, the sink will try to close the file until the file is eventually closed". Setting it to 0 seems to be the same as setting it to 1, AFAICT.
>   *   There seems to be no way to indicate unlimited tries.

Re: hdfs.retryInterval - issues

Posted by Roshan Naik <ro...@hortonworks.com>.
My bad... I notice that setting it to 0 overrides the value to Integer.MAX_VALUE, so setting it to 0 is equivalent to Integer.MAX_VALUE (the default). In that sense, the documentation saying the default is 0 is fine.
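
As a simplified illustration of the mapping described above (a sketch, not the actual Flume source):

// Simplified sketch of the behavior described above, not the actual Flume code:
// a configured value of 0 is replaced with Integer.MAX_VALUE, so 0 and
// Integer.MAX_VALUE end up meaning the same thing in practice.
public class CloseTriesMapping {

    static int effectiveCloseTries(int configured) {
        return (configured == 0) ? Integer.MAX_VALUE : configured;
    }

    public static void main(String[] args) {
        System.out.println(effectiveCloseTries(0)); // prints 2147483647
        System.out.println(effectiveCloseTries(5)); // prints 5
    }
}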


From: Roshan Naik <ro...@hortonworks.com>
Date: Tuesday, March 24, 2015 2:54 PM
To: "dev@flume.apache.org" <de...@flume.apache.org>
Subject: Re: hdfs.retryInterval - issues

Sorry for the typo... I meant to say 'hdfs.closeTries' and not 'hdfs.retryInterval'.

-roshan

From: Roshan Naik <ro...@hortonworks.com>
Date: Tuesday, March 24, 2015 2:52 PM
To: "dev@flume.apache.org" <de...@flume.apache.org>
Subject: hdfs.retryInterval - issues


hdfs.retryInterval seems to have some issues.


  *   The default is not 0 as documented.
  *   This statement does not seem to reflect the implementation: "If set to 0, the sink will try to close the file until the file is eventually closed". Setting it to 0 seems to be the same as setting it to 1, AFAICT.
  *   There seems to be no way to indicate unlimited tries.

Re: hdfs.retryInterval - issues

Posted by Roshan Naik <ro...@hortonworks.com>.
Sorry for the typo... I meant to say 'hdfs.closeTries' and not 'hdfs.retryInterval'.

-roshan

From: Roshan Naik <ro...@hortonworks.com>
Date: Tuesday, March 24, 2015 2:52 PM
To: "dev@flume.apache.org" <de...@flume.apache.org>
Subject: hdfs.retryInterval - issues


hdfs.retryInterval seems to have some issues.


  *   The default is not 0 as documented.
  *   This statement does not seem to reflect the implementation: "If set to 0, the sink will try to close the file until the file is eventually closed". Setting it to 0 seems to be the same as setting it to 1, AFAICT.
  *   There seems to be no way to indicate unlimited tries.