Posted to issues@ambari.apache.org by "Alexander Denissov (JIRA)" <ji...@apache.org> on 2016/03/18 02:55:33 UTC

[jira] [Updated] (AMBARI-15449) HAWQ hdfs-client / output.replace-datanode-on-failure should be set to true by default

     [ https://issues.apache.org/jira/browse/AMBARI-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Denissov updated AMBARI-15449:
----------------------------------------
    Fix Version/s: 2.2.2
           Status: Patch Available  (was: Open)

> HAWQ hdfs-client / output.replace-datanode-on-failure should be set to true by default
> --------------------------------------------------------------------------------------
>
>                 Key: AMBARI-15449
>                 URL: https://issues.apache.org/jira/browse/AMBARI-15449
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Alexander Denissov
>            Assignee: Alexander Denissov
>            Priority: Minor
>             Fix For: 2.2.2
>
>
> On large clusters, replace-datanode-on-failure should be set to true, but on small clusters (developer or testing environments) it should be set to false; otherwise, when the datanodes are overloaded, writes will fail with an error. That is why it was previously set to false by default.
> Ambari should set it to true when the cluster size is greater than 4, and to false otherwise (see the sketch below).
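A minimal standalone sketch of the sizing rule described above, for illustration only. The function name, the threshold parameter, and the hard-coded node counts are assumptions made for this example; they are not the actual Ambari stack advisor code or the patch attached to this issue.

    # Hypothetical helper mirroring the rule in the ticket: enable
    # output.replace-datanode-on-failure only when the cluster is large
    # enough to supply replacement datanodes on write-pipeline failure.
    def recommend_replace_datanode_on_failure(cluster_size, threshold=4):
        return "true" if cluster_size > threshold else "false"

    # Example usage (hypothetical cluster sizes):
    print(recommend_replace_datanode_on_failure(6))  # "true"  -- large cluster
    print(recommend_replace_datanode_on_failure(3))  # "false" -- small dev/test cluster

The actual change would apply this decision when recommending the HAWQ hdfs-client configuration; the sketch only captures the thresholding logic.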



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)