Posted to issues@ambari.apache.org by "Chen He (JIRA)" <ji...@apache.org> on 2017/04/18 22:12:41 UTC

[jira] [Created] (AMBARI-20785) Ambari reports datanode decommissioned but datanode is still decommissioning

Chen He created AMBARI-20785:
--------------------------------

             Summary: Ambari reports datanode decommissioned but datanode is still decommissioning
                 Key: AMBARI-20785
                 URL: https://issues.apache.org/jira/browse/AMBARI-20785
             Project: Ambari
          Issue Type: Bug
          Components: infra
    Affects Versions: 2.4.0
            Reporter: Chen He


Decommissioning an HDFS DataNode through the Ambari REST API creates a new request under http://ambari_server:8080/api/v1/clusters/cluster_name/requests/.
However, the request reports "COMPLETED" as soon as the given DataNode has been added to dfs.exclude; it does not block until the DataNode is fully decommissioned. It should block until decommissioning has actually finished.
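For reference, a decommission request issued this way looks roughly like the following sketch, assuming the standard Ambari custom-command request format; the cluster name, host name, and server URL are placeholders, not values from this report:

```python
import json

def build_decommission_request(datanode_host):
    """Build the (assumed) Ambari custom-command body that decommissions
    one DataNode by asking the NAMENODE component to run DECOMMISSION."""
    return {
        "RequestInfo": {
            "context": "Decommission DataNode",
            "command": "DECOMMISSION",
            "parameters": {
                "slave_type": "DATANODE",
                # Host(s) to add to dfs.exclude
                "excluded_hosts": datanode_host,
            },
        },
        "Requests/resource_filters": [
            {"service_name": "HDFS", "component_name": "NAMENODE"},
        ],
    }

# POST json.dumps(build_decommission_request("dn1.example.com")) to
# http://ambari_server:8080/api/v1/clusters/cluster_name/requests/
# with an authenticated client. As described above, the resulting
# request reaches "COMPLETED" as soon as the host is written to
# dfs.exclude, not when decommissioning finishes.
```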
At the same time, org.apache.ambari.groovy.client.decommissionDataNode() decommissions DataNodes in the same way. This can cause data loss if the cluster shuts the node down immediately after the decommission call returns.
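Until the request blocks correctly, a caller can protect itself by polling the NameNode for the node's real admin state before shutting it down. The sketch below assumes the NameNode JMX bean Hadoop:service=NameNode,name=NameNodeInfo, whose LiveNodes attribute is a JSON map of "host:port" to node info including an "adminState" field; the host names are illustrative:

```python
import json

def decommission_state(live_nodes_json, host):
    """Return the adminState the NameNode reports for one DataNode.

    live_nodes_json is the LiveNodes attribute of the NameNodeInfo JMX
    bean: a JSON map whose values carry "adminState" ("In Service",
    "Decommission In Progress", or "Decommissioned").
    """
    nodes = json.loads(live_nodes_json)
    for name, info in nodes.items():
        if name.split(":")[0] == host:
            return info.get("adminState")
    return None  # host no longer reported among live nodes

def is_safe_to_shut_down(live_nodes_json, host):
    # "Decommission In Progress" means block re-replication is still
    # running; stopping the node then risks the data loss described
    # above. Absent or "Decommissioned" means it is safe.
    state = decommission_state(live_nodes_json, host)
    return state is None or state == "Decommissioned"
```

In practice the JSON would be fetched from the NameNode web UI endpoint (e.g. /jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo) and polled until is_safe_to_shut_down returns True.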



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)