Posted to hdfs-dev@hadoop.apache.org by "Siddharth Wagle (Jira)" <ji...@apache.org> on 2019/11/15 18:39:00 UTC

[jira] [Resolved] (HDFS-14980) diskbalancer query command always tries to contact to port 9867

     [ https://issues.apache.org/jira/browse/HDFS-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siddharth Wagle resolved HDFS-14980.
------------------------------------
    Resolution: Not A Problem

This is an issue specific to an HDFS deployment managed by Cloudera Manager. The client-side configuration in /etc/hadoop/conf/ (hdfs-site.xml) excludes all daemon configs, so DiskBalancerCLI cannot resolve {{dfs.datanode.ipc.address}} and falls back to the default. Adding this property to the client configuration file makes the query command work as expected.
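As a sketch of the fix described above: adding the datanode IPC address to the client-side hdfs-site.xml lets DiskBalancerCLI resolve the correct port. The port value 20001 below matches the non-default IPC port from this report; adjust it to your deployment.

{noformat}
<!-- Client-side hdfs-site.xml fragment (illustrative).
     Without this property, DiskBalancerCLI uses the default
     dfs.datanode.ipc.address port, 9867. -->
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:20001</value>
</property>
{noformat}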

> diskbalancer query command always tries to contact to port 9867
> ---------------------------------------------------------------
>
>                 Key: HDFS-14980
>                 URL: https://issues.apache.org/jira/browse/HDFS-14980
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: diskbalancer
>            Reporter: Nilotpal Nandi
>            Assignee: Siddharth Wagle
>            Priority: Major
>
> The diskbalancer query command always tries to connect to port 9867, even when the datanode IPC port is different.
> In this setup, the datanode IPC port is set to 20001.
>  
> The diskbalancer report command works fine and connects to IPC port 20001.
>  
> {noformat}
> hdfs diskbalancer -report -node 172.27.131.193
> 19/11/12 08:58:55 INFO command.Command: Processing report command
> 19/11/12 08:58:57 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 19/11/12 08:58:57 INFO block.BlockTokenSecretManager: Setting block keys
> 19/11/12 08:58:57 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
> 19/11/12 08:58:58 INFO command.Command: Reporting volume information for DataNode(s). These DataNode(s) are parsed from '172.27.131.193'.
> Processing report command
> Reporting volume information for DataNode(s). These DataNode(s) are parsed from '172.27.131.193'.
> <HOST_ADDR>[172.27.131.193:20001] - <e9d905bf-3146-4c54-a98a-0345448b920a>: 3 volumes with node data density 0.05.
> [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK1/] - 0.15 used: 39343871181/259692498944, 0.85 free: 220348627763/259692498944, isFailed: False, isReadOnly: False, isSkip: False, isTransient: False.
> [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK2/] - 0.15 used: 39371179986/259692498944, 0.85 free: 220321318958/259692498944, isFailed: False, isReadOnly: False, isSkip: False, isTransient: False.
> [DISK: volume-/dataroot/ycloud/dfs/dn/] - 0.19 used: 49934903670/259692498944, 0.81 free: 209757595274/259692498944, isFailed: False, isReadOnly: False, isSkip: False, isTransient: False.
>  
> {noformat}
>  
> But the diskbalancer query command fails, trying to connect to port 9867 (the default).
>  
> {noformat}
> hdfs diskbalancer -query 172.27.131.193
> 19/11/12 06:37:15 INFO command.Command: Executing "query plan" command.
> 19/11/12 06:37:16 INFO ipc.Client: Retrying connect to server: <HOST_ADDR>/172.27.131.193:9867. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
> 19/11/12 06:37:17 INFO ipc.Client: Retrying connect to server: <HOST_ADDR>/172.27.131.193:9867. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
> ..
> ..
> ..
> 19/11/12 06:37:25 ERROR tools.DiskBalancerCLI: Exception thrown while running DiskBalancerCLI.
> {noformat}
>  
>  
> Expectation :
> The diskbalancer query command should work without the datanode IPC port being specified explicitly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org