Posted to hdfs-user@hadoop.apache.org by Jan Lukavský <ja...@firma.seznam.cz> on 2014/08/04 13:31:21 UTC

HftpFileSystem is not working with HighAvailability configuration

Hi all,

I think there is an issue in how HftpFileSystem (hftp://) interacts with 
HDFS High Availability. A read can fail in the following scenario:

  * a cluster is configured in HA mode, with the following configuration:
    <property>
      <name>dfs.nameservices</name>
      <value>master</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.master</name>
      <value>master1,master2</value>
    </property>
   ...

  * 'master1' is set to standby and 'master2' to active
  * the following command fails with an error:
   $ hadoop fs -ls hftp://master/
   ls: Operation category READ is not supported in state standby
  * while the following succeeds:
   $ hadoop fs -ls hdfs://master/
   Found 98 items
   ...
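For reference, this is how the HA state of each namenode was confirmed 
(hdfs haadmin is the standard HA admin tool; the service IDs come from 
dfs.ha.namenodes.master in the configuration above):

```shell
# Query the HA state of each configured namenode of nameservice 'master'.
$ hdfs haadmin -getServiceState master1
standby
$ hdfs haadmin -getServiceState master2
active
```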


I have not checked the code, but I suspect that HftpFileSystem always 
connects to the first configured host, or that it does not handle the 
thrown exception correctly.
Is this a known issue? Should the error be handled by some kind of 
wrapper in client code, or is there another workaround? Or should this 
be fixed in HftpFileSystem itself (somehow)?
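To illustrate the kind of client-side wrapper I have in mind: a minimal, 
self-contained sketch that tries each configured namenode host in turn and 
falls back to the next one when a read fails with a standby error. This is 
purely hypothetical failover logic (the `tryEachNamenode` helper and the 
simulated hosts are my own invention, not Hadoop API); real client code 
would issue an hftp:// read inside the lambda instead of the stand-in below.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FailoverSketch {

    // Hypothetical helper: try the action against each namenode host in
    // order; return the first successful result, or rethrow the last
    // failure if every host refuses (e.g. all are in standby state).
    static <T> T tryEachNamenode(List<String> hosts, Function<String, T> action) {
        RuntimeException last = null;
        for (String host : hosts) {
            try {
                return action.apply(host);
            } catch (RuntimeException e) {
                // e.g. "Operation category READ is not supported in state standby"
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        // Simulated cluster matching the scenario above:
        // master1 is standby, master2 is active.
        String result = tryEachNamenode(Arrays.asList("master1", "master2"), host -> {
            if (host.equals("master1")) {
                throw new RuntimeException(
                    "Operation category READ is not supported in state standby");
            }
            return "listing from " + host; // stand-in for an hftp:// read
        });
        System.out.println(result); // prints "listing from master2"
    }
}
```

Of course, this only papers over the problem in client code; ideally 
HftpFileSystem would resolve the logical nameservice itself, the way the 
hdfs:// client does.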

Thanks for opinions,
  Jan