Posted to hdfs-dev@hadoop.apache.org by "gael URBAUER (JIRA)" <ji...@apache.org> on 2019/03/19 13:30:00 UTC

[jira] [Created] (HDFS-14380) webhdfs failover append to stand-by namenode fails

gael URBAUER created HDFS-14380:
-----------------------------------

             Summary: webhdfs failover append to stand-by namenode fails
                 Key: HDFS-14380
                 URL: https://issues.apache.org/jira/browse/HDFS-14380
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: webhdfs
    Affects Versions: 2.7.3
         Environment: HDP 2.6.2

HA NameNode enabled
            Reporter: gael URBAUER


I'm using DataStage to create files in Hadoop through WebHDFS.

When a NameNode failover occurs, DataStage sometimes ends up talking to the standby NameNode.

The CREATE operation then succeeds, but when a file is bigger than the buffer size, DataStage calls the APPEND operation and gets back a 403 response.

It does not seem very consistent that some write operations are allowed on the standby while others aren't.
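
In case it helps with triage, below is a minimal sketch of the two-step WebHDFS CREATE/APPEND calls involved, with a client-side fallback to the other NameNode when a 403 carrying a StandbyException comes back. The hostnames, port, path and user.name are placeholders (not from our setup), and the retry helper only illustrates the behaviour we would expect from a failover-aware client; it is not how DataStage actually issues the requests. Python, using the requests library:

import requests

# Placeholder NameNode HTTP endpoints and user; adjust to the real cluster.
NAMENODES = ["http://nn1.example.com:50070", "http://nn2.example.com:50070"]
USER = "hdfsuser"

def _is_standby(resp):
    """True when the NameNode rejected the call because it is in standby state."""
    if resp.status_code != 403:
        return False
    try:
        return resp.json()["RemoteException"]["exception"] == "StandbyException"
    except (ValueError, KeyError):
        return False

def _two_step_write(method, op, path, data):
    """Run one WebHDFS write op (CREATE or APPEND), failing over between NameNodes."""
    for nn in NAMENODES:
        url = f"{nn}/webhdfs/v1{path}?op={op}&user.name={USER}"
        # Step 1: ask the NameNode; the active one answers with a redirect to a DataNode.
        first = requests.request(method, url, allow_redirects=False)
        if _is_standby(first):
            continue  # this NameNode is standby, try the other one
        first.raise_for_status()
        # Step 2: send the actual bytes to the DataNode location we were given.
        second = requests.request(method, first.headers["Location"], data=data)
        second.raise_for_status()
        return
    raise RuntimeError("no active NameNode found")

def create(path, data):
    _two_step_write("PUT", "CREATE", path, data)

def append(path, data):
    _two_step_write("POST", "APPEND", path, data)

# Example with a hypothetical path:
#   create("/tmp/datastage/part-0001", b"first chunk")
#   append("/tmp/datastage/part-0001", b"next chunk")

In our case the equivalent of create() goes through, and it is only the append() step that comes back with the 403 from the standby.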

 

Regards,

 

Gaël



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org