Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/07/04 14:05:00 UTC

[jira] [Updated] (HDFS-15079) RBF: Client maybe get an unexpected result with network anomaly

     [ https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-15079:
----------------------------------
    Labels: pull-request-available  (was: )

> RBF: Client maybe get an unexpected result with network anomaly 
> ----------------------------------------------------------------
>
>                 Key: HDFS-15079
>                 URL: https://issues.apache.org/jira/browse/HDFS-15079
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>    Affects Versions: 3.3.0
>            Reporter: Hui Fei
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: HDFS-15079.001.patch, HDFS-15079.002.patch, UnexpectedOverWriteUT.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I found a critical problem in RBF. HDFS-15078 can resolve it in some scenarios, but I have no idea about an overall resolution.
> The problem is as follows:
> A client using RBF (routers r0, r1) creates an HDFS file via r0; the call throws an exception and the client fails over to r1.
> However, r0 has already sent the create RPC to the NameNode (1st create).
> The client creates the HDFS file again via r1 (2nd create).
> The client writes the HDFS file and finally closes it (3rd close).
> The NameNode may receive these RPCs in the following order:
> 2nd create
> 3rd close
> 1st create
> Since overwrite is true by default, the late 1st create turns the file that has already been written into an empty file. This is a critical problem.
> We have encountered this problem in production: many Hive and Spark jobs run on our cluster, and it happens from time to time.
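> Below is a minimal sketch (not the attached UnexpectedOverWriteUT.patch) that simply replays the observed RPC order directly against a MiniDFSCluster, with no routers involved; the class name, path, and data are illustrative assumptions. It shows how a late create with the default overwrite=true empties a file that was already written and closed:
>
>     import static org.junit.Assert.assertEquals;
>     import static org.junit.Assert.assertTrue;
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FSDataOutputStream;
>     import org.apache.hadoop.fs.Path;
>     import org.apache.hadoop.hdfs.DistributedFileSystem;
>     import org.apache.hadoop.hdfs.HdfsConfiguration;
>     import org.apache.hadoop.hdfs.MiniDFSCluster;
>     import org.junit.Test;
>
>     public class TestUnexpectedOverwriteSketch {
>       @Test
>       public void testLateCreateWithOverwriteEmptiesFile() throws Exception {
>         Configuration conf = new HdfsConfiguration();
>         MiniDFSCluster cluster =
>             new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
>         try {
>           DistributedFileSystem fs = cluster.getFileSystem();
>           Path file = new Path("/user/test/unexpected-overwrite");
>
>           // 2nd create + write + 3rd close: the retried create via r1
>           // succeeds, the client writes data and closes the file.
>           try (FSDataOutputStream out = fs.create(file, true)) {
>             out.writeBytes("data written via r1");
>           }
>           assertTrue(fs.getFileStatus(file).getLen() > 0);
>
>           // 1st create finally reaches the NameNode from r0. Because
>           // overwrite is true by default, it truncates the closed file.
>           fs.create(file, true).close();
>
>           // The data the job believed it had committed is gone.
>           assertEquals(0, fs.getFileStatus(file).getLen());
>         } finally {
>           cluster.shutdown();
>         }
>       }
>     }
>
> Reproducing this through a real RBF deployment would additionally require delaying r0's create RPC (for example with a network fault), but the NameNode-side effect is the same: the replayed create truncates the file the client already wrote and closed.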



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org