Posted to hdfs-dev@hadoop.apache.org by "Íñigo Goiri (Jira)" <ji...@apache.org> on 2022/08/26 23:05:00 UTC

[jira] [Resolved] (HDFS-16734) RBF: fix some bugs when handling getContentSummary RPC

     [ https://issues.apache.org/jira/browse/HDFS-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri resolved HDFS-16734.
--------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

> RBF: fix some bugs when handling getContentSummary RPC
> ------------------------------------------------------
>
>                 Key: HDFS-16734
>                 URL: https://issues.apache.org/jira/browse/HDFS-16734
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> Suppose there are some mount points as below in RBF without a default namespace.
> ||Source Path||NameSpace||Destination Path ||
> |/a/b|ns0|/a/b|
> |/a/b/c|ns0|/a/b/c|
> |/a/b/c/d|ns1|/a/b/c/d|
> Suppose there is a file /a/b/c/file1 with 10MB of data in ns0 and a file /a/b/c/d/file2 with 20MB of data in ns1.
> There are bugs when handling some cases:
> ||Case Number||Case||Current Result||Expected Result||
> |1|getContentSummary('/a')|Throws RouterResolveException|2 files and 30MB data|
> |2|getContentSummary('/a/b')|2 files and 40MB data|3 files and 40MB data|
> The bugs behind these cases:
> Case 1: If RBF can't find any locations for the path, it should try to resolve it through the sub mount points.
> Case 2: RBF shouldn't repeatedly get the content summary from the same namespace for paths that share an ancestor, such as from ns0 with both /a/b and /a/b/c.
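The Case 2 fix amounts to de-duplicating remote locations before fanning out the getContentSummary RPC: a location can be skipped when the same namespace already contributes an ancestor path, since the ancestor's summary already covers it. A minimal standalone sketch of that idea (hypothetical class and method names, not the actual Router code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the HDFS-16734 patch itself: drop any remote
// location whose (namespace, path) is already covered by a proper ancestor
// path in the same namespace, so files are not double-counted.
public class LocationDedup {
    /** Each location is a {namespace, path} pair. */
    public static List<String[]> dedup(List<String[]> locations) {
        List<String[]> result = new ArrayList<>();
        for (String[] loc : locations) {
            boolean covered = false;
            for (String[] other : locations) {
                if (other == loc) {
                    continue;
                }
                // 'other' covers 'loc' if both are in the same namespace and
                // other's path is a proper ancestor of loc's path.
                String prefix = other[1].endsWith("/") ? other[1] : other[1] + "/";
                if (other[0].equals(loc[0]) && loc[1].startsWith(prefix)) {
                    covered = true;
                    break;
                }
            }
            if (!covered) {
                result.add(loc);
            }
        }
        return result;
    }
}
```

With the mount table above, the locations for /a/b resolve to (ns0, /a/b), (ns0, /a/b/c), and (ns1, /a/b/c/d); de-duplication would keep only (ns0, /a/b) and (ns1, /a/b/c/d), so file1 is counted once per namespace.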



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org