Posted to hdfs-issues@hadoop.apache.org by "zhengchenyu (Jira)" <ji...@apache.org> on 2022/09/06 08:05:00 UTC

[jira] [Commented] (HDFS-14117) RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled

    [ https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600659#comment-17600659 ] 

zhengchenyu commented on HDFS-14117:
------------------------------------

    In our cluster we have to mount every nameservice under /user/${user}/.Trash, which means the Router issues a rename against every nameservice when a file is moved to trash. Although this has worked for a long time, it leads to bad performance as soon as any one namenode degrades.

I want the trash rename to contact only one nameservice, so I have a new proposal:
    Condition:
    (1) /test is mounted in ns0
    (2) /user/hdfs is mounted in ns1
    Suppose we move /test/hello to /user/hdfs/.Trash/Current/test/hello.
    When we process a location with the trash prefix, we strip the prefix and use the remaining path to find the mounted nameservice. For /user/hdfs/.Trash/Current/test/hello, we remove the prefix '/user/hdfs/.Trash/Current' to get '/test/hello', and use '/test/hello' to find the mounted nameservice. That gives the location ns0->/user/hdfs/.Trash/Current/test/hello, so the rename into trash stays within ns0 and works.
    The drawback is that we must check every location against the trash-path pattern on every call, but I think the cost is low. A rough sketch of the idea is below.
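To make the idea concrete, here is a minimal, self-contained sketch, not the actual Router code: MountResolver, resolveTrashLocation and the trash-prefix pattern are hypothetical names used only for illustration.
{noformat}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TrashPathResolver {

  /** Matches /user/<user>/.Trash/<checkpoint>, e.g. /user/hdfs/.Trash/Current. */
  private static final Pattern TRASH_PREFIX =
      Pattern.compile("^(/user/[^/]+/\\.Trash/[^/]+)(/.*)?$");

  /** Hypothetical stand-in for the Router's mount table lookup: path -> nameservice id. */
  public interface MountResolver {
    String resolveNameservice(String path);
  }

  /**
   * Strip the trash prefix, resolve the remainder against the mount table,
   * and keep the full trash path on the chosen nameservice, so the rename
   * into trash stays within one subcluster.
   */
  public static String resolveTrashLocation(String path, MountResolver mounts) {
    Matcher m = TRASH_PREFIX.matcher(path);
    if (!m.matches() || m.group(2) == null) {
      // Not a trash path (or the trash root itself): resolve as usual.
      return mounts.resolveNameservice(path) + "->" + path;
    }
    String original = m.group(2);                     // e.g. /test/hello
    String ns = mounts.resolveNameservice(original);  // e.g. ns0
    return ns + "->" + path;                          // ns0->/user/hdfs/.Trash/Current/test/hello
  }

  public static void main(String[] args) {
    // Toy mount table: /test -> ns0, everything else -> ns1.
    MountResolver mounts = p -> p.startsWith("/test") ? "ns0" : "ns1";
    System.out.println(
        resolveTrashLocation("/user/hdfs/.Trash/Current/test/hello", mounts));
    // Prints: ns0->/user/hdfs/.Trash/Current/test/hello
  }
}
{noformat}
The real implementation would go through the Router's mount table resolver rather than the toy lambda above; the point is only that the trash prefix is stripped once, the remainder is resolved as a normal path, and the full trash path is kept on the resolved nameservice.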

[~elgoiri] [~ayushtkn] [~hexiaoqiao] [~ramkumar] [~xuzq_zander] What do you think of this proposal?

> RBF: We can only delete the files or dirs of one subcluster in a cluster with multiple subclusters when trash is enabled
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14117
>                 URL: https://issues.apache.org/jira/browse/HDFS-14117
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ramkumar Ramalingam
>            Assignee: Ramkumar Ramalingam
>            Priority: Major
>              Labels: RBF
>         Attachments: HDFS-14117-HDFS-13891.001.patch, HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, HDFS-14117-HDFS-13891.012.patch, HDFS-14117-HDFS-13891.013.patch, HDFS-14117-HDFS-13891.014.patch, HDFS-14117-HDFS-13891.015.patch, HDFS-14117-HDFS-13891.016.patch, HDFS-14117-HDFS-13891.017.patch, HDFS-14117-HDFS-13891.018.patch, HDFS-14117-HDFS-13891.019.patch, HDFS-14117-HDFS-13891.020.patch, HDFS-14117.001.patch, HDFS-14117.002.patch, HDFS-14117.003.patch, HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, they are moved to trash by default.
> But in the global namespace we can only mount one trash dir, /user. So we mount the trash dir /user of subcluster ns1 to the global path /user. Then we can delete files or dirs on ns1, but deleting files or dirs on another subcluster, such as hacluster, fails.
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup       6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3../opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 'hdfs://router/test/hdfs.cmd' to trash at: hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}


