Posted to common-dev@hadoop.apache.org by "Karam Singh (JIRA)" <ji...@apache.org> on 2014/07/19 16:29:38 UTC
[jira] [Resolved] (HADOOP-2270) DFS submit client params overrides final params on cluster
[ https://issues.apache.org/jira/browse/HADOOP-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Karam Singh resolved HADOOP-2270.
---------------------------------
Resolution: Invalid
Release Note: Opened this very long ago. Closing it.
> Title: DFS submit client params overrides final params on cluster
> ------------------------------------------------------------------
>
> Key: HADOOP-2270
> URL: https://issues.apache.org/jira/browse/HADOOP-2270
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 0.15.1
> Reporter: Karam Singh
>
> HDFS client params override the params set as final on the HDFS cluster nodes.
> Default values in the client-side hadoop-site.xml override the final parameters of the HDFS hadoop-site.xml.
> Observed the following cases:
> 1. dfs.trash.root=/recycle, dfs.trash.interval=10 and dfs.replication=2 marked final in hadoop-site.xml on the HDFS cluster.
> When the FsShell command "hadoop dfs -put local_dir dest" is fired from the submission host,
> files still get replicated 3 times (the default) instead of the final dfs.replication=2.
> Similarly, when "hadoop dfs -rmr dfs_dir" or "hadoop dfs -rm file_path" is fired from the submit client, the file/directory is deleted directly without being moved to /recycle.
> Here the hadoop-site.xml on the submit client does not specify dfs.trash.root, dfs.trash.interval or dfs.replication.
>
> Same is the case when we submit a mapred job from the client: job.xml displays default values which override the cluster values.
> 2. dfs.trash.root=/recycle, dfs.trash.interval=10 and dfs.replication=2 marked final under hadoop-site.xml on hdfs cluster.
> And
> dfs.trash.root=/rubbish, dfs.trash.interval=2 and dfs.replication=5 under hadoop-site.xml on submit client.
> When the FsShell command "hadoop dfs -put local_dir dest" is fired from the submit client,
> files get replicated 5 times instead of the final dfs.replication=2.
> Similarly, when "hadoop dfs -rmr dfs_dir" or "hadoop dfs -rm file_path" is fired from the submit client, the deleted file/directory is moved to /rubbish instead of /recycle.
>
> Same is the case when we submit a mapred job from the client; job.xml displays the following values:
> dfs.trash.root=/rubbish, dfs.trash.interval=2 and dfs.replication=5
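For reference, the cluster-side setup described in the report would look roughly like this in hadoop-site.xml (a sketch using the property names and values from the report; the `<final>true</final>` element is Hadoop's mechanism for marking a property as not overridable by client configuration):

```xml
<configuration>
  <!-- Marked final: client-side hadoop-site.xml should not be able to override these -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.trash.interval</name>
    <value>10</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.trash.root</name>
    <value>/recycle</value>
    <final>true</final>
  </property>
</configuration>
```

The premise of the report is that these final values should win over whatever the submit client's own hadoop-site.xml specifies; the issue was resolved as Invalid rather than fixed.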
--
This message was sent by Atlassian JIRA
(v6.2#6252)