Posted to hdfs-dev@hadoop.apache.org by "Suresh Srinivas (JIRA)" <ji...@apache.org> on 2014/03/05 01:03:42 UTC
[jira] [Created] (HDFS-6055) Change default configuration to limit file name length in HDFS
Suresh Srinivas created HDFS-6055:
-------------------------------------
Summary: Change default configuration to limit file name length in HDFS
Key: HDFS-6055
URL: https://issues.apache.org/jira/browse/HDFS-6055
Project: Hadoop HDFS
Issue Type: Improvement
Components: namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Currently the configuration "dfs.namenode.fs-limits.max-component-length" defaults to 0, which means HDFS file names have no length limit. However, we see more and more users run into issues where they copy files from HDFS to another file system and the copy fails because a file name is too long.
I propose changing the default value of "dfs.namenode.fs-limits.max-component-length" to a reasonable limit. This will be an incompatible change; however, users who need long file names can override this configuration to turn off the length limit.
What do folks think?
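For illustration, overriding the limit would look like the snippet below in hdfs-site.xml. The property name is the one discussed above; the value 255 is only an illustrative choice (it matches the per-component name limit of many local file systems), not a value this proposal has settled on. Setting it back to 0 disables the check.

```xml
<!-- Illustrative hdfs-site.xml fragment; 255 is an example value, not the proposed default. -->
<property>
  <name>dfs.namenode.fs-limits.max-component-length</name>
  <value>255</value>
  <description>Maximum length of each path component in bytes.
  A value of 0 disables the limit.</description>
</property>
```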
--
This message was sent by Atlassian JIRA
(v6.2#6252)