Posted to hdfs-dev@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2013/10/12 01:14:41 UTC
[jira] [Resolved] (HDFS-5348) Fix error message when dfs.datanode.max.locked.memory is improperly configured
[ https://issues.apache.org/jira/browse/HDFS-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Wang resolved HDFS-5348.
-------------------------------
Resolution: Fixed
Fix Version/s: HDFS-4949
Hadoop Flags: Reviewed
Committed to branch.
> Fix error message when dfs.datanode.max.locked.memory is improperly configured
> ------------------------------------------------------------------------------
>
> Key: HDFS-5348
> URL: https://issues.apache.org/jira/browse/HDFS-5348
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Affects Versions: HDFS-4949
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Fix For: HDFS-4949
>
> Attachments: HDFS-5348-caching.001.patch
>
>
> We need to fix the error message that is printed when dfs.datanode.max.locked.memory is improperly configured. Currently it says the configured size is "less than the datanode's available RLIMIT_MEMLOCK limit" when it really means "more".
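> A minimal sketch of the intended check (not the actual HDFS-5348 patch; getMemlockLimitBytes() and the placeholder values are hypothetical stand-ins for however the DataNode obtains its RLIMIT_MEMLOCK value), showing the corrected wording when the configured value exceeds the ulimit:
>
>   public class MemlockConfigCheck {
>     // Hypothetical helper standing in for however the process's
>     // RLIMIT_MEMLOCK value is obtained (e.g. via native code).
>     static long getMemlockLimitBytes() {
>       return 64 * 1024; // placeholder: 64 KiB, a common default for ulimit -l
>     }
>
>     // Fail startup if dfs.datanode.max.locked.memory exceeds the
>     // memlock ulimit, and say "more than", not "less than".
>     static void checkMaxLockedMemory(long configuredBytes) {
>       long ulimitBytes = getMemlockLimitBytes();
>       if (configuredBytes > ulimitBytes) {
>         throw new RuntimeException(
>             "Cannot start datanode because the configured max locked memory"
>             + " size (dfs.datanode.max.locked.memory) of " + configuredBytes
>             + " bytes is more than the datanode's available RLIMIT_MEMLOCK"
>             + " ulimit of " + ulimitBytes + " bytes.");
>       }
>     }
>
>     public static void main(String[] args) {
>       // 128 MB configured vs. 64 KiB ulimit -> error message uses "more than"
>       checkMaxLockedMemory(128L * 1024 * 1024);
>     }
>   }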
--
This message was sent by Atlassian JIRA
(v6.1#6144)