Posted to hdfs-dev@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2014/03/24 21:46:48 UTC
[jira] [Created] (HDFS-6151) HDFS should refuse to cache blocks >=2GB
Andrew Wang created HDFS-6151:
---------------------------------
Summary: HDFS should refuse to cache blocks >=2GB
Key: HDFS-6151
URL: https://issues.apache.org/jira/browse/HDFS-6151
Project: Hadoop HDFS
Issue Type: Bug
Components: caching, datanode
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang
If you try to cache a block that's >=2GB, the DN will silently fail to cache it, since {{MappedByteBuffer}} represents its size with a signed int. Blocks this large are rare, but we should log the failure or otherwise alert the user.
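A minimal sketch of the kind of guard being proposed (this is illustrative, not the actual DN patch; the method name {{canCacheBlock}} is hypothetical). It shows why the 2GB boundary exists: {{FileChannel.map()}} rejects any size above {{Integer.MAX_VALUE}} because a {{MappedByteBuffer}} capacity is a signed 32-bit int, so a size check with an explicit log message is needed before attempting to mmap:

```java
public class CacheSizeCheck {
    // MappedByteBuffer capacity is a signed int, so FileChannel.map()
    // throws IllegalArgumentException for any size > Integer.MAX_VALUE.
    static final long MAX_MAPPABLE = Integer.MAX_VALUE; // 2^31 - 1 bytes

    /** Hypothetical guard: refuse blocks too large to mmap, and say why. */
    static boolean canCacheBlock(long blockLengthBytes) {
        if (blockLengthBytes > MAX_MAPPABLE) {
            // The point of HDFS-6151: surface this instead of failing silently.
            System.err.println("Refusing to cache block of " + blockLengthBytes
                + " bytes: exceeds mmap limit of " + MAX_MAPPABLE + " bytes");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canCacheBlock(1L << 30));                // 1 GiB: true
        System.out.println(canCacheBlock(2L * 1024 * 1024 * 1024)); // 2 GiB: false
    }
}
```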
--
This message was sent by Atlassian JIRA
(v6.2#6252)