Posted to hdfs-dev@hadoop.apache.org by "Vlad Berindei (JIRA)" <ji...@apache.org> on 2015/08/24 20:55:46 UTC

[jira] [Created] (HDFS-8949) hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB

Vlad Berindei created HDFS-8949:
-----------------------------------

             Summary: hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB
                 Key: HDFS-8949
                 URL: https://issues.apache.org/jira/browse/HDFS-8949
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Vlad Berindei


hdfsOpenFile() takes its blocksize as an int32 parameter (tSize in hdfs.h), which caps block sizes at 2GB, while the underlying FileSystem.create accepts a long blockSize parameter.

https://github.com/apache/hadoop/blob/c1d50a91f7c05e4aaf4655380c8dcd11703ff158/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h#L395 - int32 blocksize

https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path, boolean, int, short, long) - long blockSize
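To make the overflow concrete, here is a minimal standalone sketch (it does not link against libhdfs; the tSize typedef is copied from hdfs.h, where it is defined as int32_t) of what happens when a caller passes a 4GB block size through the int32 parameter:

    #include <stdint.h>
    #include <stdio.h>

    /* Copied from libhdfs's hdfs.h, where tSize is the type of
     * hdfsOpenFile()'s blocksize argument. */
    typedef int32_t tSize;

    int main(void)
    {
        /* 4GB, a value FileSystem.create's long blockSize can hold */
        int64_t requested = 4LL * 1024 * 1024 * 1024;
        /* the narrowing conversion hdfsOpenFile() forces on callers */
        tSize blocksize = (tSize)requested;

        printf("requested block size:     %lld\n", (long long)requested);
        printf("after narrowing to tSize: %d\n", blocksize);
        /* Prints 0: the low 32 bits of 4GB are all zero, so the
         * intended block size is silently lost before it reaches HDFS. */
        return 0;
    }

Widening the parameter likely cannot be done by changing tSize itself, since tSize is also used for read/write lengths elsewhere in the C API (e.g. hdfsRead/hdfsWrite); supporting 64-bit block sizes would presumably need a new or changed hdfsOpenFile() signature.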

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)