Posted to hdfs-dev@hadoop.apache.org by "Colin Patrick McCabe (JIRA)" <ji...@apache.org> on 2016/01/22 20:24:39 UTC

[jira] [Resolved] (HDFS-8949) hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB

     [ https://issues.apache.org/jira/browse/HDFS-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe resolved HDFS-8949.
----------------------------------------
    Resolution: Duplicate

Duplicate of HDFS-9541

> hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB
> -------------------------------------------------------------------------
>
>                 Key: HDFS-8949
>                 URL: https://issues.apache.org/jira/browse/HDFS-8949
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Vlad Berindei
>
> hdfsOpenFile() has an int32 blocksize parameter, which restricts block sizes to at most 2GB, while FileSystem.create accepts a long (64-bit) blockSize parameter; see the sketch after the links below.
> https://github.com/apache/hadoop/blob/c1d50a91f7c05e4aaf4655380c8dcd11703ff158/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h#L395 - int32 blocksize
> https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path, boolean, int, short, long) - long blockSize
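> For illustration, a minimal sketch of the narrowing (assuming the hadoop-2.7.x hdfs.h, where tSize is typedef'd to int32_t; the main() here is hypothetical and only demonstrates the cast, it does not call a real cluster):
>
>     #include <stdint.h>
>     #include <stdio.h>
>
>     typedef int32_t tSize;   /* as declared in libhdfs hdfs.h */
>
>     int main(void) {
>         /* 4 GB is a valid long blockSize for FileSystem.create ... */
>         int64_t desired = 4LL * 1024 * 1024 * 1024;
>         /* ... but it cannot survive the int32 parameter of hdfsOpenFile():
>            the low 32 bits of 2^32 are all zero, so the value becomes 0 */
>         tSize narrowed = (tSize) desired;
>         printf("requested %lld bytes, after int32 narrowing: %d bytes\n",
>                (long long) desired, (int) narrowed);
>         return 0;
>     }
>
> Widening the parameter (e.g. to tOffset/int64_t, as done for the file-size APIs) would let the C API express the same range as the Java FileSystem.create signature.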



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)