Posted to common-dev@hadoop.apache.org by "Dan Hecht (JIRA)" <ji...@apache.org> on 2015/02/11 21:15:11 UTC
[jira] [Created] (HADOOP-11584) s3a file block size set to 0
Dan Hecht created HADOOP-11584:
----------------------------------
Summary: s3a file block size set to 0
Key: HADOOP-11584
URL: https://issues.apache.org/jira/browse/HADOOP-11584
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3
Affects Versions: 2.6.0
Reporter: Dan Hecht
The consequence is that MapReduce is probably not splitting s3a files in the expected way. This is similar to HADOOP-5861 (which was for s3n, though s3n passed 5G rather than 0 for the block size).
FileInputFormat.getSplits() relies on the FileStatus block size being set:
{code}
if (isSplitable(job, path)) {
  long blockSize = file.getBlockSize();
  long splitSize = computeSplitSize(blockSize, minSize, maxSize);
{code}
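To see why a zero block size matters, note that FileInputFormat.computeSplitSize() is essentially Math.max(minSize, Math.min(maxSize, blockSize)). The standalone sketch below reproduces that formula; the default bounds used here (an effective minimum of 1 byte and a maximum of Long.MAX_VALUE) are assumptions for the demo, not values read from any job configuration:

```java
// Standalone illustration of the split-size computation used by
// FileInputFormat.getSplits(). With blockSize == 0, the split size
// collapses to minSize instead of tracking a sensible block size.
public class SplitSizeDemo {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long minSize = 1L;             // assumed effective default lower bound
        long maxSize = Long.MAX_VALUE; // assumed default upper bound

        // Healthy case: an HDFS-style 128 MB block size drives the split size.
        System.out.println(computeSplitSize(128L << 20, minSize, maxSize));

        // s3a case: blockSize 0 collapses the split size to minSize.
        System.out.println(computeSplitSize(0L, minSize, maxSize));
    }
}
```

So with s3a's block size of 0, the split size degenerates to the configured minimum rather than anything proportional to the data.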
However, S3AFileSystem does not set the FileStatus block size field. From S3AFileStatus.java:
{code}
// Files
public S3AFileStatus(long length, long modification_time, Path path) {
  super(length, false, 1, 0, modification_time, path);
  isEmptyDirectory = false;
}
{code}
I think it should use S3AFileSystem.getDefaultBlockSize() for each file's block size (where it's currently passing 0).
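A rough sketch of what that change could look like, using minimal stand-in classes rather than the real Hadoop types (the constructor shape and the 32 MB default below are illustrative assumptions, not the actual fix):

```java
// Hedged sketch: thread the filesystem's default block size through to
// the FileStatus superclass instead of hard-coding 0. FileStatusStub is
// a minimal stand-in for org.apache.hadoop.fs.FileStatus.
public class BlockSizeFixSketch {
    static class FileStatusStub {
        final long length;
        final long blockSize;
        FileStatusStub(long length, long blockSize) {
            this.length = length;
            this.blockSize = blockSize;
        }
        long getBlockSize() { return blockSize; }
    }

    // Before: block size hard-coded to 0, as in the S3AFileStatus snippet above.
    static FileStatusStub current(long length) {
        return new FileStatusStub(length, 0L);
    }

    // After: the caller passes in something like fs.getDefaultBlockSize().
    static FileStatusStub proposed(long length, long defaultBlockSize) {
        return new FileStatusStub(length, defaultBlockSize);
    }

    public static void main(String[] args) {
        // 32 MB is only a placeholder for whatever the configured default is.
        System.out.println(current(1L << 30).getBlockSize());
        System.out.println(proposed(1L << 30, 32L << 20).getBlockSize());
    }
}
```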
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)