Posted to common-dev@hadoop.apache.org by "Joydeep Sen Sarma (JIRA)" <ji...@apache.org> on 2009/05/17 06:33:45 UTC
[jira] Created: (HADOOP-5861) s3n files are not getting split by default
s3n files are not getting split by default
-------------------------------------------
Key: HADOOP-5861
URL: https://issues.apache.org/jira/browse/HADOOP-5861
Project: Hadoop Core
Issue Type: Bug
Components: fs/s3
Affects Versions: 0.19.1
Environment: ec2
Reporter: Joydeep Sen Sarma
Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
Can we make S3N files report a more reasonable block size?
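For reference, here is a minimal, self-contained sketch of the split arithmetic (mirroring FileInputFormat.computeSplitSize, which returns max(minSize, min(goalSize, blockSize))), plugged with the numbers above. The mapred.map.tasks value of 2 and the 1-byte minimum split size are assumptions for illustration, not values taken from the actual job:

public class SplitSizeDemo {

  // Mirrors FileInputFormat.computeSplitSize:
  // max(minSize, min(goalSize, blockSize))
  static long computeSplitSize(long goalSize, long minSize, long blockSize) {
    return Math.max(minSize, Math.min(goalSize, blockSize));
  }

  public static void main(String[] args) {
    long gb = 1024L * 1024 * 1024;
    long totalSize = 4 * 2 * gb;   // four text files of ~2 GB each
    long goalSize = totalSize / 2; // totalSize / mapred.map.tasks (assumed 2) = 4 GB
    long minSize = 1;              // assumed default minimum split size
    long blockSize = 5 * gb;       // block size reported by S3N

    long splitSize = computeSplitSize(goalSize, minSize, blockSize);
    // splitSize comes out at 4 GB; each 2 GB file is smaller than that,
    // so every file becomes a single split and only 4 mappers run.
    System.out.println("split size = " + (splitSize / gb) + " GB");
  }
}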
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Assignee: Tom White
Status: Patch Available (was: Open)
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Attachment: hadoop-5861.patch
Here's a patch which makes the default block size 64 MB (configured using fs.s3n.block.size). It also fixes HADOOP-5804, which is related.
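The patch itself is attached rather than inlined, but a hedged sketch of the shape of the change (the class and method names below are illustrative, not lifted from the patch): the native S3 filesystem reads the block size it reports from the configuration, falling back to 64 MB, instead of hard-coding a value.

import org.apache.hadoop.conf.Configuration;

class S3nBlockSizeSketch {
  // Illustrative only: the block size to report for s3n files, read from
  // fs.s3n.block.size with a 64 MB default, for use when building the
  // FileStatus objects the filesystem returns.
  static long reportedBlockSize(Configuration conf) {
    return conf.getLong("fs.s3n.block.size", 64 * 1024 * 1024);
  }
}

With a 64 MB reported block size, a 2 GB s3n file should yield around 32 splits instead of one.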
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Attachments: hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Resolution: Fixed
Fix Version/s: 0.21.0
Release Note: Files stored on the native S3 filesystem (s3n:// URIs) now report a block size determined by the fs.s3n.block.size property (default 64 MB).
Hadoop Flags: [Incompatible change, Reviewed]
Status: Resolved (was: Patch Available)
I've just committed this.
(The contrib test failures were not related.)
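For anyone relying on the old behaviour (this is flagged as an incompatible change), the reported block size can be raised per job. A hedged usage example; the 128 MB figure is arbitrary:

import org.apache.hadoop.conf.Configuration;

public class OverrideS3nBlockSize {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Report 128 MB blocks for s3n:// files instead of the new 64 MB
    // default, halving the number of splits for large inputs.
    conf.setLong("fs.s3n.block.size", 128 * 1024 * 1024);
    System.out.println(conf.getLong("fs.s3n.block.size", 0));
  }
}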
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Fix For: 0.21.0
>
> Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Status: Patch Available (was: Open)
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5861) s3n files are not getting split by default
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714617#action_12714617 ]
Hadoop QA commented on HADOOP-5861:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12409188/hadoop-5861.patch
against trunk revision 780114.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 9 new or modified tests.
-1 patch. The patch command could not apply the patch.
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/428/console
This message is automatically generated.
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Attachment: hadoop-5861-v2.patch
Regenerating patch for trunk.
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5861) s3n files are not getting split by default
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715307#action_12715307 ]
Hadoop QA commented on HADOOP-5861:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12409550/hadoop-5861-v2.patch
against trunk revision 780777.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 8 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/console
This message is automatically generated.
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5861) s3n files are not getting split by default
Posted by "Joydeep Sen Sarma (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713768#action_12713768 ]
Joydeep Sen Sarma commented on HADOOP-5861:
-------------------------------------------
Looks good to me.
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5861) s3n files are not getting split by default
Posted by "Tom White (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White updated HADOOP-5861:
------------------------------
Status: Open (was: Patch Available)
> s3n files are not getting split by default
> -------------------------------------------
>
> Key: HADOOP-5861
> URL: https://issues.apache.org/jira/browse/HADOOP-5861
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.19.1
> Environment: ec2
> Reporter: Joydeep Sen Sarma
> Assignee: Tom White
> Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).
> The reason seems to have two parts, the primary one being that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (total size / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and splits end up based on the block size rather than the goal size.
> Can we make S3N files report a more reasonable block size?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.