Posted to common-issues@hadoop.apache.org by "Hadoop QA (Commented) (JIRA)" <ji...@apache.org> on 2012/02/09 08:32:59 UTC

[jira] [Commented] (HADOOP-8045) org.apache.hadoop.mapreduce.lib.output.MultipleOutputs does not handle many files well

    [ https://issues.apache.org/jira/browse/HADOOP-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13204331#comment-13204331 ] 

Hadoop QA commented on HADOOP-8045:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513918/hadoop-multiple-outputs.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/577//console

This message is automatically generated.
                
> org.apache.hadoop.mapreduce.lib.output.MultipleOutputs does not handle many files well
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-8045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8045
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.21.0, 1.0.0
>         Environment: Cloudera CDH3 release.
>            Reporter: Tarjei Huse
>              Labels: patch
>         Attachments: hadoop-multiple-outputs.patch
>
>
> We were trying to use MultipleOutputs to write one file per key. This produced the following exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/me/part6/_temporary/_attempt_201202071305_0017_r_000000_2/2011-11-18-22-attempt_201202071305_0017_r_000000_2-r-00000
> could only be replicated to 0 nodes, instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1520)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:665)
>     at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
> The error appeared once the number of files processed grew beyond 20 on a single developer system.
> The workaround proved to be to close each RecordWriter as soon as the reducer was finished with a key, which required extending MultipleOutputs to fetch the RecordWriter; not a good solution.
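>
> For illustration, the usage pattern that hits this looks roughly like the sketch below. It is a simplified, hypothetical example (class and member names are invented, and it is not taken from the attached patch); it mainly shows why the writers pile up: the stock class only exposes close() for all writers at once, typically called from cleanup().
>
>   // Illustrative reducer writing one output file per key via MultipleOutputs.
>   // Names here are hypothetical, not from the attached patch.
>   import java.io.IOException;
>   import org.apache.hadoop.io.Text;
>   import org.apache.hadoop.mapreduce.Reducer;
>   import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
>
>   public class PerKeyFileReducer extends Reducer<Text, Text, Text, Text> {
>
>     private MultipleOutputs<Text, Text> mos;
>
>     @Override
>     protected void setup(Context context) {
>       mos = new MultipleOutputs<Text, Text>(context);
>     }
>
>     @Override
>     protected void reduce(Text key, Iterable<Text> values, Context context)
>         throws IOException, InterruptedException {
>       for (Text value : values) {
>         // MultipleOutputs lazily opens, and keeps open, one RecordWriter per
>         // distinct baseOutputPath, so each new key adds another open HDFS stream.
>         mos.write(key, value, key.toString());
>       }
>       // The workaround described above would close this key's RecordWriter here,
>       // but the stock API does not expose the per-path writer, hence the need
>       // to extend MultipleOutputs.
>     }
>
>     @Override
>     protected void cleanup(Context context)
>         throws IOException, InterruptedException {
>       mos.close();  // only public close: shuts all writers opened by this task
>     }
>   }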

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira