Posted to mapreduce-issues@hadoop.apache.org by "sunil ranjan khuntia (JIRA)" <ji...@apache.org> on 2014/02/07 04:22:19 UTC

[jira] [Commented] (MAPREDUCE-5735) MultipleOutputs of Hadoop not working properly with S3 filesystem

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894160#comment-13894160 ] 

sunil ranjan khuntia commented on MAPREDUCE-5735:
-------------------------------------------------

Steve,
As you suggested, I have tried it on Hadoop 2.2.0, and it is still the same: I am not getting the output.

> MultipleOutputs of Hadoop not working properly with S3 filesystem
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-5735
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5735
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: sunil ranjan khuntia
>            Priority: Minor
>
> I have written a MapReduce job and used the MultipleOutputs (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs) class to put the resulting file under a specific user-defined directory path (instead of getting the output file part-r-00000, I want dir1/dir2/dir3/d-r-00000; see the sketch below). This works fine on HDFS.
> But when I run the same MapReduce job with the S3 filesystem, the user-defined directory structure is not created in S3. Is it that MultipleOutputs is not supported on S3? If so, is there an alternative way to customize my MapReduce output file directory path in S3?
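> A minimal sketch of the kind of reducer described above (the class name, key/value types, and the base path "dir1/dir2/dir3/d" are illustrative assumptions, not taken from the actual job):
>
> import java.io.IOException;
>
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Reducer;
> import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
>
> // Illustrative reducer: writes each result under a nested base path via MultipleOutputs.
> public class NestedPathReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
>     private MultipleOutputs<Text, LongWritable> mos;
>
>     @Override
>     protected void setup(Context context) {
>         mos = new MultipleOutputs<Text, LongWritable>(context);
>     }
>
>     @Override
>     protected void reduce(Text key, Iterable<LongWritable> values, Context context)
>             throws IOException, InterruptedException {
>         long sum = 0;
>         for (LongWritable v : values) {
>             sum += v.get();
>         }
>         // With a baseOutputPath that contains directories, output is expected at
>         // <job output dir>/dir1/dir2/dir3/d-r-00000 instead of part-r-00000.
>         mos.write(key, new LongWritable(sum), "dir1/dir2/dir3/d");
>     }
>
>     @Override
>     protected void cleanup(Context context) throws IOException, InterruptedException {
>         mos.close();  // flush and close all MultipleOutputs streams
>     }
> }
>
> On HDFS this produces the nested directories as expected; against the S3 filesystem it does not, which is the behaviour reported here.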



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)