Posted to common-user@hadoop.apache.org by Mark <st...@gmail.com> on 2010/08/26 18:30:38 UTC

Writing to an existing directory

  Exception in thread "main" org.apache.hadoop.fs.FileAlreadyExistsException:
  Output directory playground/output already exists

Is there any way to force writing to an existing directory? It's quite
annoying to keep specifying a separate output directory on each run,
especially when my task fails.

Thanks

Re: Writing to an existing directory

Posted by Mark <st...@gmail.com>.
  On 8/26/10 9:54 AM, Harsh J wrote:
> Well for learning purposes you can delete the output directory before
> you submit the job. FileSystem.get(conf).delete(outputPath) I believe
> is the snippet for that.
>
> But again, _strictly_ for learning purposes ONLY. Never do this as
> your default outputs are always named as part-{r-}00000 onwards and
> you will lose them.
>
> Another tip: try not to fail; mapper/reducer logic generally isn't so
> big that you can't review it before submitting :)
>
> On Thu, Aug 26, 2010 at 10:00 PM, Mark<st...@gmail.com>  wrote:
>>   Exception in thread "main" org.apache.hadoop.fs.FileAlreadyExistsException:
>>   Output directory playground/output already exists
>>
>> Is there any way to force writing to an existing directory? It's quite
>> annoying to keep specifying a separate output directory on each run,
>> especially when my task fails.
>>
>> Thanks
>>
>
>
Thanks. This is, and will be, for learning purposes... hence the mistakes :)

Re: Writing to an existing directory

Posted by Harsh J <qw...@gmail.com>.
Well for learning purposes you can delete the output directory before
you submit the job. FileSystem.get(conf).delete(outputPath) I believe
is the snippet for that.

But again, _strictly_ for learning purposes ONLY. Never do this as
your default outputs are always named as part-{r-}00000 onwards and
you will lose them.

Another tip: try not to fail; mapper/reducer logic generally isn't so
big that you can't review it before submitting :)
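
For reference, a minimal sketch of that delete-before-submit approach as a
small driver helper (the class and method names here are just illustrative,
and the path is the one from the exception above; only sensible while
learning, for the reasons given):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OutputCleaner {
        // Deletes the job's output directory if it already exists, so a
        // re-run does not fail with FileAlreadyExistsException.
        public static void clearOutput(Configuration conf, Path outputPath)
                throws IOException {
            FileSystem fs = FileSystem.get(conf);
            if (fs.exists(outputPath)) {
                fs.delete(outputPath, true); // true = recursive delete
            }
        }
    }

Called from the job driver before submission, e.g.
OutputCleaner.clearOutput(conf, new Path("playground/output")), with the same
path then passed to FileOutputFormat.setOutputPath as usual.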

On Thu, Aug 26, 2010 at 10:00 PM, Mark <st...@gmail.com> wrote:
>   Exception in thread "main" org.apache.hadoop.fs.FileAlreadyExistsException:
>   Output directory playground/output already exists
>
> Is there any way to force writing to an existing directory? It's quite
> annoying to keep specifying a separate output directory on each run,
> especially when my task fails.
>
> Thanks
>



-- 
Harsh J
www.harshj.com