Posted to mapreduce-user@hadoop.apache.org by "S.L" <si...@gmail.com> on 2014/05/02 14:43:44 UTC

Random Exception

Hi All,

I get this exception after I resubmit my failed MapReduce job; can someone
please let me know what this exception means?

14/05/02 01:28:25 INFO mapreduce.Job: Task Id :
attempt_1398989569957_0021_m_000000_0, Status : FAILED
Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not
find any valid local directory for
attempt_1398989569957_0021_m_000000_0/intermediate.26
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
        at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
        at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
        at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
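
For readers hitting the same error: the DiskErrorException above comes from
LocalDirAllocator, which scans the comma-separated directories configured in
mapreduce.cluster.local.dir and fails when none of them exists, is writable,
and has enough free space for the intermediate merge file. A minimal sketch
of the failing check, assuming a Hadoop 2.x client with hadoop-common on the
classpath (the probe path and the 1 GB size are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.LocalDirAllocator;
    import org.apache.hadoop.fs.Path;

    public class LocalDirProbe {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The same property the map-side merge allocates scratch space against.
            LocalDirAllocator alloc =
                new LocalDirAllocator("mapreduce.cluster.local.dir");
            // Throws DiskChecker$DiskErrorException ("Could not find any valid
            // local directory ...") when no configured directory can hold 1 GB.
            Path p = alloc.getLocalPathForWrite("scratch/probe", 1L << 30, conf);
            System.out.println("would write to: " + p);
        }
    }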

Re: Random Exception

Posted by "S.L" <si...@gmail.com>.
On Fri, May 2, 2014 at 12:20 PM, S.L <si...@gmail.com> wrote:

> I am using Hadoop 2.3. The problem is that my disk runs out of space
> (80 GB), and then I reboot my machine, which causes my /tmp data to be
> deleted and frees up space. I then resubmit the job, assuming that since
> the namenode and datanode data are not stored in /tmp, everything should
> be OK (I have set the namenode and datanode data to be stored at a
> different location than /tmp).
>
> By the way, this is a single-node (pseudo-distributed) cluster setup.
>
>
> On Fri, May 2, 2014 at 9:02 AM, Marcos Ortiz <ml...@uci.cu> wrote:
>
>> It seems that your Hadoop data directory is broken or your disk has
>> problems.
>> Which version of Hadoop are you using?
>>
>> On Friday, May 02, 2014 08:43:44 AM S.L wrote:
>> > Hi All,
>> >
>> > I get this exception after I resubmit my failed MapReduce job; can
>> > someone please let me know what this exception means?
>> >
>> > 14/05/02 01:28:25 INFO mapreduce.Job: Task Id :
>> > attempt_1398989569957_0021_m_000000_0, Status : FAILED
>> > Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not
>> > find any valid local directory for
>> > attempt_1398989569957_0021_m_000000_0/intermediate.26
>> >         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
>> >         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>> >         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>> >         at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
>> >         at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
>> >         at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
>> >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
>> >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
>> >         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
>> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>> >         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> >         at java.security.AccessController.doPrivileged(Native Method)
>> >         at javax.security.auth.Subject.doAs(Unknown Source)
>> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>
>
>
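
The quoted explanation above points at the likely cause: in Hadoop 2.x,
hadoop.tmp.dir defaults to /tmp/hadoop-${user.name} and
mapreduce.cluster.local.dir defaults to ${hadoop.tmp.dir}/mapred/local, so
relocating only the namenode and datanode directories still leaves the
intermediate map output under /tmp, which the reboot wiped. A small
diagnostic sketch, assuming those 2.x defaults (the fallback path below is
illustrative), that prints whether each configured local dir exists and how
much space it has:

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;

    public class LocalDirSpace {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Fall back to the Hadoop 2.x default layout if the property is unset.
            String[] dirs = conf.getTrimmedStrings(
                "mapreduce.cluster.local.dir",
                "/tmp/hadoop-" + System.getProperty("user.name") + "/mapred/local");
            for (String d : dirs) {
                File f = new File(d);
                System.out.printf("%s exists=%b writable=%b free=%.1f GB%n",
                    d, f.exists(), f.canWrite(), f.getUsableSpace() / 1e9);
            }
        }
    }

If the directory is missing after a reboot, pointing
mapreduce.cluster.local.dir (or hadoop.tmp.dir) at a persistent volume in
mapred-site.xml / core-site.xml and restarting the daemons should let
resubmitted jobs run; recreating the directory by hand also works until the
next reboot.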

Re: Random Exception

Posted by Marcos Ortiz <ml...@uci.cu>.
It seems that your Hadoop data directory is broken or your disk has problems.
Which version of Hadoop are you using?

On Friday, May 02, 2014 08:43:44 AM S.L wrote:
> Hi All,
> 
> I get this exception after I resubmit my failed MapReduce job; can someone
> please let me know what this exception means?
> 
> 14/05/02 01:28:25 INFO mapreduce.Job: Task Id :
> attempt_1398989569957_0021_m_000000_0, Status : FAILED
> Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not
> find any valid local directory for
> attempt_1398989569957_0021_m_000000_0/intermediate.26
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
>         at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Unknown Source)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
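
To act on this suggestion, Hadoop ships its own directory health check, the
same org.apache.hadoop.util.DiskChecker class that raised the exception
above. A hedged sketch, assuming hadoop-common on the classpath and the
suspect directories passed as arguments (for example, the paths configured
in dfs.namenode.name.dir and dfs.datanode.data.dir):

    import java.io.File;
    import org.apache.hadoop.util.DiskChecker;

    public class DataDirCheck {
        public static void main(String[] args) throws Exception {
            for (String d : args) {
                // Verifies the directory exists (creating it if missing) and is
                // readable, writable, and executable; throws DiskErrorException
                // otherwise.
                DiskChecker.checkDir(new File(d));
                System.out.println(d + ": OK");
            }
        }
    }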