Posted to common-user@hadoop.apache.org by Marko Dinic <ma...@nissatech.com> on 2015/05/21 12:51:18 UTC

Could not find any valid local directory for jobcache EXCEPTION

I'm new to Hadoop and I'm getting the following exception when I try to 
run my job on a Hadoop cluster:

org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
any valid local directory for jobcache/job_201409031055_3865/jars/job.jar
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:376)
     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
     at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:268)
     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:380)
     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:370)
     at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:232)
     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1381)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java

Can anyone please tell me what seems to be the problem?

Best regards,
Marko

Re: Could not find any valid local directory for jobcache EXCEPTION

Posted by ma...@nissatech.com.
Chris,

Thank you very much, it helps a lot. I had a feeling it was  
something like that.

I wish you all the best,
Marko

Quoting Chris Nauroth <cn...@hortonworks.com>:

> Based on this stack trace, I'm guessing that you're running a 1.x version
> of Hadoop.
>
> The TaskTracker uses a set of local directories on the node for storage of
> submitted job files during the task's execution.  This is configured in
> mapred-site.xml in the property named mapred.job.local.dir.  The
> DiskErrorException means that even after trying all directories configured
> in mapred.job.local.dir, the TaskTracker couldn't find a place to store
> the files.  Possible root causes are misconfiguration, permissions on the
> local directories blocking access, disks are full, or disks have failed
> and gone into read-only mode.
>
> I hope this helps.
>
> --Chris Nauroth
>
>
>
>
> On 5/21/15, 3:51 AM, "Marko Dinic" <ma...@nissatech.com> wrote:
>
>> I'm new to Hadoop and I'm getting the following exception when I try to
>> run my job on a Hadoop cluster:
>>
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
>> any valid local directory for jobcache/job_201409031055_3865/jars/job.jar
>>     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:376)
>>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
>>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
>>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:268)
>>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:380)
>>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:370)
>>     at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:232)
>>     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1381)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java
>>
>> Can anyone please tell me what seems to be the problem?
>>
>> Best regards,
>> Marko




Re: Could not find any valid local directory for jobcache EXCEPTION

Posted by Chris Nauroth <cn...@hortonworks.com>.
Based on this stack trace, I'm guessing that you're running a 1.x version
of Hadoop.

The TaskTracker uses a set of local directories on the node for storage of
submitted job files during the task's execution.  This is configured in
mapred-site.xml in the property named mapred.job.local.dir.  The
DiskErrorException means that even after trying all directories configured
in mapred.job.local.dir, the TaskTracker couldn't find a place to store
the files.  Possible root causes are misconfiguration, permissions on the
local directories blocking access, disks are full, or disks have failed
and gone into read-only mode.
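
[Editor's note: the root causes above can be checked with a quick per-node
script like the sketch below. The directory list is a placeholder; substitute
the directories your mapred-site.xml actually configures. In Hadoop 1.x the
user-settable property is typically mapred.local.dir, from which the per-job
jobcache paths are derived.]

```shell
# Hypothetical directory list -- substitute the comma-separated value of
# mapred.local.dir from your node's mapred-site.xml.
DIRS="/data/1/mapred/local /data/2/mapred/local"

# Report, for each directory, which of the root causes above applies:
# missing (misconfiguration), not writable (permissions / read-only disk),
# or writable with its remaining free space (to spot full disks).
check_dir() {
  d="$1"
  if [ ! -d "$d" ]; then
    echo "$d: missing (misconfiguration?)"
  elif [ ! -w "$d" ]; then
    echo "$d: not writable (permissions or read-only disk?)"
  else
    echo "$d: writable, $(df -Pk "$d" | awk 'NR==2 {print $4}') KB free"
  fi
}

for d in $DIRS; do
  check_dir "$d"
done
```

Running this on each TaskTracker node narrows the failure down to a specific
directory and cause before touching the configuration.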

I hope this helps.

--Chris Nauroth




On 5/21/15, 3:51 AM, "Marko Dinic" <ma...@nissatech.com> wrote:

>I'm new to Hadoop and I'm getting the following exception when I try to
>run my job on a Hadoop cluster:
>
>org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
>any valid local directory for jobcache/job_201409031055_3865/jars/job.jar
>     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:376)
>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:268)
>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:380)
>     at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:370)
>     at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:232)
>     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1381)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java
>
>Can anyone please tell me what seems to be the problem?
>
>Best regards,
>Marko

