Posted to common-user@hadoop.apache.org by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com> on 2014/05/22 09:04:29 UTC

HDFS Quota Error

Hi

When I run a query in Hive, I get the exception below.  I noticed the error "No space left on device".

Then I ran "hadoop fs -count -q /var/local/hadoop", which gave the following output:

none             inf            none             inf           69          275          288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota?  Does this mean the space is unlimited, or is there any space left?
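For what it's worth, in Hadoop 1.x the columns of `hadoop fs -count -q` are QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME, and "none"/"inf" mean no quota is set on the directory (unlimited), not that space has run out. A small sketch that labels the line above (the parse_count_q helper is hypothetical, not part of Hadoop):

```python
# Label one line of `hadoop fs -count -q` output by column name.
def parse_count_q(line: str) -> dict:
    keys = ["quota", "remaining_quota", "space_quota",
            "remaining_space_quota", "dir_count", "file_count",
            "content_size", "pathname"]
    return dict(zip(keys, line.split()))

row = parse_count_q(
    "none inf none inf 69 275 288034318 "
    "hdfs://nnode:54310/var/local/hadoop")
assert row["space_quota"] == "none"          # no space quota set
assert row["remaining_space_quota"] == "inf" # so "remaining" is unlimited
assert row["file_count"] == "275"
```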


I tried "hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop" -- I am not sure whether 100G is correct.  How much do I need to set, and how do I calculate this?

After setting 100G, I get the output below for "hadoop fs -count -q /var/local/hadoop":

none             inf    107374182400    104408308039           73          286          297777777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not....
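For the record, the numbers are self-consistent: `-setSpaceQuota 100G` means 100 GiB expressed in bytes, and (assuming the usual HDFS accounting, where the space quota is charged in raw bytes including replication) the fourth column is simply the quota minus the bytes consumed. A quick arithmetic sketch:

```python
# 100G in -setSpaceQuota is 100 GiB, i.e. 100 * 2**30 bytes.
GIB = 1024 ** 3
quota = 100 * GIB
assert quota == 107374182400   # matches the third column of the output

# Remaining space quota reported by -count -q after setting 100G:
remaining = 104408308039
consumed = quota - remaining   # bytes charged against the quota so far
assert consumed == 2965874361  # roughly 2.76 GiB already consumed
```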


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"





RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Thanks for your reply.  We have more than 50% of the disk space free.

Just FYI: this is not a physical machine; it's a VMware virtual machine.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Aitor Perez Cedres [mailto:aperez@pragsis.com]
Sent: Thursday, May 22, 2014 1:04 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error


Maybe you are out of space on a local disk? That location [1] looks like the local dir where MR places some intermediate files. Can you check the output of df -h in a shell?


[1] /var/local/hadoop/cache/mapred/local
On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:





--
Aitor Pérez
Big Data System Engineer

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

http://www.bidoop.es


Re: HDFS Quota Error

Posted by Aitor Perez Cedres <ap...@pragsis.com>.
Maybe you are out of space on a local disk? That location [1] looks like
the local dir where MR places some intermediate files. Can you check the
output of df -h in a shell?


[1] /var/local/hadoop/cache/mapred/local
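For readers who want to script this check: a sketch using Python's os.statvfs, which reports both free bytes (what `df -h` shows) and free inodes (what `df -i` shows). "No space left on device" is also raised when inodes run out, even with free blocks remaining. /tmp is substituted here so the snippet runs anywhere; on the cluster you would point it at /var/local/hadoop/cache/mapred/local.

```python
import os

def disk_report(path: str) -> dict:
    """Free bytes and free inodes for the filesystem holding `path`."""
    st = os.statvfs(path)
    return {
        "free_bytes": st.f_bavail * st.f_frsize,  # like `df -h`
        "free_inodes": st.f_ffree,                # like `df -i`
        "total_inodes": st.f_files,
    }

report = disk_report("/tmp")
# If free_inodes is 0, MapReduce job localization fails with exactly
# the FileNotFoundException / "No space left on device" in this thread.
assert report["free_bytes"] >= 0
```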

On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:

-- 
Aitor Pérez
Big Data System Engineer

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

http://www.bidoop.es




Re: HDFS Quota Error

Posted by Nitin Pawar <ni...@gmail.com>.
The file format of your table, the table definition, and the kind of query
you run on that data together determine how many files need to be created.
These files are created as temporary output from the maps until the
reducers consume them.

You can control how many files Hive's job creates at run time by setting:
set hive.merge.mapfiles=true;
set hive.exec.max.dynamic.partitions.pernode=10000;
set hive.exec.max.dynamic.partitions=20000;
set hive.exec.max.created.files=200000;

(Check what numbers you want to set these to based on your machine configs.)

Also, if your table has lots of small files, change the input file format
by setting:
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;


But this will also depend on the size of your disks and your base
filesystem type. Also, do not forget to set the ulimit to unlimited; if
you have reset the ulimit, you will need to restart your Hadoop cluster.

Wait for some experts from the dev forum to give more insights on this.
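To make the small-files point concrete: every file and directory consumes one inode, so a job that spills millions of tiny map outputs can exhaust inodes long before bytes run out. The numbers below are purely illustrative (a default mke2fs ratio of one inode per 16 KiB is assumed), not measurements from this cluster:

```python
# Illustrative inode arithmetic: one inode per file or directory.
def inodes_exhausted(total_inodes: int, files_created: int) -> bool:
    return files_created >= total_inodes

# A 100 GiB ext3 volume at the default one-inode-per-16-KiB ratio:
total = (100 * 1024**3) // (16 * 1024)
assert total == 6553600  # about 6.5 million inodes

# Millions of tiny intermediate files hit the inode ceiling while the
# disk can still be half empty measured in bytes.
assert inodes_exhausted(total, 7_000_000)
assert not inodes_exhausted(total, 300_000)
```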


On Thu, May 22, 2014 at 3:53 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore) <pr...@nsn.com> wrote:

>  Hi,
>
>
>
> Thanks.
>
>
>
> Inode usage is 100% on the disk mounted at the directory
> /var/local/hadoop (it is not temp, but Hadoop's working/cache directory).
> This happens when we run an aggregation query in Hive.  It looks like the
> Hive query (map-reduce) creates many small files.
>
>
>
> How to control this? What are those files?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
> *Sent:* Thursday, May 22, 2014 3:07 PM
>
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> That means there is some process creating tons of small files and
> leaving them there when the work completes.
>
>
>
> To free up inode space you will need to delete the files.
>
> I do not think there is any other way.
>
>
>
> Check your /tmp folder: how many files are there, and is any process
> leaving tmp files behind?
>
>
>
> On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore) <pr...@nsn.com> wrote:
>
> Just noted that inode usage is 100%.  Any better solution to this?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:
> prabakaran.1.natarajan@nsn.com]
> *Sent:* Thursday, May 22, 2014 2:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: HDFS Quota Error
>
>
>
> Thanks for your reply.  But all the datanode disks have more than 50%
> space free.
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar432@gmail.com<ni...@gmail.com>]
>
> *Sent:* Thursday, May 22, 2014 12:56 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> "No space left on device" can also mean that one of your datanode disks
> is full.
>
>
>
> Can you check disk used by each datanode.
>
>
>
> Maybe you will need to rebalance your replication so that some space is
> freed on this datanode.
>
>
>
> On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore) <pr...@nsn.com> wrote:
>



-- 
Nitin Pawar

Re: HDFS Quota Error

Posted by Nitin Pawar <ni...@gmail.com>.
Your table's file format and definition, together with the kind of query you
run on that data, determine how many files need to be created. These files are
temporary output from the maps, kept until the reducers consume them.

You can control how many files a Hive job creates at run time by setting:
set hive.merge.mapfiles=true;
set hive.exec.max.dynamic.partitions.pernode=10000;
set hive.exec.max.dynamic.partitions=20000; # check what number you want to
set this to based on your machine configs

set hive.exec.max.created.files=200000;

Also, if your table has lots of small files, change the input file
format by setting:
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;


But this also depends on what size disk you have and what your base
filesystem type is.
Also, do not forget to set the ulimit to unlimited.
If you have reset the ulimit, you will need to restart your Hadoop cluster.

Wait for some experts from the dev forum to give more insights on this.
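As a rough way to locate which directories are eating the inodes (the paths here are assumptions; on a real TaskTracker node you would point it at the mapred local directory, e.g. the job cache under /var/local/hadoop/cache/mapred/local), a sketch like this can help:

```python
import os

def inode_hogs(root, top=10):
    """Count filesystem entries (each file or directory costs one inode)
    under every immediate subdirectory of `root`, largest first."""
    counts = {}
    for entry in os.scandir(root):
        if not entry.is_dir(follow_symlinks=False):
            continue
        n = 0
        for _dirpath, dirs, files in os.walk(entry.path):
            n += len(dirs) + len(files)
        counts[entry.path] = n
    return sorted(counts.items(), key=lambda kv: -kv[1])[:top]

# On a real node, something like:
# inode_hogs("/var/local/hadoop/cache/mapred/local")
```

High counts under the jobcache subtree would confirm the small-files suspicion discussed above.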


On Thu, May 22, 2014 at 3:53 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore) <pr...@nsn.com> wrote:

>  Hi,
>
>
>
> Thanks.
>
>
>
> Inode is 100% in the disk where it mounted to the directly
> /var/local/hadoop (its not temp, but hadoops working or cache directory).
> This happens when we run aggregation query in hive.  Looks like hive query
> (map-red) create many small files.
>
>
>
> How to control this? What are those files?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
> *Sent:* Thursday, May 22, 2014 3:07 PM
>
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> That means there are some or a process which are creating tons of small
> files and leaving it there when the work completed.
>
>
>
> To free up inode space you will need to delete the files.
>
> I do not think there is any other way.
>
>
>
> Check in your /tmp folder, how many files are there and if any process is
> leaving tmp files behind.
>
>
>
> On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore) <pr...@nsn.com> wrote:
>
> Just noted that inode is 100%.  Any better solutions to solve this?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:
> prabakaran.1.natarajan@nsn.com]
> *Sent:* Thursday, May 22, 2014 2:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: HDFS Quota Error
>
>
>
> Thanks for your reply.  But all the datanote disk has more than 50% space
> empty
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar432@gmail.com<ni...@gmail.com>]
>
> *Sent:* Thursday, May 22, 2014 12:56 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> no space left on device can also mean that one of your datanode disk is
> full.
>
>
>
> Can you check disk used by each datanode.
>
>
>
> May be you will need to rebalance your replication so that some space is
> made free on this datanode.
>
>
>
> On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore) <pr...@nsn.com> wrote:
>
> Hi
>
>
>
> When I run a query in Hive, I get below exception.  I noticed the error
> “No space left on device”.
>
>
>
> Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below
> output
>
>
>
> none             inf            none             inf           69
> 275          288034318 hdfs://nnode:54310/var/local/hadoop
>
>
>
> Why I am getting none and inf for space and remaining space quota?  Is
> this meaning is unlimited space or is there is any space left?
>
>
>
>
>
> I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not
> sure 100G is correct or not?  How much I need to set and how to calculate
> this?
>
>
>
> After setting 100G , I get the below output  for “hadoop fs -count -q
> /var/local/hadoop”
>
>
>
> none             inf    107374182400    104408308039           73
> 286          297777777 hdfs://nnode:54310/var/local/hadoop
>
>
>
>
>
> I have to wait to see whether 100G is going to give me an exception or
> not….
>
>
>
>
>
> --------------
>
>
>
>
>
> 2014-05-22 10:48:43,585 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
>
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error
> initializing attempt_201405211712_0625_r_000001_2:
>
> java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> *(No space left on device)*
>
>         at java.io.FileOutputStream.open(Native Method)
>
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
>
>         at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>
>         at
> org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>
>         at
> org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>
>         at
> org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>
>         at
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>
>         at
> org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>
>         at java.lang.Thread.run(Thread.java:744)
>
>
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Nitin Pawar
>
>
>
>
>
> --
> Nitin Pawar
>



-- 
Nitin Pawar


RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Hi,

Thanks.

Inode usage is 100% on the disk mounted at the directory /var/local/hadoop (it is not temp, but Hadoop's working/cache directory).  This happens when we run an aggregation query in Hive.  It looks like the Hive query (map-reduce) creates many small files.

How to control this? What are those files?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 3:07 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

That means some process is creating tons of small files and leaving them there after the work completes.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check in your /tmp folder, how many files are there and if any process is leaving tmp files behind.

On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Just noted that inode is 100%.  Any better solutions to solve this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:prabakaran.1.natarajan@nsn.com<ma...@nsn.com>]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanode disks have more than 50% free space.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: HDFS Quota Error

"No space left on device" can also mean that one of your datanode disks is full.

Can you check the disk usage of each datanode?

Maybe you will need to rebalance your replication so that some space is freed on this datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get the exception below.  I noticed the error “No space left on device”.

Then I ran “hadoop fs -count -q /var/local/hadoop”, which gave the output below:

none             inf            none             inf           69          275          288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota?  Does this mean the space is unlimited, or is there space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”, but I am not sure whether 100G is correct.  How much do I need to set, and how do I calculate this?

After setting 100G, I get the output below for “hadoop fs -count -q /var/local/hadoop”:

none             inf    107374182400    104408308039           73          286          297777777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not.
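For reference, the columns printed by `hadoop fs -count -q` are QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, and PATHNAME; `none`/`inf` simply mean that no quota is set, and the space quota is charged against replicated bytes. A small sketch (plain Python, no Hadoop required) that decodes such an output line:

```python
FIELDS = ["name_quota", "remaining_name_quota",
          "space_quota", "remaining_space_quota",
          "dir_count", "file_count", "content_size", "path"]

def parse_count_q(line):
    """Decode one output line of `hadoop fs -count -q`.
    'none'/'inf' in the quota columns mean no quota is set."""
    row = dict(zip(FIELDS, line.split()))
    for key in FIELDS[:-1]:
        row[key] = None if row[key] in ("none", "inf") else int(row[key])
    return row

row = parse_count_q(
    "none inf 107374182400 104408308039 73 286 297777777 "
    "hdfs://nnode:54310/var/local/hadoop")
```

Decoding the second output shown above: a 100 GiB space quota with roughly 97 GiB remaining, and no name quota set.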


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar



--
Nitin Pawar

RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Hi,

Thanks.

Inode is 100% in the disk where it mounted to the directly /var/local/hadoop (its not temp, but hadoops working or cache directory).  This happens when we run aggregation query in hive.  Looks like hive query (map-red) create many small files.

How to control this? What are those files?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 3:07 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

That means there are some or a process which are creating tons of small files and leaving it there when the work completed.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check in your /tmp folder, how many files are there and if any process is leaving tmp files behind.

On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Just noted that inode is 100%.  Any better solutions to solve this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:prabakaran.1.natarajan@nsn.com<ma...@nsn.com>]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanote disk has more than 50% space empty

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: HDFS Quota Error

no space left on device can also mean that one of your datanode disk is full.

Can you check disk used by each datanode.

May be you will need to rebalance your replication so that some space is made free on this datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get below exception.  I noticed the error “No space left on device”.

Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below output

none             inf            none             inf           69          275          288034318 hdfs://nnode:54310/var/local/hadoop

Why I am getting none and inf for space and remaining space quota?  Is this meaning is unlimited space or is there is any space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not sure 100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for “hadoop fs -count -q /var/local/hadoop”

none             inf    107374182400    104408308039           73          286          297777777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not….


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar



--
Nitin Pawar

RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Hi,

Thanks.

Inode is 100% in the disk where it mounted to the directly /var/local/hadoop (its not temp, but hadoops working or cache directory).  This happens when we run aggregation query in hive.  Looks like hive query (map-red) create many small files.

How to control this? What are those files?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 3:07 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

That means there are some or a process which are creating tons of small files and leaving it there when the work completed.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check in your /tmp folder, how many files are there and if any process is leaving tmp files behind.

On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Just noted that inode usage is 100%.  Is there a better solution for this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:prabakaran.1.natarajan@nsn.com<ma...@nsn.com>]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanode disks have more than 50% free space.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: HDFS Quota Error

“No space left on device” can also mean that one of your datanode disks is full.

Can you check the disk usage on each datanode?

You may need to rebalance so that some space is freed on that datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get the exception below.  I noticed the error “No space left on device”.

Then I ran “hadoop fs -count -q /var/local/hadoop”, which gave the output below:

none             inf            none             inf           69          275          288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota?  Does this mean the space is unlimited, or is there space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”, but I am not sure whether 100G is correct.  How much do I need to set, and how do I calculate this?

After setting 100G, I get the output below for “hadoop fs -count -q /var/local/hadoop”:

none             inf    107374182400    104408308039           73          286          297777777 hdfs://nnode:54310/var/local/hadoop


I will have to wait and see whether 100G gives me an exception or not…
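For context: in the `-count -q` output, none and inf simply mean that no quota of that kind has been set on the directory.  Also note that an HDFS space quota is charged against raw disk usage, i.e. the logical file size multiplied by the replication factor, so a 100G quota holds only about 33G of data at the common default replication of 3.  A sketch of the sizing arithmetic (the replication factor and data size below are assumed examples; check dfs.replication in your configuration):

```shell
# HDFS space quotas count raw bytes (logical size * replication factor),
# so size the quota as expected_data * replication.
replication=3            # assumed example; check dfs.replication in hdfs-site.xml
expected_data_gib=50     # assumed: logical data you plan to keep under the path
quota_gib=$((expected_data_gib * replication))
echo "hadoop dfsadmin -setSpaceQuota ${quota_gib}G /var/local/hadoop"

# Sanity check: the 100G quota set in this thread is 100 * 2^30 bytes,
# matching the 107374182400 printed by 'hadoop fs -count -q'.
echo $((100 * 1024 * 1024 * 1024))
```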


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar



--
Nitin Pawar


Re: HDFS Quota Error

Posted by Nitin Pawar <ni...@gmail.com>.
That means some process is creating tons of small
files and leaving them behind when its work completes.

To free up inodes you will need to delete those files;
I do not think there is any other way.

Check your /tmp folder: how many files are there, and is any process
leaving tmp files behind?


On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore) <pr...@nsn.com> wrote:

>  Just noted that inode usage is 100%.  Is there a better solution for this?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:
> prabakaran.1.natarajan@nsn.com]
> *Sent:* Thursday, May 22, 2014 2:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: HDFS Quota Error
>
>
>
> Thanks for your reply.  But all the datanode disks have more than 50%
> free space.
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar432@gmail.com<ni...@gmail.com>]
>
> *Sent:* Thursday, May 22, 2014 12:56 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> No space left on device can also mean that one of your datanode disks is
> full.
>
>
>
> Can you check the disk usage on each datanode?
>
>
>
> You may need to rebalance so that some space is
> freed on this datanode.
>
>
>
> On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore) <pr...@nsn.com> wrote:
>
> Hi
>
>
>
> When I run a query in Hive, I get the exception below.  I noticed the error
> “No space left on device”.
>
>
>
> Then I ran “hadoop fs -count -q /var/local/hadoop”, which gave the output
> below:
>
>
>
> none             inf            none             inf           69
> 275          288034318 hdfs://nnode:54310/var/local/hadoop
>
>
>
> Why am I getting none and inf for the space quota and remaining space
> quota?  Does this mean the space is unlimited, or is there space left?
>
>
>
>
>
> I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”, but I am
> not sure whether 100G is correct.  How much do I need to set, and how do I
> calculate this?
>
>
>
> After setting 100G, I get the output below for “hadoop fs -count -q
> /var/local/hadoop”:
>
>
>
> none             inf    107374182400    104408308039           73
> 286          297777777 hdfs://nnode:54310/var/local/hadoop
>
>
>
>
>
> I will have to wait and see whether 100G gives me an exception or
> not…
>
>
>
>
>
> --------------
>
>
>
>
>
> 2014-05-22 10:48:43,585 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
>
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error
> initializing attempt_201405211712_0625_r_000001_2:
>
> java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> *(No space left on device)*
>
>         at java.io.FileOutputStream.open(Native Method)
>
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
>
>         at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>
>         at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>
>         at
> org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>
>         at
> org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>
>         at
> org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>
>         at
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>
>         at
> org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>
>         at java.lang.Thread.run(Thread.java:744)
>
>
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Nitin Pawar
>



-- 
Nitin Pawar


RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Just noted that inode usage is 100%.  Is there a better solution for this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:prabakaran.1.natarajan@nsn.com]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanode disks have more than 50% free space.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: HDFS Quota Error

“No space left on device” can also mean that one of your datanode disks is full.

Can you check the disk usage on each datanode?

You may need to rebalance so that some space is freed on that datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) <pr...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get the exception below.  I noticed the error “No space left on device”.

Then I ran “hadoop fs -count -q /var/local/hadoop”, which gave the output below:

none             inf            none             inf           69          275          288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota?  Does this mean the space is unlimited, or is there space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”, but I am not sure whether 100G is correct.  How much do I need to set, and how do I calculate this?

After setting 100G, I get the output below for “hadoop fs -count -q /var/local/hadoop”:

none             inf    107374182400    104408308039           73          286          297777777 hdfs://nnode:54310/var/local/hadoop


I will have to wait and see whether 100G gives me an exception or not…


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar

RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Just noted that inode usage is at 100%.  Are there any better solutions to this?
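[Editor's note: "No space left on device" with free blocks is the classic inode-exhaustion symptom on a local filesystem. A sketch of how to confirm and locate it; the paths are taken from the stack trace quoted below and should be adjusted to the actual mapred.local.dir:]

```shell
# Inode usage per filesystem; IUse% at 100% means no new files can be
# created even though "df -h" may still show free block space.
df -i /var/local/hadoop

# Count files under the TaskTracker job cache, a common inode sink:
# unpacked job jars (like the SnappyCompressor.class in the stack trace)
# create many small files. Cleaning old jobcache entries frees the inodes.
find /var/local/hadoop/cache/mapred/local/taskTracker -xdev -type f | wc -l
```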

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"



RE: HDFS Quota Error

Posted by "Natarajan, Prabakaran 1. (NSN - IN/Bangalore)" <pr...@nsn.com>.
Thanks for your reply.  But all the datanode disks have more than 50% free space.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"



Re: HDFS Quota Error

Posted by Nitin Pawar <ni...@gmail.com>.
“No space left on device” can also mean that one of your datanode disks is
full.

Can you check the disk usage on each datanode?

Maybe you will need to rebalance your replication so that some space is
freed up on this datanode.
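[Editor's note: a sketch of this suggestion in command form, assuming the Hadoop 1.x cluster implied by the TaskTracker stack traces in this thread:]

```shell
# Per-datanode capacity, DFS Used% and Remaining -- a single full datanode
# can raise "No space left on device" even when the cluster average is low.
hadoop dfsadmin -report

# Move blocks from over-utilized to under-utilized datanodes until no node
# deviates more than 10 percentage points from the cluster-average
# utilization (10 is the default; lower values balance more aggressively).
hadoop balancer -threshold 10
```

[If the failure is under mapred.local.dir, as the stack trace suggests, rebalancing HDFS only helps where that directory shares a partition with the DataNode data directories.]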





-- 
Nitin Pawar


Re: HDFS Quota Error

Posted by Aitor Perez Cedres <ap...@pragsis.com>.
Maybe you are out of space on a local disk? That location [1] looks like 
the local dir where MapReduce places some intermediate files. Can you 
check the output of df -h in a shell?


[1] /var/local/hadoop/cache/mapred/local
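A quick way to check both the free space and what is consuming it, as a sketch (the path is taken from the stack trace; adjust it to your actual mapred.local.dir):

```shell
#!/bin/sh
# Illustrative: inspect the MapReduce local dir from the stack trace.
LOCAL_DIR=${1:-/var/local/hadoop/cache/mapred/local}
# Free space on the filesystem backing the local dir:
df -h "$LOCAL_DIR"
# Largest leftover job caches, if any (glob pattern is illustrative):
du -sh "$LOCAL_DIR"/taskTracker/*/jobcache/* 2>/dev/null | sort -rh | head
```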

On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:
> Hi
> When I run a query in Hive, I get below exception.  I noticed the 
> error "No space left on device".
> Then I did "hadoop fs -count -q /var/local/hadoop" -- which gave below 
> output
> none             inf            none inf           69          
> 275          288034318 hdfs://nnode:54310/var/local/hadoop
> Why am I getting none and inf for the space quota and remaining space 
> quota? Does this mean the space is unlimited, or is there still space left?
> I tried "hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop"  --  
> Not sure 100G is correct or not? How much I need to set and how to 
> calculate this?
> After setting 100G , I get the below output  for "hadoop fs -count -q 
> /var/local/hadoop"
> none             inf    107374182400 104408308039           
> 73          286          297777777 hdfs://nnode:54310/var/local/hadoop
> I have to wait to see whether 100G is going to give me an exception or 
> not....
> --------------
> 2014-05-22 10:48:43,585 ERROR 
> org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hadoop 
> cause:java.io.FileNotFoundException: 
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class 
> (No space left on device)
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: 
> Error initializing attempt_201405211712_0625_r_000001_2:
> java.io.FileNotFoundException: 
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class 
> *(No space left on device)*
>         at java.io.FileOutputStream.open(Native Method)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
>         at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
>         at 
> org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
>         at 
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at 
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at 
> org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at 
> org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at 
> org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at 
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at 
> org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:744)
> *Thanks and Regards*
> Prabakaran.N  aka NP
> nsn, Bangalore
> */When "I" is replaced by "We" - even Illness becomes "Wellness"/*

-- 
*Aitor Pérez*
/Big Data System Engineer/

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

_http://www.bidoop.es_
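On the quota question in the original mail: `hadoop fs -count -q` prints eight columns, and `none` / `inf` mean no quota is configured, not that space has run out. One hedged note when sizing a space quota: it is charged against raw bytes, i.e. file size times replication factor. The arithmetic below assumes the default replication of 3:

```shell
# Columns of `hadoop fs -count -q`:
#   QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIRS  FILES  CONTENT_SIZE  PATH
# "none" / "inf" mean no quota is set, not that space is exhausted.
# The space quota counts replicated bytes; with replication 3, the
# 288034318 bytes of content in the first listing consume roughly:
echo $((288034318 * 3))   # prints 864102954, i.e. under 1 GB raw
```

So a 100G quota is far above the current usage; and since an exceeded HDFS quota raises a QuotaExceededException rather than "No space left on device", the error in the log almost certainly comes from a full local disk, not from HDFS quotas.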

