Posted to common-user@hadoop.apache.org by Siddharth Tiwari <si...@live.com> on 2013/11/27 22:26:43 UTC

Error for larger jobs

Hi Team,

I am getting the following strange error; can you point me to the possible reason?
I have set the heap size to 4 GB but am still getting it. Please help.
syslog logs

2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing split: org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2013-11-27 19:01:52,250 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
        at org.apache.hadoop.util.Shell.run(Shell.java:188)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
        at org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
        at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
        at org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
        at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
        at java.lang.ProcessImpl.start(ProcessImpl.java:130)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
        ... 16 more
2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

*------------------------*
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
"Maybe other people will try to limit me but I don't limit myself"

RE: Error for larger jobs

Posted by Siddharth Tiwari <si...@live.com>.
What shall I put in my .bash_profile?

Date: Thu, 28 Nov 2013 10:04:58 +0800
Subject: Re: Error for larger jobs
From: azuryyyu@gmail.com
To: user@hadoop.apache.org

Yes, you need to increase it; a simple way is to put it in your /etc/profile.



On Thu, Nov 28, 2013 at 9:59 AM, Siddharth Tiwari <si...@live.com> wrote:

Hi Vinay and Azuryy,
Thanks for your responses. I get these errors when I just run a teragen.
Also, do you suggest I increase the nproc value? What should I increase it to?

Sent from my iPad
On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com> wrote:

Hi Siddharth,

Looks like the issue is with one of the machines. Or is it happening on different machines also?

I don't think it's a problem with JVM heap memory.

I suggest you check this once:

http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
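
Along the lines of what that link discusses, a quick check on the affected node (a sketch; it assumes the task JVMs run as the "hadoop" user shown in the log):

ulimit -u                                 # per-user process limit; on Linux, threads count against it
ps -eLf | awk '$1 == "hadoop"' | wc -l    # threads currently owned by the hadoop user

If the second number is at or near the first, fork() starts failing with EAGAIN (error=11), which is exactly what surfaces as the "Cannot run program" IOException above.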

 
Thanks and Regards,
Vinayakumar B


From: Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
Sent: 28 November 2013 05:50
To: Users Hadoop
Subject: RE: Error for larger jobs

Hi Azuryy,

Thanks for the response. I have plenty of space on my disks, so that cannot be the issue.
*------------------------*
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
"Maybe other people will try to limit me but I don't limit myself"

Date: Thu, 28 Nov 2013 08:10:06 +0800
Subject: Re: Error for larger jobs
From: azuryyyu@gmail.com
To: user@hadoop.apache.org

From the log, your disk is full.

On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:



Hi Team,

I am getting the following strange error; can you point me to the possible reason?
I have set the heap size to 4 GB but am still getting it. Please help.

syslog logs

2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing split: org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2013-11-27 19:01:52,250 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
        at org.apache.hadoop.util.Shell.run(Shell.java:188)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
        at org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
        at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
        at org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
        at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
        at java.lang.ProcessImpl.start(ProcessImpl.java:130)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
        ... 16 more
2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task





*------------------------*
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
"Maybe other people will try to limit me but I don't limit myself"

Re: Error for larger jobs

Posted by Ted Yu <yu...@gmail.com>.
Siddharth:
Take a look at section 2.1.2.5, "ulimit and nproc", under
http://hbase.apache.org/book.html#os
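
The gist of that section, sketched here for a "hadoop" user (the user name and values are illustrative):

In /etc/security/limits.conf:

hadoop  -  nofile  32768
hadoop  -  nproc   32000

and, on systems using PAM, make sure the limits are applied at login, e.g. in /etc/pam.d/common-session:

session required pam_limits.so

The new limits only take effect for sessions started after the change, so restart the daemons from a fresh login.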

Cheers


On Wed, Nov 27, 2013 at 6:04 PM, Azuryy Yu <az...@gmail.com> wrote:

> yes. you need to increase it, a simple way is put it in your /etc/profile
>
>
>
>
> On Thu, Nov 28, 2013 at 9:59 AM, Siddharth Tiwari <
> siddharth.tiwari@live.com> wrote:
>
>> Hi Vinay and Azuryy
>> Thanks for your responses.
>> I get these errors when I just run a teragen.
>> Also, do you suggest I increase the nproc value? What should I increase
>> it to?
>>
>> Sent from my iPad
>>
>> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com>
>> wrote:
>>
>>  Hi Siddharth,
>>
>>
>>
>> Looks like the issue is with one of the machines. Or is it happening on
>> different machines also?
>>
>>
>>
>> I don’t think it’s a problem with JVM heap memory.
>>
>>
>>
>> I suggest you check this once:
>>
>> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>>
>>
>>
>> Thanks and Regards,
>>
>> Vinayakumar B
>>
>>
>>
>> *From:* Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
>>
>> *Sent:* 28 November 2013 05:50
>> *To:* Users Hadoop
>> *Subject:* RE: Error for larger jobs
>>
>>
>>
>> Hi Azuryy
>>
>>
>>
>> Thanks for the response. I have plenty of space on my disks, so that cannot
>> be the issue.
>>
>>
>> **------------------------**
>> *Cheers !!!*
>> *Siddharth Tiwari*
>> Have a refreshing day !!!
>> *"Every duty is holy, and devotion to duty is the highest form of worship
>> of God." *
>> *"Maybe other people will try to limit me but I don't limit myself"*
>>
>>   ------------------------------
>>
>> Date: Thu, 28 Nov 2013 08:10:06 +0800
>> Subject: Re: Error for larger jobs
>> From: azuryyyu@gmail.com
>> To: user@hadoop.apache.org
>>
>> From the log, your disk is full.
>>
>> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com>
>> wrote:
>>
>> Hi Team
>>
>>
>>
>> I am getting the following strange error; can you point me to the possible
>> reason?
>>
>> I have set the heap size to 4 GB but am still getting it. Please help.
>>
>>
>>
>> *syslog logs*
>>
>> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
>> Unable to load native-hadoop library for your platform... using
>> builtin-java classes where applicable
>>
>> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>>
>> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
>> session.id is deprecated. Instead, use dfs.metrics.session-id
>>
>> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=MAP, sessionId=
>>
>> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
>> exited with exit code 0
>>
>> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
>> ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>>
>> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
>> split:
>> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>>
>> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
>> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
>> and  BYTES_READ as counter name instead
>>
>> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
>> numReduceTasks: 0
>>
>> 2013-11-27 19:01:52,250 ERROR
>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
>> "chmod": error=11, Resource temporarily unavailable
>>
>> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error
>> running child
>>
>> java.io.IOException: Cannot run program "chmod": error=11, Resource
>> temporarily unavailable
>>
>>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>>
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>
>>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>
>>         at
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>>
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>>
>>         at
>> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>>
>>         at
>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>>
>>         at
>> org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>>
>>         at
>> org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>>
>>         at
>> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>>
>>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>>
>>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>>
>>         at java.security.AccessController.doPrivileged(Native Method)
>>
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>
>>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
>>
>> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>>
>>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>>
>>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>>
>>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>>
>>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>>
>>         ... 16 more
>>
>> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning
>> cleanup for the task
>>
>>
>>
>> **------------------------**
>> *Cheers !!!*
>> *Siddharth Tiwari*
>> Have a refreshing day !!!
>> *"Every duty is holy, and devotion to duty is the highest form of worship
>> of God." *
>> *"Maybe other people will try to limit me but I don't limit myself"*
>>
>>
>

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Yes, you need to increase it; a simple way is to put it in your /etc/profile.
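
For example, a line like the following could be appended there (the value is illustrative; size it to the number of tasks and daemons per node):

ulimit -u 32768

Note that /etc/profile only covers login shells that source it; for daemons started outside a login shell, /etc/security/limits.conf is the more reliable place.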




On Thu, Nov 28, 2013 at 9:59 AM, Siddharth Tiwari <siddharth.tiwari@live.com
> wrote:

> Hi Vinay and Azuryy
> Thanks for your responses.
> I get these errors when I just run a teragen.
> Also, do you suggest I increase the nproc value? What should I increase
> it to?
>
> Sent from my iPad
>
> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com>
> wrote:
>
>  Hi Siddharth,
>
>
>
> Looks like the issue is with one of the machines. Or is it happening on
> different machines also?
>
>
>
> I don’t think it’s a problem with JVM heap memory.
>
>
>
> I suggest you check this once:
>
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>
>
>
> Thanks and Regards,
>
> Vinayakumar B
>
>
>
> *From:* Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
>
> *Sent:* 28 November 2013 05:50
> *To:* Users Hadoop
> *Subject:* RE: Error for larger jobs
>
>
>
> Hi Azuryy
>
>
>
> Thanks for the response. I have plenty of space on my disks, so that cannot
> be the issue.
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God." *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>   ------------------------------
>
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
>
> From the log, your disk is full.
>
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com>
> wrote:
>
> Hi Team
>
>
>
> I am getting the following strange error; can you point me to the possible
> reason?
>
> I have set the heap size to 4 GB but am still getting it. Please help.
>
>
>
> *syslog logs*
>
> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
>
> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
> session.id is deprecated. Instead, use dfs.metrics.session-id
>
> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=MAP, sessionId=
>
> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
> exited with exit code 0
>
> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
> ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>
> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
> split:
> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>
> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
> and  BYTES_READ as counter name instead
>
> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
> numReduceTasks: 0
>
> 2013-11-27 19:01:52,250 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
> "chmod": error=11, Resource temporarily unavailable
>
> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running
> child
>
> java.io.IOException: Cannot run program "chmod": error=11, Resource
> temporarily unavailable
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>
>         at
> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
>
> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>
>         ... 16 more
>
> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
>
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God." *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
yes. you need to increase it, a simple way is put it in your /etc/profile




On Thu, Nov 28, 2013 at 9:59 AM, Siddharth Tiwari <siddharth.tiwari@live.com
> wrote:

> Hi Vinay and Azuryy
> Thanks for your responses.
> I get these error when I just run a teragen.
> Also, do you suggest me to increase nproc value ? What should I increase
> it to ?
>
> Sent from my iPad
>
> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com> wrote:
> [quoted thread trimmed; the full messages appear elsewhere in this thread]

Re: Error for larger jobs

Posted by Siddharth Tiwari <si...@live.com>.
Hi Vinay and Azuryy
Thanks for your responses.
I get these errors when I just run a teragen.
Also, do you suggest I increase the nproc value? What should I increase it to?

Sent from my iPad

> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com> wrote:
> 
> Hi Siddharth,
>  
> Looks like an issue with one of the machines. Or is it happening on other machines as well?
>  
> I don’t think it’s a problem with JVM heap memory.
>  
> I suggest you check this once:
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>  
> Thanks and Regards,
> Vinayakumar B
>  
> From: Siddharth Tiwari [mailto:siddharth.tiwari@live.com] 
> Sent: 28 November 2013 05:50
> To: USers Hadoop
> Subject: RE: Error for larger jobs
>  
> Hi Azury
>  
> Thanks for the response. I have plenty of space on my disks, so that cannot be the issue.
> 
> 
> *------------------------*
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship of God.” 
> "Maybe other people will try to limit me but I don't limit myself"
> 
> 
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
> 
> Your disk is full from the log.
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:
> [original message with syslog output trimmed; see the first message in this thread]

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Siddharth,
please check 'mapred.local.dir', but I would also advise you to check the GC
logs and the OS logs; pay particular attention to the OS logs. I suspect you
are starting too many threads concurrently and have consumed all the
available OS resources.
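
A quick way to check that (illustrative commands, assuming the tasks run as
the 'hadoop' user):

    # count the processes/threads the hadoop user currently has
    ps -L -u hadoop | wc -l

    # the per-user process limit in effect (run this as the hadoop user)
    ulimit -u

If the first number is close to the second, fork() starts failing with
EAGAIN (error=11), which is exactly the "Cannot run program" failure in the
log above.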



On Thu, Nov 28, 2013 at 9:08 AM, Vinayakumar B <vi...@huawei.com> wrote:

>  Hi Siddharth,
>
>
>
> Looks like an issue with one of the machines. Or is it happening on
> other machines as well?
>
>
>
> I don’t think it’s a problem with JVM heap memory.
>
>
>
> I suggest you check this once:
>
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>
>
>
> Thanks and Regards,
>
> Vinayakumar B
>
>
>
> [remainder of quoted thread trimmed; the full messages appear elsewhere in this thread]

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Siddharth,
please check 'mapred.local.dir', but I would like advice you check GC logs
and OS logs. pay more attention on OS logs. I suspect you start too many
threads concurrently, then consumed all OS avaliable resources.



On Thu, Nov 28, 2013 at 9:08 AM, Vinayakumar B <vi...@huawei.com>wrote:

>  Hi Siddharth,
>
>
>
> Looks like the issue with one of the machine.  Or its happening in
> different machines also?
>
>
>
> I don’t think it’s a problem with JVM heap memory.
>
>
>
> Suggest you to check this once,
>
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>
>
>
> Thanks and Regards,
>
> Vinayakumar B
>
>
>
> *From:* Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
> *Sent:* 28 November 2013 05:50
> *To:* USers Hadoop
> *Subject:* RE: Error for larger jobs
>
>
>
> Hi Azury
>
>
>
> Thanks for response. I have plenty of space on my Disks so that cannot be
> the issue.
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>   ------------------------------
>
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
>
> Your disk is full from the log.
>
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com>
> wrote:
>
> Hi Team
>
>
>
> I am getting following strange error, can you point me to the possible
> reason.
>
> I have set heap size to 4GB but still getting it. please help
>
>
>
> *syslog logs*
>
> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
>
> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
> session.id is deprecated. Instead, use dfs.metrics.session-id
>
> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=MAP, sessionId=
>
> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
> exited with exit code 0
>
> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
> ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>
> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
> split:
> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>
> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
> and  BYTES_READ as counter name instead
>
> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
> numReduceTasks: 0
>
> 2013-11-27 19:01:52,250 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
> "chmod": error=11, Resource temporarily unavailable
>
> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running
> child
>
> java.io.IOException: Cannot run program "chmod": error=11, Resource
> temporarily unavailable
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>
>         at
> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
>
> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>
>         ... 16 more
>
> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
>
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Siddharth,
please check 'mapred.local.dir', but I would like advice you check GC logs
and OS logs. pay more attention on OS logs. I suspect you start too many
threads concurrently, then consumed all OS avaliable resources.



On Thu, Nov 28, 2013 at 9:08 AM, Vinayakumar B <vi...@huawei.com>wrote:

>  Hi Siddharth,
>
>
>
> Looks like the issue with one of the machine.  Or its happening in
> different machines also?
>
>
>
> I don’t think it’s a problem with JVM heap memory.
>
>
>
> Suggest you to check this once,
>
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>
>
>
> Thanks and Regards,
>
> Vinayakumar B
>
>
>
> *From:* Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
> *Sent:* 28 November 2013 05:50
> *To:* USers Hadoop
> *Subject:* RE: Error for larger jobs
>
>
>
> Hi Azury
>
>
>
> Thanks for response. I have plenty of space on my Disks so that cannot be
> the issue.
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>   ------------------------------
>
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
>
> Your disk is full from the log.
>
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com>
> wrote:
>
> Hi Team
>
>
>
> I am getting following strange error, can you point me to the possible
> reason.
>
> I have set heap size to 4GB but still getting it. please help
>
>
>
> *syslog logs*
>
> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
>
> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
> session.id is deprecated. Instead, use dfs.metrics.session-id
>
> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=MAP, sessionId=
>
> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
> exited with exit code 0
>
> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
> ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>
> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
> split:
> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>
> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
> and  BYTES_READ as counter name instead
>
> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
> numReduceTasks: 0
>
> 2013-11-27 19:01:52,250 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
> "chmod": error=11, Resource temporarily unavailable
>
> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running
> child
>
> java.io.IOException: Cannot run program "chmod": error=11, Resource
> temporarily unavailable
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>
>         at
> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
>
> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>
>         ... 16 more
>
> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
>
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>

Re: Error for larger jobs

Posted by Siddharth Tiwari <si...@live.com>.
Hi Vinay and Azuryy
Thanks for your responses.
I get these error when I just run a teragen.
Also, do you suggest me to increase nproc value ? What should I increase it to ?

Sent from my iPad

> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com> wrote:
> 
> Hi Siddharth,
>  
> Looks like the issue with one of the machine.  Or its happening in different machines also?
>  
> I don’t think it’s a problem with JVM heap memory.
>  
> Suggest you to check this once, 
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>  
> Thanks and Regards,
> Vinayakumar B
>  
> From: Siddharth Tiwari [mailto:siddharth.tiwari@live.com] 
> Sent: 28 November 2013 05:50
> To: USers Hadoop
> Subject: RE: Error for larger jobs
>  
> Hi Azury
>  
> Thanks for response. I have plenty of space on my Disks so that cannot be the issue.
> 
> 
> *------------------------*
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship of God.” 
> "Maybe other people will try to limit me but I don't limit myself"
> 
> 
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
> 
> Your disk is full from the log.
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:
> Hi Team
>  
> I am getting following strange error, can you point me to the possible reason.
> I have set heap size to 4GB but still getting it. please help
>  
> syslog logs
> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin :org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing split:org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
> 2013-11-27 19:01:52,250 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running child
> java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>         at org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>         at org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>         at org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>         ... 16 more
> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> 
> 
> *------------------------*
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing day !!!
> "Every duty is holy, and devotion to duty is the highest form of worship of God.” 
> "Maybe other people will try to limit me but I don't limit myself"

Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Siddharth,
please check 'mapred.local.dir', but I would like advice you check GC logs
and OS logs. pay more attention on OS logs. I suspect you start too many
threads concurrently, then consumed all OS avaliable resources.



On Thu, Nov 28, 2013 at 9:08 AM, Vinayakumar B <vi...@huawei.com>wrote:

>  Hi Siddharth,
>
>
>
> Looks like the issue with one of the machine.  Or its happening in
> different machines also?
>
>
>
> I don’t think it’s a problem with JVM heap memory.
>
>
>
> Suggest you to check this once,
>
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>
>
>
> Thanks and Regards,
>
> Vinayakumar B
>
>
>
> *From:* Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
> *Sent:* 28 November 2013 05:50
> *To:* USers Hadoop
> *Subject:* RE: Error for larger jobs
>
>
>
> Hi Azury
>
>
>
> Thanks for response. I have plenty of space on my Disks so that cannot be
> the issue.
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>   ------------------------------
>
> Date: Thu, 28 Nov 2013 08:10:06 +0800
> Subject: Re: Error for larger jobs
> From: azuryyyu@gmail.com
> To: user@hadoop.apache.org
>
> Your disk is full from the log.
>
> On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com>
> wrote:
>
> Hi Team
>
>
>
> I am getting following strange error, can you point me to the possible
> reason.
>
> I have set heap size to 4GB but still getting it. please help
>
>
>
> *syslog logs*
>
> 2013-11-27 19:01:50,678 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> 2013-11-27 19:01:51,051 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
>
> 2013-11-27 19:01:51,539 WARN org.apache.hadoop.conf.Configuration:
> session.id is deprecated. Instead, use dfs.metrics.session-id
>
> 2013-11-27 19:01:51,540 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=MAP, sessionId=
>
> 2013-11-27 19:01:51,867 INFO org.apache.hadoop.util.ProcessTree: setsid
> exited with exit code 0
>
> 2013-11-27 19:01:51,870 INFO org.apache.hadoop.mapred.Task:  Using
> ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a0bd13d
>
> 2013-11-27 19:01:52,217 INFO org.apache.hadoop.mapred.MapTask: Processing
> split:
> org.apache.hadoop.examples.terasort.TeraGen$RangeInputFormat$RangeInputSplit@6c30aec7
>
> 2013-11-27 19:01:52,222 WARN mapreduce.Counters: Counter name
> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name
> and  BYTES_READ as counter name instead
>
> 2013-11-27 19:01:52,226 INFO org.apache.hadoop.mapred.MapTask:
> numReduceTasks: 0
>
> 2013-11-27 19:01:52,250 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot run program
> "chmod": error=11, Resource temporarily unavailable
>
> 2013-11-27 19:01:52,250 WARN org.apache.hadoop.mapred.Child: Error running
> child
>
> java.io.IOException: Cannot run program "chmod": error=11, Resource
> temporarily unavailable
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.insecureCreateForWrite(SecureIOUtils.java:146)
>
>         at
> org.apache.hadoop.io.SecureIOUtils.createForWrite(SecureIOUtils.java:168)
>
>         at
> org.apache.hadoop.mapred.TaskLog.writeToIndexFile(TaskLog.java:310)
>
>         at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:383)
>
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
>
> Caused by: java.io.IOException: error=11, Resource temporarily unavailable
>
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
>
>         ... 16 more
>
> 2013-11-27 19:01:52,256 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
>
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>

Re: Error for larger jobs

Posted by Siddharth Tiwari <si...@live.com>.
Hi Vinay and Azuryy
Thanks for your responses.
I get these error when I just run a teragen.
Also, do you suggest me to increase nproc value ? What should I increase it to ?

Sent from my iPad

> On Nov 27, 2013, at 11:08 PM, "Vinayakumar B" <vi...@huawei.com> wrote:
> 
> Hi Siddharth,
>  
> Looks like the issue with one of the machine.  Or its happening in different machines also?
>  
> I don’t think it’s a problem with JVM heap memory.
>  
> Suggest you to check this once, 
> http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
>  
> Thanks and Regards,
> Vinayakumar B
>  
> From: Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
> Sent: 28 November 2013 05:50
> To: Users Hadoop
> Subject: RE: Error for larger jobs
>
> [...]

RE: Error for larger jobs

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Siddharth,

Looks like an issue with one of the machines. Or is it happening on different machines as well?

I don't think it's a problem with JVM heap memory.

Suggest you check this once:
http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11

Thanks and Regards,
Vinayakumar B

From: Siddharth Tiwari [mailto:siddharth.tiwari@live.com]
Sent: 28 November 2013 05:50
To: Users Hadoop
Subject: RE: Error for larger jobs

Hi Azuryy,

Thanks for the response. I have plenty of space on my disks, so that cannot be the issue.


*------------------------*
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
"Maybe other people will try to limit me but I don't limit myself"

________________________________
Date: Thu, 28 Nov 2013 08:10:06 +0800
Subject: Re: Error for larger jobs
From: azuryyyu@gmail.com
To: user@hadoop.apache.org
Your disk is full from the log.
On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:

[...]


RE: Error for larger jobs

Posted by Siddharth Tiwari <si...@live.com>.
Hi Azuryy,
Thanks for the response. I have plenty of space on my disks, so that cannot be the issue.

*------------------------*
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of God."
"Maybe other people will try to limit me but I don't limit myself"


Date: Thu, 28 Nov 2013 08:10:06 +0800
Subject: Re: Error for larger jobs
From: azuryyyu@gmail.com
To: user@hadoop.apache.org

Your disk is full from the log.
On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:

[...]
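
A genuinely full disk normally fails with ENOSPC ("No space left on device") rather than EAGAIN, so "plenty of space" is plausible here; still, both free space and free inodes are quick to verify. A sketch, with /data standing in for wherever mapred.local.dir and the task logs actually live:

    df -h /data    # free space on the task-log / mapred.local.dir mount
    df -i /data    # free inodes on the same mount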


Re: Error for larger jobs

Posted by Azuryy Yu <az...@gmail.com>.
Your disk is full from the log.
On 2013-11-28 5:27 AM, "Siddharth Tiwari" <si...@live.com> wrote:

> Hi Team
>
> [...]
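
Beyond the per-user nproc limit, kernel-wide ceilings can also make fork() return EAGAIN on a node running many task JVMs; these are quick to inspect (no specific values are implied by the thread):

    cat /proc/sys/kernel/threads-max   # system-wide thread ceiling
    cat /proc/sys/kernel/pid_max       # largest PID, bounds concurrent processes
    ulimit -a                          # per-shell limits, including "max user processes"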
