Posted to common-user@hadoop.apache.org by Xiaobo Gu <gu...@gmail.com> on 2011/08/07 11:48:25 UTC

Map task can't execute /bin/ls on solaris

Hi,

I am trying to write a map-reduce job to convert CSV files to
SequenceFiles, but the job fails with the following error:
java.lang.RuntimeException: Error while running command to get file
permissions : java.io.IOException: Cannot run program "/bin/ls":
error=12, Not enough space
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
	at org.apache.hadoop.util.Shell.run(Shell.java:182)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
	at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:540)
	at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:37)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:417)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
	at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
	at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.mapred.Child.main(Child.java:253)
Caused by: java.io.IOException: error=12, Not enough space
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	... 16 more

	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:442)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
	at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
	at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
	at org.apache.hadoop.mapred.Child.main(Child.java:253)

Re: Map task can't execute /bin/ls on solaris

Posted by Sean Owen <sr...@gmail.com>.
Xiaobo -- don't cross-post to so many lists.
This in particular has nothing to do with Mahout.

On Sun, Aug 7, 2011 at 10:48 AM, Xiaobo Gu <gu...@gmail.com> wrote:

> Hi,
>
> I am trying to write a map-reduce job to convert csv files to
> sequencefiles, but the job fails with the following error:
> java.lang.RuntimeException: Error while running command to get file
> permissions : java.io.IOException: Cannot run program "/bin/ls":
> error=12, Not enough space
>        [stack trace snipped]
>

Re: My cluster datanode machine can't start

Posted by Harsh J <ha...@cloudera.com>.
A quick workaround is to not run your services as root.

(Actually, you shouldn't run Hadoop as root ever!)
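
For example, a minimal sketch (the user name and paths are assumptions here; also chown wherever dfs.name.dir/dfs.data.dir point on your machines):

# create a dedicated user and hand it the Hadoop install
useradd -m hadoop
chown -R hadoop:hadoop /opt/hadoop
# start the daemons as that user instead of root
su - hadoop -c '/opt/hadoop/bin/start-all.sh'

The "Unrecognized option: -jvm" in your datanode .out file appears to come from the start script passing -jvm to the JVM only when the datanode is launched as root (it is meant for jsvc), so starting as a non-root user should sidestep it.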

On Thu, Aug 11, 2011 at 3:02 PM, devilsp4 <de...@gmail.com> wrote:
> Hi,
>
> I deployed a Hadoop cluster on two machines, namenode1 as the namenode
> and datanode1 as the datanode. After ./start-all.sh, jps on datanode1
> shows a TaskTracker but no DataNode, and the datanode .out file ends with:
>
> [root@datanode1 logs]# cat hadoop-root-datanode-datanode1.out
> Unrecognized option: -jvm
> Could not create the Java virtual machine.
>
> [rest of message trimmed; the full post appears below]
>



-- 
Harsh J

My cluster datanode machine can't start

Posted by devilsp4 <de...@gmail.com>.
Hi,

      I deployed a Hadoop cluster using two machines: one as the namenode, and the other as a datanode.

      My namenode machine's hostname is namenode1, and the datanode machine's hostname is datanode1.

      When I run ./start-all.sh on namenode1, the console displays the following:

root@namenode1:/opt/hadoop/bin# ./start-all.sh
starting namenode, logging to /opt/hadoop/bin/../logs/hadoop-root-namenode-namenode1.out
datanode1: starting datanode, logging to /opt/hadoop/bin/../logs/hadoop-root-datanode-datanode1.out
namenode1: starting secondarynamenode, logging to /opt/hadoop/bin/../logs/hadoop-root-secondarynamenode-namenode1.out
starting jobtracker, logging to /opt/hadoop/bin/../logs/hadoop-root-jobtracker-namenode1.out
datanode1: starting tasktracker, logging to /opt/hadoop/bin/../logs/hadoop-root-tasktracker-datanode1.out

Running jps on namenode1 shows these Java processes:

15438 JobTracker
15159 NameNode
15582 Jps
15362 SecondaryNameNode

After ssh'ing to datanode1, jps shows only:

21417 TaskTracker
21497 Jps


So the datanode isn't running. I checked the logs:

[root@datanode1 logs]# ls
hadoop-root-datanode-datanode1.out    hadoop-root-tasktracker-datanode1.log    hadoop-root-tasktracker-datanode1.out.2
hadoop-root-datanode-datanode1.out.1  hadoop-root-tasktracker-datanode1.out
hadoop-root-datanode-datanode1.out.2  hadoop-root-tasktracker-datanode1.out.1

[root@datanode1 logs]# cat hadoop-root-datanode-datanode1.out
Unrecognized option: -jvm
Could not create the Java virtual machine.


    What should I do next to solve this problem?


    Thanks,
    devilsp

Re: Map task can't execute /bin/ls on solaris

Posted by Adi <ad...@gmail.com>.
Some other options that affect the number of mappers and reducers and the
amount of memory they use:

mapred.child.java.opts  -Xmx1200M  (e.g. heap for your mapper/reducer, or
any other Java options) - this decides how many (512M) slots each
mapper takes up

The split size will affect the number of splits (and in effect the number
of mappers), depending on your input file and input format (in case you are
using FileInputFormat or deriving from it):
mapreduce.input.fileinputformat.split.maxsize  <max number of bytes>
mapreduce.input.fileinputformat.split.minsize  <min number of bytes>
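
A hedged sketch of setting these per job from the command line (this assumes your driver goes through ToolRunner/GenericOptionsParser so -D options are parsed; the jar, class, and path names are made up):

# 256MB max / 64MB min split size => roughly one mapper per 256MB of input
hadoop jar csv2seq.jar Csv2SeqDriver \
    -D mapred.child.java.opts=-Xmx1200m \
    -D mapreduce.input.fileinputformat.split.maxsize=268435456 \
    -D mapreduce.input.fileinputformat.split.minsize=67108864 \
    /user/xiaobo/csv /user/xiaobo/seq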

-Adi



On Thu, Aug 11, 2011 at 2:11 AM, Harsh J <ha...@cloudera.com> wrote:

> It applies to all Hadoop daemon processes (JT, TT, NN, SNN, DN) and
> all direct commands executed via the 'hadoop' executable.
>
> On Thu, Aug 11, 2011 at 11:37 AM, Xiaobo Gu <gu...@gmail.com>
> wrote:
> > Is HADOOP_HEAPSIZE set for all Hadoop related Java processes, or just
> > one Java process?
> >
> > Regards,
> >
> > Xiaobo Gu
> >
> > On Thu, Aug 11, 2011 at 1:07 PM, Lance Norskog <go...@gmail.com>
> wrote:
> >> If the server is dedicated to this job, you might as well give it
> >> 10-15g. After that shakes out, try changing the number of mappers &
> >> reducers.
> >>
> >> On Tue, Aug 9, 2011 at 2:06 AM, Xiaobo Gu <gu...@gmail.com>
> wrote:
> >>> Hi Adi,
> >>>
> >>> Thanks for your response, on an SMP server with 32G RAM and 8 Cores,
> >>> what's your suggestion for setting HADOOP_HEAPSIZE, the server will be
> >>> dedicated for a Single Node Hadoop with 1 data node instance, and the
> >>> it will run 4 mapper and reducer tasks .
> >>>
> >>> Regards,
> >>>
> >>> Xiaobo Gu
> >>>
> >>>
> >>> On Sun, Aug 7, 2011 at 11:35 PM, Adi <ad...@gmail.com> wrote:
> >>>>>>Caused by: java.io.IOException: error=12, Not enough space
> >>>>
> >>>> You either do not have enough memory allocated to your hadoop
> daemons(via
> >>>> HADOOP_HEAPSIZE) or swap space.
> >>>>
> >>>> -Adi
> >>>>
> >>>> On Sun, Aug 7, 2011 at 5:48 AM, Xiaobo Gu <gu...@gmail.com>
> wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> I am trying to write a map-reduce job to convert csv files to
> >>>>> sequencefiles, but the job fails with the following error:
> >>>>> java.lang.RuntimeException: Error while running command to get file
> >>>>> permissions : java.io.IOException: Cannot run program "/bin/ls":
> >>>>> error=12, Not enough space
> >>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
> >>>>>        at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
> >>>>>        at org.apache.hadoop.util.Shell.run(Shell.java:182)
> >>>>>        at
> >>>>>
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
> >>>>>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
> >>>>>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:540)
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:37)
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:417)
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
> >>>>>        at
> >>>>> org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
> >>>>>        at
> >>>>>
> org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
> >>>>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
> >>>>>        at java.security.AccessController.doPrivileged(Native Method)
> >>>>>        at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>>        at
> >>>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> >>>>>        at org.apache.hadoop.mapred.Child.main(Child.java:253)
> >>>>> Caused by: java.io.IOException: error=12, Not enough space
> >>>>>        at java.lang.UNIXProcess.forkAndExec(Native Method)
> >>>>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
> >>>>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> >>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
> >>>>>        ... 16 more
> >>>>>
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:442)
> >>>>>        at
> >>>>>
> org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
> >>>>>        at
> >>>>> org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
> >>>>>        at
> >>>>>
> org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
> >>>>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
> >>>>>        at java.security.AccessController.doPrivileged(Native Method)
> >>>>>        at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>>        at
> >>>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> >>>>>        at org.apache.hadoop.mapred.Child.main(Child.java:253)
> >>>>>
> >>>>
> >>>
> >>
> >>
> >>
> >> --
> >> Lance Norskog
> >> goksron@gmail.com
> >>
> >
>
>
>
> --
> Harsh J
>

Re: Map task can't execute /bin/ls on solaris

Posted by Harsh J <ha...@cloudera.com>.
It applies to all Hadoop daemon processes (JT, TT, NN, SNN, DN) and
all direct commands executed via the 'hadoop' executable.
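
So it is usually set once in conf/hadoop-env.sh, with the per-daemon *_OPTS variables available when a single daemon needs a different size. A sketch with example values only (on HotSpot the last -Xmx on the command line wins, which is what makes the override work):

# conf/hadoop-env.sh
export HADOOP_HEAPSIZE=2000   # max heap, in MB, applied to every daemon
# give just the namenode a bigger heap
export HADOOP_NAMENODE_OPTS="-Xmx4096m $HADOOP_NAMENODE_OPTS"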

On Thu, Aug 11, 2011 at 11:37 AM, Xiaobo Gu <gu...@gmail.com> wrote:
> Is HADOOP_HEAPSIZE set for all Hadoop related Java processes, or just
> one Java process?
>
> [rest of quoted thread trimmed]



-- 
Harsh J

Re: Map task can't execute /bin/ls on solaris

Posted by Xiaobo Gu <gu...@gmail.com>.
Is HADOOP_HEAPSIZE set for all Hadoop-related Java processes, or just
one Java process?

Regards,

Xiaobo Gu

On Thu, Aug 11, 2011 at 1:07 PM, Lance Norskog <go...@gmail.com> wrote:
> If the server is dedicated to this job, you might as well give it
> 10-15g. After that shakes out, try changing the number of mappers &
> reducers.
>
> [rest of quoted thread trimmed]

Re: Map task can't execute /bin/ls on solaris

Posted by Lance Norskog <go...@gmail.com>.
If the server is dedicated to this job, you might as well give it
10-15g. After that shakes out, try changing the number of mappers &
reducers.
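
The concurrent task counts are per-TaskTracker settings in conf/mapred-site.xml; a sketch with placeholder values for an 8-core box (restart the TaskTracker after changing them):

<!-- inside the <configuration> element of conf/mapred-site.xml -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>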

On Tue, Aug 9, 2011 at 2:06 AM, Xiaobo Gu <gu...@gmail.com> wrote:
> Hi Adi,
>
> Thanks for your response. On an SMP server with 32G RAM and 8 cores,
> what's your suggestion for setting HADOOP_HEAPSIZE? The server will be
> dedicated to a single-node Hadoop setup with 1 datanode instance, and it
> will run 4 mapper and reducer tasks.
>
> [rest of quoted thread trimmed]



-- 
Lance Norskog
goksron@gmail.com

Re: Map task can't execute /bin/ls on solaris

Posted by Xiaobo Gu <gu...@gmail.com>.
Hi Adi,

Thanks for your response. On an SMP server with 32G RAM and 8 cores,
what's your suggestion for setting HADOOP_HEAPSIZE? The server will be
dedicated to a single-node Hadoop setup with 1 datanode instance, and it
will run 4 mapper and reducer tasks.

Regards,

Xiaobo Gu


On Sun, Aug 7, 2011 at 11:35 PM, Adi <ad...@gmail.com> wrote:
>>>Caused by: java.io.IOException: error=12, Not enough space
>
> You either do not have enough memory allocated to your hadoop daemons(via
> HADOOP_HEAPSIZE) or swap space.
>
> -Adi
>
> [rest of quoted thread trimmed]

Re: Map task can't execute /bin/ls on solaris

Posted by Adi <ad...@gmail.com>.
>>Caused by: java.io.IOException: error=12, Not enough space

You either do not have enough memory allocated to your Hadoop daemons (via
HADOOP_HEAPSIZE) or not enough swap space.
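
error=12 is ENOMEM coming out of fork(): Solaris reserves swap for the whole child address space when a process forks, so a JVM running with a large heap can fail to spawn even a tiny /bin/ls. You can confirm while the job runs with standard Solaris commands:

swap -s          # summary: allocated, reserved, and available swap
swap -l          # configured swap devices and free blocks
prstat -s size   # processes sorted by virtual memory size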

-Adi

On Sun, Aug 7, 2011 at 5:48 AM, Xiaobo Gu <gu...@gmail.com> wrote:

> Hi,
>
> I am trying to write a map-reduce job to convert csv files to
> sequencefiles, but the job fails with the following error:
> java.lang.RuntimeException: Error while running command to get file
> permissions : java.io.IOException: Cannot run program "/bin/ls":
> error=12, Not enough space
>        [stack trace snipped]
>