Posted to user@hadoop.apache.org by fab wol <da...@gmail.com> on 2013/10/11 13:59:15 UTC

Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Hey everyone, I've been supplied with a decent ten-node CDH 4.4 cluster,
only 7 days old, and someone has already tried some HBase stuff on it. Now I
wanted to try some MR stuff on it, but even starting a job is not possible
(not even the wordcount example). The jobtracker's error log is about 700k
lines long, and it consists mainly of these lines, repeated over and over:

2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: java.lang.NullPointerException
2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server handler 22 on 8021, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true, true, true, -1), rpc version=2, client version=32, methodsFingerPrint=-159967141 from 10.160.25.250:44389: error: java.io.IOException: java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
    at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
    at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
    at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
    at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249
java.io.IOException: Cannot run program "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"): java.io.IOException: error=13, Permission denied
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
    at org.apache.hadoop.util.Shell.run(Shell.java:188)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
    at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
    at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
    at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
    at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
    at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
    at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
    at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
Caused by: java.io.IOException: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
    ... 21 more

It doesn't matter whether it is a plain Hadoop job or an Oozie-submitted job;
there seems to be something wrong in the basic configuration. Does anyone have
an idea?

Cheers
Wolli
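
As a first check for the error=13 above, one could run the following on the
jobtracker host. The paths are taken from the log; the numbered process
directory is specific to this cluster and changes over time, so it is only an
example:

  # Show owner and mode bits for every component of the path, since
  # "Permission denied" can come from a parent directory rather than the file.
  namei -l /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py

  # Run the script the way ScriptBasedMapping does, but as the user the
  # JobTracker runs under (mapred, according to the log above).
  sudo -u mapred /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249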

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by fab wol <da...@gmail.com>.
This looks like it is related to my problem, right?

https://issues.apache.org/jira/browse/MAPREDUCE-50

Cheers
Wolli
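
Given that, as quoted below, a chmod 777 on topology.py alone did not help,
two further checks would fit these symptoms. This is only a sketch, using the
same paths as in the log; whether either applies here is an assumption:

  # The topology script is executed by the JobTracker daemon itself, not by
  # the user who submits the job, so confirm which user that daemon runs as.
  ps -C java -o user,args | grep -i jobtracker

  # error=13 on exec can also mean the filesystem is mounted noexec; on some
  # distributions /run is a noexec tmpfs, in which case the file permissions
  # make no difference at all.
  findmnt --target /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER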


2013/10/11 DSuiter RDX <ds...@rdx.com>

> It looks like you are correct, and I did not have the right solution, I
> apologize. I'm not sure if the other nodes need to be involved either. Now
> I'm hoping someone with deeper knowledge will step in, because I'm curious
> also! Some of the most knowledgeable people on here are on US Pacific Time,
> so you will probably get more responses in a few hours. Sorry I couldn't be
> of more assistance.
>
> Sincerely,
> *Devin Suiter*
> Jr. Data Solutions Software Engineer
> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
> Google Voice: 412-256-8556 | www.rdx.com
>
>
> On Fri, Oct 11, 2013 at 8:52 AM, fab wol <da...@gmail.com> wrote:
>
>> this line:
>>
>> 2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation:
>> PriviledgedActionException as:mapred (auth:SIMPLE)
>> cause:java.io.IOException: java.lang.NullPointerException
>>
>> indicates, imho, that the user "mapred" is used for execution (fyi: I am
>> submitting the job from the CLI on another node:  hadoop jar
>> /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
>> wordcount hdfs_input_path hdfs_output_path). The file permissions for
>> topology.py are:
>>
>> -rwxr-x--x  1 mapred hadoop 1382 Oct 10 15:02 topology.py*
>>
>> I temporarily set the permissions to 777 to see if anything changed,
>> but it didn't help ... I checked only the jobtracker; are the other nodes
>> important for this as well?
>>
>> Thanks already in advance, especially for the quick response!
>> Wolli
>>
>>
>> 2013/10/11 DSuiter RDX <ds...@rdx.com>
>>
>>> The user running the job (might not be your username depending on your
>>> setup) does not appear to have executable permissions on the jobtracker
>>> cluster topology python script - I'm basing this on the lines:
>>>
>>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>>> 10.160.25.249
>>> java.io.IOException: Cannot run program
>>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>>> java.io.IOException: error=13, Permission denied
>>>
>>> So checking on the permissions for that file, determining what user is
>>> kicking off your job, which depends on how you submit it, and making sure
>>> that user has the execute permission on that file will probably fix this.
>>>
>>> If you are using a management console, such as Cloudera SCM, when you
>>> submit jobs, they are run as an application user, so, Flume services run
>>> under the "Flume" user, HBase jobs will typically run under the HBase user,
>>> and so on. It can cause some surprises if you do not expect it.
>>>
>>> *Devin Suiter*
>>> Jr. Data Solutions Software Engineer
>>> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
>>> Google Voice: 412-256-8556 | www.rdx.com
>>>
>>>
>>> On Fri, Oct 11, 2013 at 7:59 AM, fab wol <da...@gmail.com> wrote:
>>>
>>>> Hey everyone, I've got supplied with a decent ten node CDH 4.4
>>>> cluster, only 7 days old, and someone tried some HBase stuff on it. Now I
>>>> wanted to try some MR Stuff on it, but starting a Job is already not
>>>> possible (even the wordcount example). The error log of the jobtracker
>>>> produces a log 700k lines long but it consists mainly of these lines
>>>> repeatedly:
>>>>
>>>> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost
>>>> tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
>>>> 2013-10-11 10:24:53,033 ERROR
>>>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>>>> as:mapred (auth:SIMPLE) cause:java.io.IOException:
>>>> java.lang.NullPointerException
>>>> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server
>>>> handler 22 on 8021, call
>>>> heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true,
>>>> true, true, -1), rpc version=2, client version=32,
>>>> methodsFingerPrint=-159967141 from 10.160.25.250:44389: error:
>>>> java.io.IOException: java.lang.NullPointerException
>>>> java.io.IOException: java.lang.NullPointerException
>>>> at
>>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>>>>  at
>>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>>>  at
>>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>>> at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>>>  at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>>>  at
>>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>  at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>>> Exception running
>>>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>>>> 10.160.25.249
>>>> java.io.IOException: Cannot run program
>>>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>>>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>>>> java.io.IOException: error=13, Permission denied
>>>>  at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>>  at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>>> at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>>  at
>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>>  at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>>>>  at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>>>> at
>>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>>  at
>>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>>>>  at
>>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>>> at
>>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>>>  at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>>> at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>  at java.lang.reflect.Method.invoke(Method.java:597)
>>>> at
>>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>>  at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>>> Caused by: java.io.IOException: java.io.IOException: error=13,
>>>> Permission denied
>>>> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>>>  at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>>> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>>>  ... 21 more
>>>>
>>>> it doesn't matter if it is a pure hadoop job or a oozie submitted job.
>>>> there seems to be something wrong in the basic configuration. Anyone an
>>>> idea?
>>>>
>>>> Cheers
>>>> Wolli
>>>>
>>>
>>>
>>
>

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by DSuiter RDX <ds...@rdx.com>.
It looks like you are correct, and I did not have the right solution, I
apologize. I'm not sure if the other nodes need to be involved either. Now
I'm hoping someone with deeper knowledge will step in, because I'm curious
also! Some of the most knowledgeable people on here are on US Pacific Time,
so you will probably get more responses in a few hours. Sorry I couldn't be
of more assistance.

Sincerely,
*Devin Suiter*
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com


On Fri, Oct 11, 2013 at 8:52 AM, fab wol <da...@gmail.com> wrote:

> this line:
>
> 2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:mapred (auth:SIMPLE)
> cause:java.io.IOException: java.lang.NullPointerException
>
> indicates, imho, that the user "mapred" is used for execution (fyi: I am
> submitting the job from the CLI on another node:  hadoop jar
> /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
> wordcount hdfs_input_path hdfs_output_path). The file permissions for
> topology.py are:
>
> -rwxr-x--x  1 mapred hadoop 1382 Oct 10 15:02 topology.py*
>
> I temporarily set the permissions to 777 to see if anything changed,
> but it didn't help ... I checked only the jobtracker; are the other nodes
> important for this as well?
>
> Thanks already in advance, especially for the quick response!
> Wolli
>
>
> 2013/10/11 DSuiter RDX <ds...@rdx.com>
>
>> The user running the job (might not be your username depending on your
>> setup) does not appear to have executable permissions on the jobtracker
>> cluster topology python script - I'm basing this on the lines:
>>
>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>> Exception running
>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>> 10.160.25.249
>> java.io.IOException: Cannot run program
>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>> java.io.IOException: error=13, Permission denied
>>
>> So checking on the permissions for that file, determining what user is
>> kicking off your job, which depends on how you submit it, and making sure
>> that user has the execute permission on that file will probably fix this.
>>
>> If you are using a management console, such as Cloudera SCM, when you
>> submit jobs, they are run as an application user, so, Flume services run
>> under the "Flume" user, HBase jobs will typically run under the HBase user,
>> and so on. It can cause some surprises if you do not expect it.
>>
>> *Devin Suiter*
>> Jr. Data Solutions Software Engineer
>> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
>> Google Voice: 412-256-8556 | www.rdx.com
>>
>>
>> On Fri, Oct 11, 2013 at 7:59 AM, fab wol <da...@gmail.com> wrote:
>>
>>> Hey everyone, I've got supplied with a decent ten node CDH 4.4 cluster,
>>> only 7 days old, and someone tried some HBase stuff on it. Now I wanted to
>>> try some MR Stuff on it, but starting a Job is already not possible (even
>>> the wordcount example). The error log of the jobtracker produces a log 700k
>>> lines long but it consists mainly of these lines repeatedly:
>>>
>>> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost
>>> tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
>>> 2013-10-11 10:24:53,033 ERROR
>>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>>> as:mapred (auth:SIMPLE) cause:java.io.IOException:
>>> java.lang.NullPointerException
>>> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 22 on 8021, call
>>> heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true,
>>> true, true, -1), rpc version=2, client version=32,
>>> methodsFingerPrint=-159967141 from 10.160.25.250:44389: error:
>>> java.io.IOException: java.lang.NullPointerException
>>> java.io.IOException: java.lang.NullPointerException
>>> at
>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>> at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>>  at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>>  at
>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>  at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>>  at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>>> 10.160.25.249
>>> java.io.IOException: Cannot run program
>>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>>> java.io.IOException: error=13, Permission denied
>>>  at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>  at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>> at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>  at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>  at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>>>  at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>>> at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>> at
>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>>  at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>  at java.lang.reflect.Method.invoke(Method.java:597)
>>> at
>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>  at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>> Caused by: java.io.IOException: java.io.IOException: error=13,
>>> Permission denied
>>> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>>  at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>>  ... 21 more
>>>
>>> it doesn't matter if it is a pure hadoop job or a oozie submitted job.
>>> there seems to be something wrong in the basic configuration. Anyone an
>>> idea?
>>>
>>> Cheers
>>> Wolli
>>>
>>
>>
>
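
One caveat when fixing permissions by hand under /run/cloudera-scm-agent/process:
the Cloudera agent creates a fresh numbered directory for the role on every
restart, so a fix applied to an old directory will not carry over. A rough way
to find the directory the running JobTracker is actually using (the glob and
the 556 number are illustrative; the authoritative path is the one in the WARN
line of the jobtracker log):

  # Newest JobTracker process directories first; the live one is normally on top.
  ls -dlt /run/cloudera-scm-agent/process/*JOBTRACKER* | head -3

  # Then check the script inside that directory.
  ls -l /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py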

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by DSuiter RDX <ds...@rdx.com>.
It looks like you are correct, and I did not have the right solution, I
apologize. I'm not sure if the other nodes need to be involved either. Now
I'm hoping someone with deeper knowledge will step in, because I'm curious
also! Some of the most knowledgeable people on here are on US Pacific Time,
so you will probably get more responses in a few hours. Sorry I couldn't be
of more assistance.

Sincerely,
*Devin Suiter*
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com


On Fri, Oct 11, 2013 at 8:52 AM, fab wol <da...@gmail.com> wrote:

> this line:
>
> 2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:mapred (auth:SIMPLE)
> cause:java.io.IOException: java.lang.NullPointerException
>
> is imho indicating that i am using the user "mapred" for executing (fyi:
> submitting the job from the CLI (  hadoop jar
> /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
> wordcount hdfs_input_path hdfs_output_path) from another node) ... the file
> permissions for this file are:
>
> -rwxr-x--x  1 mapred hadoop 1382 Oct 10 15:02 topology.py*
>
> i temporarly had set the permissions to 777 to see if something changes,
> but it didn't ... I checked only the jobtracker, are the other nodes
> important for this as well?
>
> thx already in advance, especially for the quick response!
> Wolli
>
>
> 2013/10/11 DSuiter RDX <ds...@rdx.com>
>
>> The user running the job (might not be your username depending on your
>> setup) does not appear to have executable permissions on the jobtracker
>> cluster topology python script - I'm basing this on the lines:
>>
>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>> Exception running
>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>> 10.160.25.249
>> java.io.IOException: Cannot run program
>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>> java.io.IOException: error=13, Permission denied
>>
>> So checking on the permissions for that file, determining what user is
>> kicking off your job, which depends on how you submit it, and making sure
>> that user has the execute permission on that file will probably fix this.
>>
>> If you are using a management console, such as Cloudera SCM, when you
>> submit jobs, they are run as an application user, so, Flume services run
>> under the "Flume" user, HBase jobs will typically run under the HBase user,
>> and so on. It can cause some surprises if you do not expect it.
>>
>> *Devin Suiter*
>> Jr. Data Solutions Software Engineer
>> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
>> Google Voice: 412-256-8556 | www.rdx.com
>>
>>
>> On Fri, Oct 11, 2013 at 7:59 AM, fab wol <da...@gmail.com> wrote:
>>
>>> Hey everyone, I've got supplied with a decent ten node CDH 4.4 cluster,
>>> only 7 days old, and someone tried some HBase stuff on it. Now I wanted to
>>> try some MR Stuff on it, but starting a Job is already not possible (even
>>> the wordcount example). The error log of the jobtracker produces a log 700k
>>> lines long but it consists mainly of these lines repeatedly:
>>>
>>> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost
>>> tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
>>> 2013-10-11 10:24:53,033 ERROR
>>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>>> as:mapred (auth:SIMPLE) cause:java.io.IOException:
>>> java.lang.NullPointerException
>>> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 22 on 8021, call
>>> heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true,
>>> true, true, -1), rpc version=2, client version=32,
>>> methodsFingerPrint=-159967141 from 10.160.25.250:44389: error:
>>> java.io.IOException: java.lang.NullPointerException
>>> java.io.IOException: java.lang.NullPointerException
>>> at
>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>> at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>>  at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>>  at
>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>  at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>>  at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>>> 10.160.25.249
>>> java.io.IOException: Cannot run program
>>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>>> java.io.IOException: error=13, Permission denied
>>>  at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>  at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>> at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>>  at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>>  at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>>>  at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>>> at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>>>  at
>>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>> at
>>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>>  at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>  at java.lang.reflect.Method.invoke(Method.java:597)
>>> at
>>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>>  at java.security.AccessController.doPrivileged(Native Method)
>>>  at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>>> Caused by: java.io.IOException: java.io.IOException: error=13,
>>> Permission denied
>>> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>>  at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>>  ... 21 more
>>>
>>> it doesn't matter if it is a pure hadoop job or a oozie submitted job.
>>> there seems to be something wrong in the basic configuration. Anyone an
>>> idea?
>>>
>>> Cheers
>>> Wolli
>>>
>>
>>
>

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by fab wol <da...@gmail.com>.
This line:

2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation:
PriviledgedActionException as:mapred (auth:SIMPLE)
cause:java.io.IOException: java.lang.NullPointerException

indicates, IMHO, that the job is executed as the user "mapred" (FYI: I am
submitting the job from the CLI on another node with:  hadoop jar
/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
wordcount hdfs_input_path hdfs_output_path). The file permissions for
topology.py are:

-rwxr-x--x  1 mapred hadoop 1382 Oct 10 15:02 topology.py*

I temporarily set the permissions to 777 to see if anything changes, but it
didn't help ... I only checked the jobtracker node; are the other nodes
important for this as well?
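
For what it's worth, something like the following should show whether the
mapred user can actually traverse the path and execute the script (the
numbered process directory is just the one from the log; it changes whenever
the Cloudera agent restarts the role):

  # long listing of every path component, including directory execute bits
  namei -l /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py

  # the script itself
  ls -l /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py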

Thanks in advance, especially for the quick response!
Wolli


2013/10/11 DSuiter RDX <ds...@rdx.com>

> The user running the job (might not be your username depending on your
> setup) does not appear to have executable permissions on the jobtracker
> cluster topology python script - I'm basing this on the lines:
>
> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
> Exception running
> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
> 10.160.25.249
> java.io.IOException: Cannot run program
> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
> java.io.IOException: error=13, Permission denied
>
> So checking on the permissions for that file, determining what user is
> kicking off your job, which depends on how you submit it, and making sure
> that user has the execute permission on that file will probably fix this.
>
> If you are using a management console, such as Cloudera SCM, when you
> submit jobs, they are run as an application user, so, Flume services run
> under the "Flume" user, HBase jobs will typically run under the HBase user,
> and so on. It can cause some surprises if you do not expect it.
>
> *Devin Suiter*
> Jr. Data Solutions Software Engineer
> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
> Google Voice: 412-256-8556 | www.rdx.com
>
>
> On Fri, Oct 11, 2013 at 7:59 AM, fab wol <da...@gmail.com> wrote:
>
>> Hey everyone, I've got supplied with a decent ten node CDH 4.4 cluster,
>> only 7 days old, and someone tried some HBase stuff on it. Now I wanted to
>> try some MR Stuff on it, but starting a Job is already not possible (even
>> the wordcount example). The error log of the jobtracker produces a log 700k
>> lines long but it consists mainly of these lines repeatedly:
>>
>> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost
>> tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
>> 2013-10-11 10:24:53,033 ERROR
>> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
>> as:mapred (auth:SIMPLE) cause:java.io.IOException:
>> java.lang.NullPointerException
>> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 22 on 8021, call
>> heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true,
>> true, true, -1), rpc version=2, client version=32,
>> methodsFingerPrint=-159967141 from 10.160.25.250:44389: error:
>> java.io.IOException: java.lang.NullPointerException
>> java.io.IOException: java.lang.NullPointerException
>> at
>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>>  at
>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>  at
>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>> at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>  at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>>  at
>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>  at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>  at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
>> Exception running
>> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
>> 10.160.25.249
>> java.io.IOException: Cannot run program
>> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
>> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
>> java.io.IOException: error=13, Permission denied
>>  at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>  at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>> at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>  at
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>  at
>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>>  at
>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>> at
>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>  at
>> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>>  at
>> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>> at
>> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>  at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>  at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>  at java.security.AccessController.doPrivileged(Native Method)
>>  at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>> Caused by: java.io.IOException: java.io.IOException: error=13, Permission
>> denied
>> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>  at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>  ... 21 more
>>
>> it doesn't matter if it is a pure hadoop job or a oozie submitted job.
>> there seems to be something wrong in the basic configuration. Anyone an
>> idea?
>>
>> Cheers
>> Wolli
>>
>
>

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by DSuiter RDX <ds...@rdx.com>.
The user running the job (which might not be your username, depending on
your setup) does not appear to have execute permission on the jobtracker's
cluster topology Python script - I'm basing this on these lines:

2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
Exception running
/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
10.160.25.249
java.io.IOException: Cannot run program
"/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
java.io.IOException: error=13, Permission denied

So checking the permissions on that file, determining which user is
kicking off your job (which depends on how you submit it), and making sure
that user has execute permission on that file will probably fix this.
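
One quick way to test it is to run the script by hand as that user, with an
IP address as the argument, which is how ScriptBasedMapping invokes it (path
and IP taken from your log; adjust the user if your JobTracker runs as
something else):

  # reproduce the call the JobTracker makes when resolving a tracker's rack
  sudo -u mapred /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249

  # check the execute bit on the containing directory, not just the file
  ls -ld /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER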

If you are using a management console such as Cloudera SCM, jobs you submit
are run as an application user: Flume services run under the "Flume" user,
HBase jobs will typically run under the HBase user, and so on. This can
cause some surprises if you do not expect it.
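
If you are not sure which account a given role is actually running as, the
process owner on the node usually settles it (a hedged check; the grep
pattern just needs to match the role's command line):

  # show the owner of the JobTracker JVM on the jobtracker host
  ps -eo user,pid,cmd | grep -i '[j]obtracker'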

*Devin Suiter*
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com


On Fri, Oct 11, 2013 at 7:59 AM, fab wol <da...@gmail.com> wrote:

> Hey everyone, I've got supplied with a decent ten node CDH 4.4 cluster,
> only 7 days old, and someone tried some HBase stuff on it. Now I wanted to
> try some MR Stuff on it, but starting a Job is already not possible (even
> the wordcount example). The error log of the jobtracker produces a log 700k
> lines long but it consists mainly of these lines repeatedly:
>
> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost
> tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
> 2013-10-11 10:24:53,033 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:mapred (auth:SIMPLE) cause:java.io.IOException:
> java.lang.NullPointerException
> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 22 on 8021, call
> heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true,
> true, true, -1), rpc version=2, client version=32,
> methodsFingerPrint=-159967141 from 10.160.25.250:44389: error:
> java.io.IOException: java.lang.NullPointerException
> java.io.IOException: java.lang.NullPointerException
> at
> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>  at
> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>  at
> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
> at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>  at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
>  at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
> Exception running
> /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
> 10.160.25.249
> java.io.IOException: Cannot run program
> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
> java.io.IOException: error=13, Permission denied
>  at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>  at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
> at org.apache.hadoop.util.Shell.run(Shell.java:188)
>  at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>  at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>  at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
> at
> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>  at
> org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>  at
> org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
> at
> org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>  at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
> Caused by: java.io.IOException: java.io.IOException: error=13, Permission
> denied
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>  at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>  ... 21 more
>
> it doesn't matter if it is a pure hadoop job or a oozie submitted job.
> there seems to be something wrong in the basic configuration. Anyone an
> idea?
>
> Cheers
> Wolli
>

Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by DSuiter RDX <ds...@rdx.com>.
The user running the job (might not be your username depending on your
setup) does not appear to have executable permissions on the jobtracker
cluster topology python script - I'm basing this on the lines:

2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
Exception running
/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py
10.160.25.249
java.io.IOException: Cannot run program
"/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
java.io.IOException: error=13, Permission denied

So checking on the permissions for that file, determining what user is
kicking off your job, which depends on how you submit it, and making sure
that user has the execute permission on that file will probably fix this.

If you are using a management console, such as Cloudera SCM, when you
submit jobs, they are run as an application user, so, Flume services run
under the "Flume" user, HBase jobs will typically run under the HBase user,
and so on. It can cause some surprises if you do not expect it.

*Devin Suiter*
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com



Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology

Posted by Arun C Murthy <ac...@hortonworks.com>.
Please ask on the CDH lists.

Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



