Posted to user@tez.apache.org by Grandl Robert <rg...@yahoo.com> on 2014/06/19 23:59:27 UTC

Re: crossPlatformifyMREnv exception

Hmm, 


Things are really weird. If I don't add the TEZ jars/conf to HADOOP_CLASSPATH, plain MapReduce jobs run fine. If I add the TEZ jars/conf to HADOOP_CLASSPATH and try to run MapReduce jobs, they fail with this exception:
14/06/19 14:46:40 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403214320220_0001
java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
    at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
    


If I add the TEZ jars/conf to the Hadoop classpath BUT also change the framework name from yarn to yarn-tez and run a Tez application (like orderedwordcount), it succeeds. I can then also run Hadoop MapReduce apps, but they still run as Tez. I am completely confused about why I cannot run plain MapReduce jobs (with framework name = yarn) when TEZ is on the classpath. Is this a bug ?
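
For reference, one quick way to see which MapReduce jars the submit-side classpath can actually resolve (a sketch only, using the install paths that appear later in this thread; adjust to your setup):

# 1) Show the effective classpath the hadoop command will use (the Tez lib dir
#    shows up here if it is on HADOOP_CLASSPATH):
hadoop classpath | tr ':' '\n'

# 2) Check which hadoop-mapreduce-client-common jar Tez bundles, and whether it
#    has the new method at all (no grep output means the method is missing):
ls /home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/hadoop-mapreduce-client-common-*.jar
javap -classpath /home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/hadoop-mapreduce-client-common-2.2.0.jar \
      org.apache.hadoop.mapreduce.v2.util.MRApps | grep crossPlatformifyMREnv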

robert




On Thursday, June 19, 2014 2:34 PM, Jian He <jh...@hortonworks.com> wrote:
 


The problem looks like it's referencing an old jar which doesn't have this
method. Are you running on a single-node cluster? Can you check your cluster's
jar dependencies? If it's easy for you, just refresh the environment and
re-deploy the single-node cluster.

Jian


On Thu, Jun 19, 2014 at 9:15 AM, Grandl Robert <rg...@yahoo.com.invalid>
wrote:

> Any suggestion related to this ?
>
> Thanks,
> robert
>
>
>
> On Wednesday, June 18, 2014 11:56 PM, Grandl Robert
> <rg...@yahoo.com.INVALID> wrote:
>
>
>
> I am using 2.4 for the client as well. Actually I took a tar.gz from
> http://apache.petsads.us/hadoop/common/hadoop-2.4.0/,
>
> and I am trying to run it. I even tried a single node. I am out of ideas why
> this happens.
>
>
>
>
>
> On Wednesday, June 18, 2014 11:52 PM, Jian He <jh...@hortonworks.com> wrote:
>
>
>
> The method crossPlatformifyMREnv was newly added in the 2.4.0 release.
> Which version of the MR client are you using?
> Can you make sure you have the same version of the client jars?
>
> Jian
>
>
>
> On Wed, Jun 18, 2014 at 10:37 PM, Grandl Robert <rg...@yahoo.com.invalid>
> wrote:
>
> Hi guys,
> >
> >I don't know what I did but my Hadoop YARN setup went crazy. I am not able to
> submit any job; it throws the following exception.
> >
> >14/06/18 22:25:19 INFO mapreduce.JobSubmitter: number of splits:1
> >14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1403155404621_0001
> >14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Cleaning up the staging
> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403155404621_0001
> >java.lang.NoSuchMethodError:
> org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
> >    at
> org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
> >    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
> >    at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
> >    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> >    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> >    at java.security.AccessController.doPrivileged(Native Method)
> >    at javax.security.auth.Subject.doAs(Subject.java:415)
> >    at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> >    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> >    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
> >    at
> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
> >    at
> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
> >    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >    at
> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >    at java.lang.reflect.Method.invoke(Method.java:601)
> >    at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
> >
> >
> >I configured all class path and variables as such:
> >export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.4.0
> >export HADOOP_HOME=$HADOOP_COMMON_HOME
> >export HADOOP_HDFS_HOME=$HADOOP_COMMON_HOME
> >export HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME
> >export HADOOP_YARN_HOME=$HADOOP_COMMON_HOME
> >export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_MAPRED_HOME/lib/native/
> >export HADOOP_CONF_DIR=/home/hadoop/rgrandl/conf/
> >export YARN_CONF_DIR=/home/hadoop/rgrandl/conf/
> >export HADOOP_BIN_PATH=$HADOOP_MAPRED_HOME/bin/
> >export HADOOP_SBIN=$HADOOP_MAPRED_HOME/sbin/
> >export HADOOP_LOGS=$HADOOP_HOME/logs
> >export HADOOP_LOG_DIR=$HADOOP_HOME/logs
> >export YARN_LOG_DIR=$HADOOP_HOME/logs
> >
> >export JAVA_HOME=/home/hadoop/rgrandl/java/
> >export HADOOP_USER_CLASSPATH_FIRST=1
> >export YARN_HOME=/home/hadoop/hadoop-2.4.0
> >export TEZ_CONF_DIR=/home/hadoop/rgrandl/conf
> >export TEZ_JARS=/home/hadoop/rgrandl/tez/tez-0.4.0-incubating
> >
> >export HADOOP_PREFIX=$HADOOP_COMMON_HOME
> >
> >export
> HADOOP_CLASSPATH=$HADOOP_HOME:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/*:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/*:/home/hadoop/rgrandl/hive:/home/hadoop/rgrandl/conf
> >
> >export
> PATH=$PATH:$HADOOP_BIN_PATH:$HADOOP_SBIN:$YARN_CONF_DIR:$HADOOP_YARN_HOME:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME:$JAVA_HOME/bin/:/home/hadoop/rgrandl/hive/bin
> >
> >
> >Everything seems to be correct, but I cannot understand this error. It is
> something I have never encountered before.
> >
> >
> >Do you have any hints on it ?
> >
> >Thanks,
> >robert
>

Re: crossPlatformifyMREnv exception

Posted by Grandl Robert <rg...@yahoo.com.INVALID>.
Hi Hitesh,

It seems Tez is bundling the 2.2 MapReduce jars while hadoop-2.4 ships 2.4 ones. Is this the problem?
If yes, how can I fix it?


For tez-0.4 they are:
hadoop@nectar-11:~/rgrandl$ ls tez/tez-0.4.0-incubating/lib/
avro-1.7.4.jar                             hadoop-mapreduce-client-core-2.2.0.jar
commons-cli-1.2.jar                        hadoop-mapreduce-client-shuffle-2.2.0.jar
commons-collections4-4.0.jar               jettison-1.3.4.jar
guava-11.0.2.jar                           protobuf-java-2.5.0.jar
guice-3.0.jar                              snappy-java-1.0.4.1.jar
hadoop-mapreduce-client-common-2.2.0.jar   


For Hadoop-2.4 they are:
hadoop@nectar-11:~/rgrandl$ ls ~/hadoop-2.4.0/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-2.4.0.jar          hadoop-mapreduce-client-jobclient-2.4.0-tests.jar
hadoop-mapreduce-client-common-2.4.0.jar      hadoop-mapreduce-client-shuffle-2.4.0.jar
hadoop-mapreduce-client-core-2.4.0.jar          hadoop-mapreduce-examples-2.4.0.jar
hadoop-mapreduce-client-hs-2.4.0.jar          lib
hadoop-mapreduce-client-hs-plugins-2.4.0.jar  lib-examples
hadoop-mapreduce-client-jobclient-2.4.0.jar   sources


robert



On Thursday, June 19, 2014 4:15 PM, Hitesh Shah <hi...@apache.org> wrote:
 


Hi Robert, 

If you look at the tez jars, you will probably see a couple of mapreduce jars that Tez depends on under TEZ_BASE_DIR/lib/. Are these jars inconsistent with respect to the version of hadoop/mapreduce on the cluster? 

— Hitesh



Re: crossPlatformifyMREnv exception

Posted by Grandl Robert <rg...@yahoo.com>.
Hitesh,

Thanks so much for your advice. You are right, there seems to be an issue with setting the Hive execution engine to mr after it has been tez. I configured it in hive-site.xml, and after also moving the mapreduce-2.4 jars into tez-0.4 it now works just fine.

Hurray!


Thanks again for your help,
robert



On Thursday, June 19, 2014 5:12 PM, Hitesh Shah <hi...@apache.org> wrote:
 


Hi Robert,

Copying the hadoop-mapreduce-*-2.4 jars to the tez dir was what I would have recommended. Tez is compatible with both 2.2 and 2.4 so either set should work. 

As for everything running as Tez, I am guessing you somehow have yarn-tez set in one of the config files. For Hive queries, I am guessing you are already familiar with this - you can check by just running "set hive.execution.engine" to see what the actual value is. I believe there might be an issue in Hive where switching from mode=tez to mode=mr sometimes ends up re-setting mapreduce.framework.name=yarn-tez. I would suggest explicitly setting mode=mr or mode=tez in hive-site.xml itself to see if it addresses your issue. 

thanks
— Hitesh


On Jun 19, 2014, at 4:45 PM, Grandl Robert <rg...@yahoo.com.INVALID> wrote:

> I tried to copy hadoop-mapreduce-client-2.4* from hadoop-2.4 to tez-0.4, and also copied it to HDFS under /apps/tez/lib, and ran Hive. But even if I set hive.execution.engine=tez or mr, jobs run only as Tez :). (framework.name=yarn)
> 
> 
> I know I have tried hive-0.13 with tez-0.5 before and someone said they are not compatible. Now I am running hive-0.13 + tez-0.4 + hadoop-2.4, but I am still not able to run Hive queries correctly over Tez or MapReduce.
> 
> 
> I have already spent way too much time on this and it is still not running properly. Do you have any other suggestions on what to try?

Re: crossPlatformifyMREnv exception

Posted by Hitesh Shah <hi...@apache.org>.
Hi Robert,

Copying the hadoop-mapreduce-*-2.4 jars to the tez dir was what I would have recommended. Tez is compatible with both 2.2 and 2.4 so either set should work. 
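
A minimal sketch of that swap, using the directories mentioned earlier in this thread (paths and the HDFS staging location are the ones from this cluster; adjust to your install):

# Sketch only: replace Tez's bundled 2.2.0 MapReduce client jars with the cluster's 2.4.0 ones.
TEZ_LIB=/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib
MR_JARS=/home/hadoop/hadoop-2.4.0/share/hadoop/mapreduce

rm $TEZ_LIB/hadoop-mapreduce-client-*-2.2.0.jar
cp $MR_JARS/hadoop-mapreduce-client-common-2.4.0.jar \
   $MR_JARS/hadoop-mapreduce-client-core-2.4.0.jar \
   $MR_JARS/hadoop-mapreduce-client-shuffle-2.4.0.jar $TEZ_LIB/

# If the Tez libs are also staged on HDFS (e.g. /apps/tez/lib, as mentioned earlier
# in the thread), refresh that copy too:
hadoop fs -rm /apps/tez/lib/hadoop-mapreduce-client-*-2.2.0.jar
hadoop fs -put $TEZ_LIB/hadoop-mapreduce-client-*-2.4.0.jar /apps/tez/lib/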

As for everything running as Tez, I am guessing you somehow have yarn-tez set in one of the config files. For Hive queries, I am guessing you are already familiar with this - you can check by just running "set hive.execution.engine" to see what the actual value is. I believe there might be an issue in Hive where switching from mode=tez to mode=mr sometimes ends up re-setting mapreduce.framework.name=yarn-tez. I would suggest explicitly setting mode=mr or mode=tez in hive-site.xml itself to see if it addresses your issue. 
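
Concretely, the check and an explicit override could look something like this (a sketch only; some_table is just a placeholder):

# Check what the Hive session actually resolves the engine and framework name to:
hive -e 'set hive.execution.engine; set mapreduce.framework.name;'

# Force a particular engine for a single invocation:
hive --hiveconf hive.execution.engine=mr  -e 'select count(*) from some_table;'
hive --hiveconf hive.execution.engine=tez -e 'select count(*) from some_table;'

# Or pin it in hive-site.xml, as suggested above:
#   <property>
#     <name>hive.execution.engine</name>
#     <value>mr</value>   <!-- or tez -->
#   </property>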

thanks
— Hitesh

>>>>      at
>>> org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
>>>>      at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
>>>>      at
>>> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
>>>>      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>>>>      at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>>>>      at java.security.AccessController.doPrivileged(Native Method)
>>>>      at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>      at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>>>      at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>>>>      at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
>>>>      at
>>> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
>>>>      at
>>> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
>>>>      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>>>      at
>>> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
>>>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>      at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>      at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>      at java.lang.reflect.Method.invoke(Method.java:601)
>>>>      at
>>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>>>> 
>>>> 
>>>> I configured all class path and variables as such:
>>>> export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.4.0
>>>> export HADOOP_HOME=$HADOOP_COMMON_HOME
>>>> export HADOOP_HDFS_HOME=$HADOOP_COMMON_HOME
>>>> export HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME
>>>> export HADOOP_YARN_HOME=$HADOOP_COMMON_HOME
>>>> export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_MAPRED_HOME/lib/native/
>>>> export HADOOP_CONF_DIR=/home/hadoop/rgrandl/conf/
>>>> export YARN_CONF_DIR=/home/hadoop/rgrandl/conf/
>>>> export HADOOP_BIN_PATH=$HADOOP_MAPRED_HOME/bin/
>>>> export HADOOP_SBIN=$HADOOP_MAPRED_HOME/sbin/
>>>> export HADOOP_LOGS=$HADOOP_HOME/logs
>>>> export HADOOP_LOG_DIR=$HADOOP_HOME/logs
>>>> export YARN_LOG_DIR=$HADOOP_HOME/logs
>>>> 
>>>> export JAVA_HOME=/home/hadoop/rgrandl/java/
>>>> export HADOOP_USER_CLASSPATH_FIRST=1
>>>> export YARN_HOME=/home/hadoop/hadoop-2.4.0
>>>> export TEZ_CONF_DIR=/home/hadoop/rgrandl/conf
>>>> export TEZ_JARS=/home/hadoop/rgrandl/tez/tez-0.4.0-incubating
>>>> 
>>>> export HADOOP_PREFIX=$HADOOP_COMMON_HOME
>>>> 
>>>> export
>>> HADOOP_CLASSPATH=$HADOOP_HOME:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/*:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/*:/home/hadoop/rgrandl/hive:/home/hadoop/rgrandl/conf
>>>> 
>>>> export
>>> PATH=$PATH:$HADOOP_BIN_PATH:$HADOOP_SBIN:$YARN_CONF_DIR:$HADOOP_YARN_HOME:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME:$JAVA_HOME/bin/:/home/hadoop/rgrandl/hive/bin
>>>> 
>>>> 
>>>> Everything seems to be correct, but I cannot understand this error. Is
>>> something I never encountered before.
>>>> 
>>>> 
>>>> Do you have any hints on it ?
>>>> 
>>>> Thanks,
>>>> robert
>>> 


Re: crossPlatformifyMREnv exception

Posted by Grandl Robert <rg...@yahoo.com.INVALID>.
I tried copying hadoop-mapreduce-client-2.4* from hadoop-2.4 to tez-0.4, also copied them to HDFS under /apps/tez/lib, and ran Hive. But even if I set hive.execution.engine=tez or mr, jobs always run as Tez only :). (mapreduce.framework.name=yarn)


I had tried hive-0.13 with tez-0.5 before and someone said they are not compatible. Now I am running hive-0.13 + tez-0.4 + hadoop-2.4, but I am still not able to run Hive queries correctly over either Tez or MapReduce. 


I have already spent way too much time on this and it is still not running properly. Do you have any other suggestions on what to try?
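
For reference, a quick way to double-check which MR client jars actually end up visible, using the paths from this thread (a sketch, not specific to any particular layout):

  # jars bundled locally with Tez and the copies staged on HDFS
  ls /home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/ | grep mapreduce
  hadoop fs -ls /apps/tez/lib | grep mapreduce

  # expanded client classpath: check where the Tez lib entries land relative to the Hadoop mapreduce ones
  hadoop classpath | tr ':' '\n'

If an older (pre-2.4.0) hadoop-mapreduce-client jar is resolved ahead of the 2.4.0 one, that would explain the crossPlatformifyMREnv NoSuchMethodError.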




On Thursday, June 19, 2014 4:30 PM, Hitesh Shah <hi...@apache.org> wrote:
 


Hi Robert, 

If you look at the tez jars, you will probably see a couple of mapreduce jars that Tez depends on under TEZ_BASE_DIR/lib/. Are these jars inconsistent with respect to the version of hadoop/mapreduce on the cluster? 

— Hitesh


On Jun 19, 2014, at 2:59 PM, Grandl Robert <rg...@yahoo.com.INVALID> wrote:

> Hmm, 
> 
> 
> Things are really weird. So If I don't add TEZ jars/conf in hadoop_classpath simply running Mapreduce jobs works. If I add TEZ jars/conf to hadoop_classpath and try to run mapreduce jobs, it fails with the exception:
> 14/06/19 14:46:40 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403214320220_0001
> java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
>     at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
>    
> 
> 
> If I add TEZ jars/conf to hadoop classpath BUT also change framework name from yarn to yarn-tez and run tez application(like orderedwordcount) it succeeds. Now I can also run hadoop mapreduce apps, but still as tez. I am completely confused why I cannot run just mapreduce jobs(with framework name = yarn) when adding TEZ in classpath. Is this a bug ?
> 
> robert
> 
> 
> 
> 
> On Thursday, June 19, 2014 2:34 PM, Jian He <jh...@hortonworks.com> wrote:
> 
> 
> 
> The problem looks like it's referencing an old jar which doesn't have this
> method. are you running on single node cluster? can you check you cluster
> setting about jar dependency. If it's easy for you, just refresh the
> environment and re-deploy the single node cluster.
> 
> Jian
> 
> 
> On Thu, Jun 19, 2014 at 9:15 AM, Grandl Robert <rg...@yahoo.com.invalid>
> wrote:
> 
>> Any suggestion related to this ?
>> 
>> Thanks,
>> robert
>> 
>> 
>> 
>> On Wednesday, June 18, 2014 11:56 PM, Grandl Robert
>> <rg...@yahoo.com.INVALID> wrote:
>> 
>> 
>> 
>> I am using 2.4 for client as well. Actually I took a tar gz from
>> http://apache.petsads.us/hadoop/common/hadoop-2.4.0/,
>> 
>> and I am trying to run. I tried even one node. I am lack of ideas why this
>> happens.
>> 
>> 
>> 
>> 
>> 
>> On Wednesday, June 18, 2014 11:52 PM, Jian He <jh...@hortonworks.com> wrote:
>> 
>> 
>> 
>> This new method crossPlatformifyMREnv  is newly added in 2.4.0 release,
>> which version of MR client are you using?
>> can you make sure you have the same version of client jars
>> 
>> Jian
>> 
>> 
>> 
>> On Wed, Jun 18, 2014 at 10:37 PM, Grandl Robert <rg...@yahoo.com.invalid>
>> wrote:
>> 
>> Hi guys,
>>> 
>>> I don't know what I did but my hadoop yarn went crazy. I am not able to
>> submit any job, as it throws the following exception.
>>> 
>>> 4/06/18 22:25:19 INFO mapreduce.JobSubmitter: number of splits:1
>>> 14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Submitting tokens for job:
>> job_1403155404621_0001
>>> 14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Cleaning up the staging
>> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403155404621_0001
>>> java.lang.NoSuchMethodError:
>> org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
>>>     at
>> org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
>>>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
>>>     at
>> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>>>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>     at java.lang.reflect.Method.invoke(Method.java:601)
>>>     at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>>> 
>>> 
>>> I configured all class path and variables as such:
>>> export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.4.0
>>> export HADOOP_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_HDFS_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_YARN_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_MAPRED_HOME/lib/native/
>>> export HADOOP_CONF_DIR=/home/hadoop/rgrandl/conf/
>>> export YARN_CONF_DIR=/home/hadoop/rgrandl/conf/
>>> export HADOOP_BIN_PATH=$HADOOP_MAPRED_HOME/bin/
>>> export HADOOP_SBIN=$HADOOP_MAPRED_HOME/sbin/
>>> export HADOOP_LOGS=$HADOOP_HOME/logs
>>> export HADOOP_LOG_DIR=$HADOOP_HOME/logs
>>> export YARN_LOG_DIR=$HADOOP_HOME/logs
>>> 
>>> export JAVA_HOME=/home/hadoop/rgrandl/java/
>>> export HADOOP_USER_CLASSPATH_FIRST=1
>>> export YARN_HOME=/home/hadoop/hadoop-2.4.0
>>> export TEZ_CONF_DIR=/home/hadoop/rgrandl/conf
>>> export TEZ_JARS=/home/hadoop/rgrandl/tez/tez-0.4.0-incubating
>>> 
>>> export HADOOP_PREFIX=$HADOOP_COMMON_HOME
>>> 
>>> export
>> HADOOP_CLASSPATH=$HADOOP_HOME:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/*:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/*:/home/hadoop/rgrandl/hive:/home/hadoop/rgrandl/conf
>>> 
>>> export
>> PATH=$PATH:$HADOOP_BIN_PATH:$HADOOP_SBIN:$YARN_CONF_DIR:$HADOOP_YARN_HOME:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME:$JAVA_HOME/bin/:/home/hadoop/rgrandl/hive/bin
>>> 
>>> 
>>> Everything seems to be correct, but I cannot understand this error. Is
>> something I never encountered before.
>>> 
>>> 
>>> Do you have any hints on it ?
>>> 
>>> Thanks,
>>> robert
>> 


Re: crossPlatformifyMREnv exception

Posted by Hitesh Shah <hi...@apache.org>.
Hi Robert, 

If you look at the tez jars, you will probably see a couple of mapreduce jars that Tez depends on under TEZ_BASE_DIR/lib/. Are these jars inconsistent with respect to the version of hadoop/mapreduce on the cluster? 

— Hitesh
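
For reference, a minimal sketch of that comparison, assuming the stock Hadoop 2.4.0 tarball layout and the TEZ_JARS/HADOOP_HOME variables used earlier in this thread:

  # MR jars bundled under the Tez install
  ls $TEZ_JARS/lib/hadoop-mapreduce-client-*.jar

  # MR jars shipped with the cluster's Hadoop, plus the cluster version itself
  ls $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-*.jar
  hadoop version

The version embedded in the jar file names on both sides should match; mixing releases on HADOOP_CLASSPATH is exactly the kind of inconsistency being asked about here.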

On Jun 19, 2014, at 2:59 PM, Grandl Robert <rg...@yahoo.com.INVALID> wrote:

> Hmm, 
> 
> 
> Things are really weird. So If I don't add TEZ jars/conf in hadoop_classpath simply running Mapreduce jobs works. If I add TEZ jars/conf to hadoop_classpath and try to run mapreduce jobs, it fails with the exception:
> 14/06/19 14:46:40 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403214320220_0001
> java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
>     at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
>     
> 
> 
> If I add TEZ jars/conf to hadoop classpath BUT also change framework name from yarn to yarn-tez and run tez application(like orderedwordcount) it succeeds. Now I can also run hadoop mapreduce apps, but still as tez. I am completely confused why I cannot run just mapreduce jobs(with framework name = yarn) when adding TEZ in classpath. Is this a bug ?
> 
> robert
> 
> 
> 
> 
> On Thursday, June 19, 2014 2:34 PM, Jian He <jh...@hortonworks.com> wrote:
> 
> 
> 
> The problem looks like it's referencing an old jar which doesn't have this
> method. are you running on single node cluster? can you check you cluster
> setting about jar dependency. If it's easy for you, just refresh the
> environment and re-deploy the single node cluster.
> 
> Jian
> 
> 
> On Thu, Jun 19, 2014 at 9:15 AM, Grandl Robert <rg...@yahoo.com.invalid>
> wrote:
> 
>> Any suggestion related to this ?
>> 
>> Thanks,
>> robert
>> 
>> 
>> 
>> On Wednesday, June 18, 2014 11:56 PM, Grandl Robert
>> <rg...@yahoo.com.INVALID> wrote:
>> 
>> 
>> 
>> I am using 2.4 for client as well. Actually I took a tar gz from
>> http://apache.petsads.us/hadoop/common/hadoop-2.4.0/,
>> 
>> and I am trying to run. I tried even one node. I am lack of ideas why this
>> happens.
>> 
>> 
>> 
>> 
>> 
>> On Wednesday, June 18, 2014 11:52 PM, Jian He <jh...@hortonworks.com> wrote:
>> 
>> 
>> 
>> This new method crossPlatformifyMREnv  is newly added in 2.4.0 release,
>> which version of MR client are you using?
>> can you make sure you have the same version of client jars
>> 
>> Jian
>> 
>> 
>> 
>> On Wed, Jun 18, 2014 at 10:37 PM, Grandl Robert <rg...@yahoo.com.invalid>
>> wrote:
>> 
>> Hi guys,
>>> 
>>> I don't know what I did but my hadoop yarn went crazy. I am not able to
>> submit any job, as it throws the following exception.
>>> 
>>> 4/06/18 22:25:19 INFO mapreduce.JobSubmitter: number of splits:1
>>> 14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Submitting tokens for job:
>> job_1403155404621_0001
>>> 14/06/18 22:25:19 INFO mapreduce.JobSubmitter: Cleaning up the staging
>> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1403155404621_0001
>>> java.lang.NoSuchMethodError:
>> org.apache.hadoop.mapreduce.v2.util.MRApps.crossPlatformifyMREnv(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/yarn/api/ApplicationConstants$Environment;)Ljava/lang/String;
>>>     at
>> org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:390)
>>>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:284)
>>>     at
>> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>>>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>>>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>>     at
>> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>     at java.lang.reflect.Method.invoke(Method.java:601)
>>>     at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>>> 
>>> 
>>> I configured all class path and variables as such:
>>> export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.4.0
>>> export HADOOP_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_HDFS_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_YARN_HOME=$HADOOP_COMMON_HOME
>>> export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_MAPRED_HOME/lib/native/
>>> export HADOOP_CONF_DIR=/home/hadoop/rgrandl/conf/
>>> export YARN_CONF_DIR=/home/hadoop/rgrandl/conf/
>>> export HADOOP_BIN_PATH=$HADOOP_MAPRED_HOME/bin/
>>> export HADOOP_SBIN=$HADOOP_MAPRED_HOME/sbin/
>>> export HADOOP_LOGS=$HADOOP_HOME/logs
>>> export HADOOP_LOG_DIR=$HADOOP_HOME/logs
>>> export YARN_LOG_DIR=$HADOOP_HOME/logs
>>> 
>>> export JAVA_HOME=/home/hadoop/rgrandl/java/
>>> export HADOOP_USER_CLASSPATH_FIRST=1
>>> export YARN_HOME=/home/hadoop/hadoop-2.4.0
>>> export TEZ_CONF_DIR=/home/hadoop/rgrandl/conf
>>> export TEZ_JARS=/home/hadoop/rgrandl/tez/tez-0.4.0-incubating
>>> 
>>> export HADOOP_PREFIX=$HADOOP_COMMON_HOME
>>> 
>>> export
>> HADOOP_CLASSPATH=$HADOOP_HOME:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/*:/home/hadoop/rgrandl/tez/tez-0.4.0-incubating/lib/*:/home/hadoop/rgrandl/hive:/home/hadoop/rgrandl/conf
>>> 
>>> export
>> PATH=$PATH:$HADOOP_BIN_PATH:$HADOOP_SBIN:$YARN_CONF_DIR:$HADOOP_YARN_HOME:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME:$JAVA_HOME/bin/:/home/hadoop/rgrandl/hive/bin
>>> 
>>> 
>>> Everything seems to be correct, but I cannot understand this error. Is
>> something I never encountered before.
>>> 
>>> 
>>> Do you have any hints on it ?
>>> 
>>> Thanks,
>>> robert
>> 

