Posted to mapreduce-user@hadoop.apache.org by Margusja <ma...@roo.ee> on 2014/03/03 17:01:04 UTC

class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Hi

I don't even know what information to provide, but my container log is:

2014-03-03 17:36:05,311 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.VerifyError: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.getDeclaredConstructors0(Native Method)
	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
	at java.lang.Class.getConstructor0(Class.java:2803)
	at java.lang.Class.getConstructor(Class.java:1718)
	at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
	at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
	at org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
	at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
	at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)


Where to start digging?

-- 
Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Stanley Shi <ss...@gopivotal.com>.
Why do you have two Hadoop versions in the same pom file? With both on the
classpath, you cannot know which Hadoop classes you are actually using.

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.3.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.2.1</version>
    </dependency>
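
If it helps, one quick way to see where the mixed versions come from is
Maven's dependency tree (the include patterns below are only an example;
adjust them to your project):

    mvn dependency:tree -Dverbose -Dincludes=org.apache.hadoop,com.google.protobuf

hadoop-client 2.3.0 should already pull in everything a client needs to
submit MapReduce jobs, so dropping the hadoop-core 1.2.1 dependency avoids
mixing Hadoop 1.x and 2.x classes on the classpath.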



Regards,
Stanley Shi



On Tue, Mar 4, 2014 at 1:15 AM, Margusja <ma...@roo.ee> wrote:

> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A bit more detail:
> I'm using an external Java client that submits the job.
> Some lines from my Maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> Lines from the external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>     at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/input/data666.noheader.data. BP-802201089-90.190.106.33-1393506052071 blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/input/data666.noheader.data is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/input/data666.noheader.data.info. BP-802201089-90.190.106.33-1393506052071 blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/input/data666.noheader.data.info is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.jar. BP-802201089-90.190.106.33-1393506052071 blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.split. BP-802201089-90.190.106.33-1393506052071 blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.split is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. BP-802201089-90.190.106.33-1393506052071 blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.xml. BP-802201089-90.190.106.33-1393506052071 blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from nodemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser       OPERATION=Container Finished - Failed   TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE    APPID=application_1393848686226_0019    CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1393848686226_0019_02_000001 from application application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds
> ...
>
>
>
> Tervitades, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>
>> Can you tell us the Hadoop release you're using?
>>
>> It seems there is an inconsistency in the protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>
>>     [original message snipped; quoted in full at the top of this thread]
>

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Thank you for the reply, I got it working.

[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion git@github.com:hortonworks/hadoop.git -r b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar
[hduser@vm38 ~]$

The main problem, I think, was that I had the yarn binary in two places and
I was using the wrong one, which didn't pick up my yarn-site.xml.
Every time I looked into .staging/job.../job.xml, the values came from
<source>yarn-default.xml</source> even though I had set them in yarn-site.xml.

Typical mess up :)
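
For anyone else hitting this, roughly the checks I used to find the stray
binary and to confirm the protobuf version on the classpath (the /usr/lib
paths are from my install; adjust them to yours):

    # list every yarn launcher on the PATH; more than one entry means a likely mix-up
    which -a yarn

    # run the launcher you actually want and see what it reports
    /usr/lib/hadoop-yarn/bin/yarn version

    # the protobuf jar on the classpath should be 2.5.0, matching the
    # "Compiled with protoc 2.5.0" line above
    /usr/lib/hadoop-yarn/bin/yarn classpath | tr ':' '\n' | grep -i protobuf
    # if the classpath entries are wildcards, list the lib directories instead:
    ls /usr/lib/hadoop*/lib/protobuf-java-*.jar

The last check is the same thing Rohith suggests below for the MRAppMaster
classpath.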

Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 04/03/14 05:14, Rohith Sharma K S wrote:
> Hi
>
>        The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop is compiled with protoc 2.5.0, but a lower version of protobuf is present on the classpath.
>
> 1. Check the MRAppMaster classpath to see which version of protobuf is present. It is expected to be 2.5.0.
>     
>
> Thanks & Regards
> Rohith Sharma K S
>
>
>
> -----Original Message-----
> From: Margusja [mailto:margus@roo.ee]
> Sent: 03 March 2014 22:45
> To: user@hadoop.apache.org
> Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields
>
> [message quoted in full above; snipped]


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Thank you for replay, I got it work.

[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion git@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
 From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using 
/usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar
[hduser@vm38 ~]$

The main problem I think was I had yarn binary in two places and I used 
wrong one that didn't use my yarn-site.xml.
Every time I look into .staging/job.../job.xml there were values from 
<source>yarn-default.xml</source> even I set them in yarn-site.xml.

Typical mess up :)

Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 04/03/14 05:14, Rohith Sharma K S wrote:
> Hi
>
>        The reason for " org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is hadoop is compiled with protoc-2.5.0 version, but in the classpath lower version of protobuf is present.
>
> 1. Check MRAppMaster classpath, which version of protobuf is in classpath. Expected to have 2.5.0 version.
>     
>
> Thanks & Regards
> Rohith Sharma K S
>
>
>
> -----Original Message-----
> From: Margusja [mailto:margus@roo.ee]
> Sent: 03 March 2014 22:45
> To: user@hadoop.apache.org
> Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields
>
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I'll try to use external java client who submits job.
> some lines from maven pom.xml file:
>       <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-client</artifactId>
>         <version>2.3.0</version>
>       </dependency>
>       <dependency>
>           <groupId>org.apache.hadoop</groupId>
>           <artifactId>hadoop-core</artifactId>
>           <version>1.2.1</version>
>       </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job:
> job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job:
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to:
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>       at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>       at org.apache.hadoop.util.Shell.run(Shell.java:379)
>       at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>       at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>       at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>       at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data.info.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data.info is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.jar.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.split.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.xml.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from namemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exception from container-launch with container ID:
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>           at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>           at org.apache.hadoop.util.Shell.run(Shell.java:379)
>           at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>           at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>           at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>           at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>           at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>           at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>           at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>           at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path :
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=hduser       OPERATION=Container Finished - Failed
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container
> failed with state: EXITED_WITH_FAILURE
> APPID=application_1393848686226_0019
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Removing container_1393848686226_0019_02_000001 from application
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices:
> Got event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 }
> state: C_COMPLETE diagnostics: "Exception from container-launch:
> \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path :
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices:
> Got event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
> Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds ...
>
>
> Regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee
>> <ma...@roo.ee>> wrote:
>>
>>      Hi
>>
>>      I even don't know what information to provide but my container log is:
>>
>>      2014-03-03 17:36:05,311 FATAL [main]
>>      org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>      MRAppMaster
>>      java.lang.VerifyError: class
>>      org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>      overrides final method
>>      getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>              at java.lang.ClassLoader.defineClass1(Native Method)
>>              at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>              at
>>      java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>              at
>>      java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>              at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>              at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>              at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>              at java.security.AccessController.doPrivileged(Native Method)
>>              at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>              at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>              at
>>      sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>              at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>              at java.lang.Class.getDeclaredConstructors0(Native Method)
>>              at
>>      java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>              at java.lang.Class.getConstructor0(Class.java:2803)
>>              at java.lang.Class.getConstructor(Class.java:1718)
>>              at
>>      org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>              at
>>      org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>              at
>>      org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>              at
>>      org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>              at
>>      org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>              at
>>      org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>      Where to start digging?
>>
>>      --
>>      Tervitades, Margus (Margusja) Roo
>>      +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>      http://margus.roo.ee
>>      http://ee.linkedin.com/in/margusroo
>>      skype: margusja
>>      ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>      "(serialNumber=37303140314)"
>>      -----BEGIN PUBLIC KEY-----
>>      MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>      5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>      RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>      BjM8j36yJvoBVsfOHQIDAQAB
>>      -----END PUBLIC KEY-----
>>
>>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Thank you for the reply, I got it to work.

[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion git@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using 
/usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar
[hduser@vm38 ~]$

The main problem, I think, was that I had the yarn binary in two places and 
was using the wrong one, which didn't read my yarn-site.xml.
Every time I looked into .staging/job.../job.xml, the values came from 
<source>yarn-default.xml</source> even though I had set them in yarn-site.xml.
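
A quick way to see which binary and which configuration actually get picked 
up (assuming both copies are still on disk) is something like:

which yarn
yarn classpath | tr ':' '\n' | grep conf

The conf directory that shows up in that classpath is the one whose 
yarn-site.xml is really being read.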

Typical mess up :)
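
Setting the conf dir explicitly would probably have avoided this, e.g. 
(assuming the usual HDP layout, with the real configs under /etc/hadoop/conf):

export HADOOP_CONF_DIR=/etc/hadoop/conf
/usr/lib/hadoop-yarn/bin/yarn jar <job.jar> ...

That way the right yarn-site.xml is used no matter which yarn binary comes 
first on the PATH.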

Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 04/03/14 05:14, Rohith Sharma K S wrote:
> Hi
>
>        The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop is compiled with protoc 2.5.0, but a lower version of protobuf is present in the classpath.
>
> 1. Check the MRAppMaster classpath to see which version of protobuf is present; it should be 2.5.0.
>     
>
> Thanks & Regards
> Rohith Sharma K S
>
>
>
> -----Original Message-----
> From: Margusja [mailto:margus@roo.ee]
> Sent: 03 March 2014 22:45
> To: user@hadoop.apache.org
> Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields
>
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little more detail:
> I'm using an external Java client that submits the job.
> Some lines from the maven pom.xml file:
>       <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-client</artifactId>
>         <version>2.3.0</version>
>       </dependency>
>       <dependency>
>           <groupId>org.apache.hadoop</groupId>
>           <artifactId>hadoop-core</artifactId>
>           <version>1.2.1</version>
>       </dependency>
>
> Lines from the external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job:
> job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job:
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to:
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>       at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>       at org.apache.hadoop.util.Shell.run(Shell.java:379)
>       at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>       at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>       at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>       at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data.info.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data.info is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.jar.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.split.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.xml.
> BP-802201089-90.190.106.33-1393506052071
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from the nodemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exception from container-launch with container ID:
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>           at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>           at org.apache.hadoop.util.Shell.run(Shell.java:379)
>           at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>           at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>           at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>           at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>           at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>           at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>           at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>           at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path :
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=hduser       OPERATION=Container Finished - Failed
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container
> failed with state: EXITED_WITH_FAILURE
> APPID=application_1393848686226_0019
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Removing container_1393848686226_0019_02_000001 from application
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices:
> Got event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 }
> state: C_COMPLETE diagnostics: "Exception from container-launch:
> \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path :
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices:
> Got event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
> Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
> Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds ...
>
>
> Regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee
>> <ma...@roo.ee>> wrote:
>>
>>      Hi
>>
>>      I even don't know what information to provide but my container log is:
>>
>>      2014-03-03 17:36:05,311 FATAL [main]
>>      org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>      MRAppMaster
>>      java.lang.VerifyError: class
>>      org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>      overrides final method
>>      getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>              at java.lang.ClassLoader.defineClass1(Native Method)
>>              at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>              at
>>      java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>              at
>>      java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>              at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>              at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>              at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>              at java.security.AccessController.doPrivileged(Native Method)
>>              at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>              at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>              at
>>      sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>              at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>              at java.lang.Class.getDeclaredConstructors0(Native Method)
>>              at
>>      java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>              at java.lang.Class.getConstructor0(Class.java:2803)
>>              at java.lang.Class.getConstructor(Class.java:1718)
>>              at
>>      org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>              at
>>      org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>              at
>>      org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>              at
>>      org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>              at
>>      org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>              at
>>      org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>      Where to start digging?
>>
>>      --
>>      Tervitades, Margus (Margusja) Roo
>>      +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>      http://margus.roo.ee
>>      http://ee.linkedin.com/in/margusroo
>>      skype: margusja
>>      ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>      "(serialNumber=37303140314)"
>>      -----BEGIN PUBLIC KEY-----
>>      MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>      5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>      RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>      BjM8j36yJvoBVsfOHQIDAQAB
>>      -----END PUBLIC KEY-----
>>
>>


RE: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi

      The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop is compiled with protoc 2.5.0, but a lower version of protobuf is present in the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf is present; it should be 2.5.0.
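
For example, one way to list the protobuf jars a daemon's classpath would 
pick up (a rough check, assuming the yarn command on the node matches the 
one that launched the job) is:

yarn classpath | tr ':' '\n' | grep protobuf

If anything other than protobuf-java-2.5.0.jar appears, that older jar is 
the likely source of the VerifyError. On the client side, 
mvn dependency:tree -Dincludes=com.google.protobuf shows where an old 
protobuf-java, if any, is coming from.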
   

Thanks & Regards
Rohith Sharma K S



-----Original Message-----
From: Margusja [mailto:margus@roo.ee] 
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Hi

2.2.0 and 2.3.0 gave me the same container log.

A little more detail:
I'm using an external Java client that submits the job.
Some lines from the maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>

Lines from the external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from the nodemanager log:
...
2014-03-03 19:13:19,473 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser       OPERATION=Container Finished - Failed   TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE   APPID=application_1393848686226_0019    CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1393848686226_0019_02_000001 from application application_1393848686226_0019
2014-03-03 19:13:19,499 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds ...
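
The NodeManager log only records the generic exit code 1 here; the actual error is in the container's own stderr. If log aggregation is enabled (yarn.log-aggregation-enable=true), the aggregated logs for the failed attempt should be retrievable after the application finishes with something like:

yarn logs -applicationId application_1393848686226_0019

Otherwise the per-container stdout/stderr files remain on the node that ran the attempt, under the directories configured in yarn.nodemanager.log-dirs.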


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
> <ma...@roo.ee>> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>
>     Where to start digging?
>
>     -- 
>     Tervitades, Margus (Margusja) Roo
>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>     "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Okay, sorry for the mess.

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean 
install -Dhadoop2.version=2.2.0 - did the trick
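
For anyone else digging through this thread: the original VerifyError ("overrides final method getUnknownFields") is the usual symptom of a protobuf mismatch - Hadoop 2.2+ is generated with protoc 2.5.0, so an older protobuf-java 2.4.x jar anywhere on the classpath produces exactly this error - and the later IncompatibleClassChangeError ("Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected") is the same story with Hadoop itself: code compiled against Hadoop 1, where TaskAttemptContext was a class, running on Hadoop 2, where it became an interface. One way to check what the build actually resolved is to filter the dependency tree, for example:

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn dependency:tree -Dhadoop2.version=2.2.0 -Dincludes=org.apache.hadoop,com.google.protobuf

All org.apache.hadoop artifacts should come out at 2.2.0 and protobuf-java at 2.5.0; a stray hadoop-core 1.x or protobuf-java 2.4.x in that output points back at the same mismatch.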

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 17/03/14 12:16, Margusja wrote:
> Hi, thanks for your reply.
>
> What I did:
> [speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.profile=hadoop2 - is hadoop2 the right string? I found it in the pom's profile section, so I used it.
>
> ...
> it compiled:
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Mahout Build Tools ................................ SUCCESS [  
> 1.751 s]
> [INFO] Apache Mahout ..................................... SUCCESS [  
> 0.484 s]
> [INFO] Mahout Math ....................................... SUCCESS [ 
> 12.946 s]
> [INFO] Mahout Core ....................................... SUCCESS [ 
> 14.192 s]
> [INFO] Mahout Integration ................................ SUCCESS [  
> 1.857 s]
> [INFO] Mahout Examples ................................... SUCCESS [ 
> 10.762 s]
> [INFO] Mahout Release Package ............................ SUCCESS [  
> 0.012 s]
> [INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 
> 25.431 s]
> [INFO] Mahout Spark bindings ............................. SUCCESS [ 
> 40.376 s]
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] Total time: 01:48 min
> [INFO] Finished at: 2014-03-17T12:06:31+02:00
> [INFO] Final Memory: 79M/2947M
> [INFO] 
> ------------------------------------------------------------------------
>
> How can I check whether the hadoop2 libs are actually in use?
>
> but unfortunately again:
> [speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
> /user/speech/demo -o demo-seqfiles
> MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
> Running on hadoop, using /usr/bin/hadoop and 
> HADOOP_CONF_DIR=/etc/hadoop/conf
> MAHOUT-JOB: 
> /home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
> 14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
> {--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
> --fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
> --input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
> --output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
> 14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 14/03/17 12:07:22 INFO Configuration.deprecation: 
> mapred.compress.map.output is deprecated. Instead, use 
> mapreduce.map.output.compress
> 14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
> deprecated. Instead, use dfs.metrics.session-id
> 14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
> processName=JobTracker, sessionId=
> 14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
> process : 10
> 14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
> node allocation with : CompletedNodes: 4, size left: 29775
> 14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
> 14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
> deprecated. Instead, use mapreduce.job.user.name
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.compress is deprecated. Instead, use 
> mapreduce.output.fileoutputformat.compress
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
> deprecated. Instead, use mapreduce.job.jar
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks 
> is deprecated. Instead, use mapreduce.job.reduces
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.value.class is deprecated. Instead, use 
> mapreduce.job.output.value.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.mapoutput.value.class is deprecated. Instead, use 
> mapreduce.map.output.value.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class 
> is deprecated. Instead, use mapreduce.job.map.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
> deprecated. Instead, use mapreduce.job.name
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapreduce.inputformat.class is deprecated. Instead, use 
> mapreduce.job.inputformat.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.max.split.size is deprecated. Instead, use 
> mapreduce.input.fileinputformat.split.maxsize
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapreduce.outputformat.class is deprecated. Instead, use 
> mapreduce.job.outputformat.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
> deprecated. Instead, use mapreduce.job.maps
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.key.class is deprecated. Instead, use 
> mapreduce.job.output.key.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.mapoutput.key.class is deprecated. Instead, use 
> mapreduce.map.output.key.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir 
> is deprecated. Instead, use mapreduce.job.working.dir
> 14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
> job: job_local1589554356_0001
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
> http://localhost:8080/
> 14/03/17 12:07:23 INFO mapreduce.Job: Running job: 
> job_local1589554356_0001
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
> config null
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
> attempt_local1589554356_0001_m_000000_0
> 14/03/17 12:07:23 INFO mapred.Task:  Using 
> ResourceCalculatorProcessTree : [ ]
> 14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
> Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
> 14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
> java.lang.Exception: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>     at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
> Caused by: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
>     at 
> org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
>     at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
>     at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:701)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
>     ... 12 more
> Caused by: java.lang.IncompatibleClassChangeError: Found interface 
> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>     at 
> org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
>     ... 17 more
> 14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
> running in uber mode : false
> 14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
> 14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
> failed with state FAILED due to: NA
> 14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
> 14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
> (Minutes: 0.055716666666666664)
>
> Obviously I am doing something wrong :)
>
> Best regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:15, Margusja wrote:
>> Hi
>>
>> 2.2.0 and 2.3.0 gave me the same container log.
>>
>> A bit more detail:
>> I'll try to use an external Java client that submits the job.
>> some lines from maven pom.xml file:
>>     <dependency>
>>       <groupId>org.apache.hadoop</groupId>
>>       <artifactId>hadoop-client</artifactId>
>>       <version>2.3.0</version>
>>     </dependency>
>>     <dependency>
>>         <groupId>org.apache.hadoop</groupId>
>>         <artifactId>hadoop-core</artifactId>
>>         <version>1.2.1</version>
>>     </dependency>
>>
>> lines from external client:
>> ...
>> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
>> process : 1
>> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
>> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
>> job: job_1393848686226_0018
>> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
>> application_1393848686226_0018
>> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
>> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
>> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
>> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
>> running in uber mode : false
>> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
>> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 
>> failed with state FAILED due to: Application 
>> application_1393848686226_0018 failed 2 times due to AM Container for 
>> appattempt_1393848686226_0018_000002 exited with exitCode: 1 due to: 
>> Exception from container-launch:
>> org.apache.hadoop.util.Shell$ExitCodeException:
>>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>>     at 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:744)
>> ...
>>
>> Lines from namenode:
>> ...
>> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 
>> 900 Total time for transactions(ms): 69 Number of transactions 
>> batched in Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
>> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
>> blk_1073742050_1226 90.190.106.33:50010
>> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/input/data666.noheader.data. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
>> 90.190.106.33:50010 to delete [blk_1073742050_1226]
>> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/input/data666.noheader.data is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
>> blk_1073742051_1227 90.190.106.33:50010
>> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/input/data666.noheader.data.info. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/input/data666.noheader.data.info is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
>> 90.190.106.33:50010 to delete [blk_1073742051_1227]
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
>> replication from 3 to 10 for 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar
>> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
>> replication from 3 to 10 for 
>> /user/hduser/.staging/job_1393848686226_0019/job.split
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.split. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
>> closed by DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> ...
>>
>> Lines from namemanager log:
>> ...
>> 2014-03-03 19:13:19,473 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Exit code from container container_1393848686226_0019_02_000001 is : 1
>> 2014-03-03 19:13:19,474 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Exception from container-launch with container ID: 
>> container_1393848686226_0019_02_000001 and exit code: 1
>> org.apache.hadoop.util.Shell$ExitCodeException:
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>>         at 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>         at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>> 2014-03-03 19:13:19,474 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
>> 2014-03-03 19:13:19,474 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
>> Container exited with a non-zero exit code 1
>> 2014-03-03 19:13:19,475 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
>> Container container_1393848686226_0019_02_000001 transitioned from 
>> RUNNING to EXITED_WITH_FAILURE
>> 2014-03-03 19:13:19,475 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
>> Cleaning up container container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,496 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Deleting absolute path : 
>> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,498 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
>> USER=hduser       OPERATION=Container Finished - Failed 
>> TARGET=ContainerImpl    RESULT=FAILURE DESCRIPTION=Container failed 
>> with state: EXITED_WITH_FAILURE APPID=application_1393848686226_0019 
>> CONTAINERID=container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,498 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
>> Container container_1393848686226_0019_02_000001 transitioned from 
>> EXITED_WITH_FAILURE to DONE
>> 2014-03-03 19:13:19,498 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Removing container_1393848686226_0019_02_000001 from application 
>> application_1393848686226_0019
>> 2014-03-03 19:13:19,499 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event CONTAINER_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:20,160 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Sending out status for container: container_id { app_attempt_id { 
>> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
>> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
>> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
>> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
>> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
>> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
>> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
>> 2014-03-03 19:13:20,161 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Removed completed container container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,542 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Starting resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,543 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Stopping resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from RUNNING 
>> to APPLICATION_RESOURCES_CLEANINGUP
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Deleting absolute path : 
>> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event APPLICATION_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from 
>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
>> Scheduling Log Deletion for application: 
>> application_1393848686226_0019, with delay of 10800 seconds
>> ...
>>
>>
>> Tervitades, Margus (Margusja) Roo
>> +372 51 48 780
>> http://margus.roo.ee
>> http://ee.linkedin.com/in/margusroo
>> skype: margusja
>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>> -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>> BjM8j36yJvoBVsfOHQIDAQAB
>> -----END PUBLIC KEY-----
>>
>> On 03/03/14 19:05, Ted Yu wrote:
>>> Can you tell us the hadoop release you're using ?
>>>
>>> Seems there is inconsistency in protobuf library.
>>>
>>>
>>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
>>> <ma...@roo.ee>> wrote:
>>>
>>>     Hi
>>>
>>>     I even don't know what information to provide but my container 
>>> log is:
>>>
>>>     2014-03-03 17:36:05,311 FATAL [main]
>>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>>     MRAppMaster
>>>     java.lang.VerifyError: class
>>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>>     overrides final method
>>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>>             at
>>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>>             at
>>> java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>>             at 
>>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>             at java.security.AccessController.doPrivileged(Native 
>>> Method)
>>>             at 
>>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>>             at
>>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>>             at
>>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>>             at java.lang.Class.getConstructor(Class.java:1718)
>>>             at
>>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>>             at
>>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177) 
>>>
>>>             at
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343) 
>>>
>>>
>>>
>>>     Where to start digging?
>>>
>>>     --
>>>     Tervitades, Margus (Margusja) Roo
>>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>>     http://margus.roo.ee
>>>     http://ee.linkedin.com/in/margusroo
>>>     skype: margusja
>>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>>     "(serialNumber=37303140314)"
>>>     -----BEGIN PUBLIC KEY-----
>>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>>     BjM8j36yJvoBVsfOHQIDAQAB
>>>     -----END PUBLIC KEY-----
>>>
>>>
>>
>
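
One more note on the two Hadoop versions in the client pom quoted above: hadoop-client 2.3.0 and hadoop-core 1.2.1 pull mutually incompatible classes onto the same classpath, and which copy gets loaded is effectively arbitrary. Assuming the external client only needs to submit jobs to the 2.x cluster, a minimal fix is to keep just the 2.x artifact and drop the 1.2.1 block entirely, roughly:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.3.0</version>
    </dependency>

and then rebuild and resubmit the client's job jar.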


>> Got event APPLICATION_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from 
>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
>> Scheduling Log Deletion for application: 
>> application_1393848686226_0019, with delay of 10800 seconds
>> ...
>>
>>
>> Tervitades, Margus (Margusja) Roo
>> +372 51 48 780
>> http://margus.roo.ee
>> http://ee.linkedin.com/in/margusroo
>> skype: margusja
>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>> -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>> BjM8j36yJvoBVsfOHQIDAQAB
>> -----END PUBLIC KEY-----
>>
>> On 03/03/14 19:05, Ted Yu wrote:
>>> Can you tell us the hadoop release you're using?
>>>
>>> Seems there is an inconsistency in the protobuf library.
>>>
>>>
>>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>>
>>>     Hi
>>>
>>>     I don't even know what information to provide, but my container
>>> log is:
>>>
>>>     2014-03-03 17:36:05,311 FATAL [main]
>>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>>     MRAppMaster
>>>     java.lang.VerifyError: class
>>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>>     overrides final method
>>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>>             at
>>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>>             at
>>> java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>>             at 
>>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>             at java.security.AccessController.doPrivileged(Native 
>>> Method)
>>>             at 
>>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>>             at
>>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>>             at
>>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>>             at java.lang.Class.getConstructor(Class.java:1718)
>>>             at
>>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>>             at
>>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177) 
>>>
>>>             at
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343) 
>>>
>>>
>>>
>>>     Where to start digging?
>>>
>>>     --
>>>     Tervitades, Margus (Margusja) Roo
>>>     +372 51 48 780
>>>     http://margus.roo.ee
>>>     http://ee.linkedin.com/in/margusroo
>>>     skype: margusja
>>>     ldapsearch -x -h ldap.sk.ee -b c=EE
>>>     "(serialNumber=37303140314)"
>>>     -----BEGIN PUBLIC KEY-----
>>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>>     BjM8j36yJvoBVsfOHQIDAQAB
>>>     -----END PUBLIC KEY-----
>>>
>>>
>>
>
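
For completeness: once YARN log aggregation is enabled, the full MRAppMaster
container log can be pulled in one step with the yarn CLI (a sketch, using
the application id from the nodemanager log above):

[hduser@vm38 ~]$ yarn logs -applicationId application_1393848686226_0019

That is usually quicker than digging the same stack trace out of the
nodemanager's local log directories.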


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Okay, sorry for the mess.

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean 
install -Dhadoop2.version=2.2.0 - did the trick
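
To answer my question below about checking whether the hadoop2 libs are in
use, two quick checks that should work here (just a sketch; the module and
jar paths are the ones from the build output below):

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn dependency:tree \
    -Dincludes=org.apache.hadoop -Dhadoop2.version=2.2.0
[speech@h14 mahout]$ jar tf examples/target/mahout-examples-1.0-SNAPSHOT-job.jar \
    | grep hadoop | head

If any hadoop 1.x artifact still shows up in either listing, the hadoop2
build did not take effect.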

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 17/03/14 12:16, Margusja wrote:
> Hi, thanks for your reply.
>
> What I did:
> [speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean 
> install -Dhadoop2.profile=hadoop2 - is hadoop2 the right string? I found 
> it in the pom's profile section, so I used it.
>
> ...
> it compiled:
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Mahout Build Tools ................................ SUCCESS [  
> 1.751 s]
> [INFO] Apache Mahout ..................................... SUCCESS [  
> 0.484 s]
> [INFO] Mahout Math ....................................... SUCCESS [ 
> 12.946 s]
> [INFO] Mahout Core ....................................... SUCCESS [ 
> 14.192 s]
> [INFO] Mahout Integration ................................ SUCCESS [  
> 1.857 s]
> [INFO] Mahout Examples ................................... SUCCESS [ 
> 10.762 s]
> [INFO] Mahout Release Package ............................ SUCCESS [  
> 0.012 s]
> [INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 
> 25.431 s]
> [INFO] Mahout Spark bindings ............................. SUCCESS [ 
> 40.376 s]
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] 
> ------------------------------------------------------------------------
> [INFO] Total time: 01:48 min
> [INFO] Finished at: 2014-03-17T12:06:31+02:00
> [INFO] Final Memory: 79M/2947M
> [INFO] 
> ------------------------------------------------------------------------
>
> How can I check whether the hadoop2 libs are in use?
>
> but unfortunately again:
> [speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
> /user/speech/demo -o demo-seqfiles
> MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
> Running on hadoop, using /usr/bin/hadoop and 
> HADOOP_CONF_DIR=/etc/hadoop/conf
> MAHOUT-JOB: 
> /home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
> 14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
> {--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
> --fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
> --input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
> --output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
> 14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 14/03/17 12:07:22 INFO Configuration.deprecation: 
> mapred.compress.map.output is deprecated. Instead, use 
> mapreduce.map.output.compress
> 14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
> deprecated. Instead, use dfs.metrics.session-id
> 14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
> processName=JobTracker, sessionId=
> 14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
> process : 10
> 14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
> node allocation with : CompletedNodes: 4, size left: 29775
> 14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
> 14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
> deprecated. Instead, use mapreduce.job.user.name
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.compress is deprecated. Instead, use 
> mapreduce.output.fileoutputformat.compress
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
> deprecated. Instead, use mapreduce.job.jar
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks 
> is deprecated. Instead, use mapreduce.job.reduces
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.value.class is deprecated. Instead, use 
> mapreduce.job.output.value.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.mapoutput.value.class is deprecated. Instead, use 
> mapreduce.map.output.value.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class 
> is deprecated. Instead, use mapreduce.job.map.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
> deprecated. Instead, use mapreduce.job.name
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapreduce.inputformat.class is deprecated. Instead, use 
> mapreduce.job.inputformat.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.max.split.size is deprecated. Instead, use 
> mapreduce.input.fileinputformat.split.maxsize
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapreduce.outputformat.class is deprecated. Instead, use 
> mapreduce.job.outputformat.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
> deprecated. Instead, use mapreduce.job.maps
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.output.key.class is deprecated. Instead, use 
> mapreduce.job.output.key.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: 
> mapred.mapoutput.key.class is deprecated. Instead, use 
> mapreduce.map.output.key.class
> 14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir 
> is deprecated. Instead, use mapreduce.job.working.dir
> 14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
> job: job_local1589554356_0001
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 14/03/17 12:07:23 WARN conf.Configuration: 
> file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
> attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
> http://localhost:8080/
> 14/03/17 12:07:23 INFO mapreduce.Job: Running job: 
> job_local1589554356_0001
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
> config null
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
> attempt_local1589554356_0001_m_000000_0
> 14/03/17 12:07:23 INFO mapred.Task:  Using 
> ResourceCalculatorProcessTree : [ ]
> 14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
> Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
> 14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
> 14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
> java.lang.Exception: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>     at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
> Caused by: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
>     at 
> org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
>     at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
>     at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:701)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
>     at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
>     ... 12 more
> Caused by: java.lang.IncompatibleClassChangeError: Found interface 
> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>     at 
> org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
>     ... 17 more
> 14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
> running in uber mode : false
> 14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
> 14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
> failed with state FAILED due to: NA
> 14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
> 14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
> (Minutes: 0.055716666666666664)
>
> Obviously I am doing something wrong :)
>
> Best regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:15, Margusja wrote:
>> Hi
>>
>> 2.2.0 and 2.3.0 gave me the same container log.
>>
>> A bit more detail:
>> I am trying to use an external Java client that submits the job.
>> Some lines from the Maven pom.xml file:
>>     <dependency>
>>       <groupId>org.apache.hadoop</groupId>
>>       <artifactId>hadoop-client</artifactId>
>>       <version>2.3.0</version>
>>     </dependency>
>>     <dependency>
>>         <groupId>org.apache.hadoop</groupId>
>>         <artifactId>hadoop-core</artifactId>
>>         <version>1.2.1</version>
>>     </dependency>
>>
>> Lines from the external client:
>> ...
>> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
>> process : 1
>> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
>> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
>> job: job_1393848686226_0018
>> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
>> application_1393848686226_0018
>> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
>> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
>> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
>> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
>> running in uber mode : false
>> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
>> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 
>> failed with state FAILED due to: Application 
>> application_1393848686226_0018 failed 2 times due to AM Container for 
>> appattempt_1393848686226_0018_000002 exited with exitCode: 1 due to: 
>> Exception from container-launch:
>> org.apache.hadoop.util.Shell$ExitCodeException:
>>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>>     at 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>>     at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:744)
>> ...
>>
>> Lines from namenode:
>> ...
>> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 
>> 900 Total time for transactions(ms): 69 Number of transactions 
>> batched in Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
>> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
>> blk_1073742050_1226 90.190.106.33:50010
>> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/input/data666.noheader.data. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
>> 90.190.106.33:50010 to delete [blk_1073742050_1226]
>> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/input/data666.noheader.data is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
>> blk_1073742051_1227 90.190.106.33:50010
>> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/input/data666.noheader.data.info. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/input/data666.noheader.data.info is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
>> 90.190.106.33:50010 to delete [blk_1073742051_1227]
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
>> replication from 3 to 10 for 
>> /user/hduser/.staging/job_1393848686226_0019/job.jar
>> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
>> replication from 3 to 10 for 
>> /user/hduser/.staging/job_1393848686226_0019/job.split
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.split. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
>> closed by DFSClient_NONMAPREDUCE_-915999412_15
>> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
>> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
>> BP-802201089-90.190.106.33-1393506052071 
>> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
>> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
>> blockMap updated: 90.190.106.33:50010 is added to 
>> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
>> primaryNodeIndex=-1, 
>> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
>> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
>> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
>> DFSClient_NONMAPREDUCE_-915999412_15
>> ...
>>
>> Lines from nodemanager log:
>> ...
>> 2014-03-03 19:13:19,473 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Exit code from container container_1393848686226_0019_02_000001 is : 1
>> 2014-03-03 19:13:19,474 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Exception from container-launch with container ID: 
>> container_1393848686226_0019_02_000001 and exit code: 1
>> org.apache.hadoop.util.Shell$ExitCodeException:
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>>         at 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>>         at 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>         at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>> 2014-03-03 19:13:19,474 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
>> 2014-03-03 19:13:19,474 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
>> Container exited with a non-zero exit code 1
>> 2014-03-03 19:13:19,475 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
>> Container container_1393848686226_0019_02_000001 transitioned from 
>> RUNNING to EXITED_WITH_FAILURE
>> 2014-03-03 19:13:19,475 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
>> Cleaning up container container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,496 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Deleting absolute path : 
>> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,498 WARN 
>> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
>> USER=hduser       OPERATION=Container Finished - Failed 
>> TARGET=ContainerImpl    RESULT=FAILURE DESCRIPTION=Container failed 
>> with state: EXITED_WITH_FAILURE APPID=application_1393848686226_0019 
>> CONTAINERID=container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:19,498 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
>> Container container_1393848686226_0019_02_000001 transitioned from 
>> EXITED_WITH_FAILURE to DONE
>> 2014-03-03 19:13:19,498 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Removing container_1393848686226_0019_02_000001 from application 
>> application_1393848686226_0019
>> 2014-03-03 19:13:19,499 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event CONTAINER_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:20,160 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Sending out status for container: container_id { app_attempt_id { 
>> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
>> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
>> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
>> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
>> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
>> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
>> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
>> 2014-03-03 19:13:20,161 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Removed completed container container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,542 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Starting resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,543 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Stopping resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from RUNNING 
>> to APPLICATION_RESOURCES_CLEANINGUP
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Deleting absolute path : 
>> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event APPLICATION_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from 
>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
>> Scheduling Log Deletion for application: 
>> application_1393848686226_0019, with delay of 10800 seconds
>> ...
>>
>>
>> Tervitades, Margus (Margusja) Roo
>> +372 51 48 780
>> http://margus.roo.ee
>> http://ee.linkedin.com/in/margusroo
>> skype: margusja
>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>> -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>> BjM8j36yJvoBVsfOHQIDAQAB
>> -----END PUBLIC KEY-----
>>
>> On 03/03/14 19:05, Ted Yu wrote:
>>> Can you tell us the hadoop release you're using?
>>>
>>> Seems there is an inconsistency in the protobuf library.
>>>
>>>
>>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>>
>>>     Hi
>>>
>>>     I don't even know what information to provide, but my container
>>> log is:
>>>
>>>     2014-03-03 17:36:05,311 FATAL [main]
>>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>>     MRAppMaster
>>>     java.lang.VerifyError: class
>>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>>     overrides final method
>>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>>             at
>>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>>             at
>>> java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>>             at 
>>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>             at java.security.AccessController.doPrivileged(Native 
>>> Method)
>>>             at 
>>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>>             at
>>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>>             at
>>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>>             at java.lang.Class.getConstructor(Class.java:1718)
>>>             at
>>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>>             at
>>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177) 
>>>
>>>             at
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343) 
>>>
>>>
>>>
>>>     Where to start digging?
>>>
>>>     --
>>>     Tervitades, Margus (Margusja) Roo
>>>     +372 51 48 780
>>>     http://margus.roo.ee
>>>     http://ee.linkedin.com/in/margusroo
>>>     skype: margusja
>>>     ldapsearch -x -h ldap.sk.ee -b c=EE
>>>     "(serialNumber=37303140314)"
>>>     -----BEGIN PUBLIC KEY-----
>>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>>     BjM8j36yJvoBVsfOHQIDAQAB
>>>     -----END PUBLIC KEY-----
>>>
>>>
>>
>
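
For anyone landing on this thread with the same VerifyError: the pom snippet
quoted above pairs hadoop-client 2.3.0 with hadoop-core 1.2.1, which is
exactly the kind of mix that produces the protobuf inconsistency mentioned
earlier; the IncompatibleClassChangeError above is the same story, since
TaskAttemptContext is a class in Hadoop 1 but an interface in Hadoop 2. A
minimal sketch of a consistent dependency block, assuming the cluster runs
Hadoop 2.x, pins a single version and leaves the 1.x hadoop-core artifact
out entirely:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.3.0</version>
    </dependency>

With a single Hadoop line on the classpath, the class loader has no stale
1.x copy of these classes left to pick up.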


>> Container container_1393848686226_0019_02_000001 transitioned from 
>> EXITED_WITH_FAILURE to DONE
>> 2014-03-03 19:13:19,498 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Removing container_1393848686226_0019_02_000001 from application 
>> application_1393848686226_0019
>> 2014-03-03 19:13:19,499 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event CONTAINER_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:20,160 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Sending out status for container: container_id { app_attempt_id { 
>> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
>> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
>> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
>> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
>> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
>> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
>> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
>> 2014-03-03 19:13:20,161 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
>> Removed completed container container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,542 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Starting resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:20,543 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
>> Stopping resource-monitoring for container_1393848686226_0019_02_000001
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from RUNNING 
>> to APPLICATION_RESOURCES_CLEANINGUP
>> 2014-03-03 19:13:21,164 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
>> Deleting absolute path : 
>> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
>> Got event APPLICATION_STOP for appId application_1393848686226_0019
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
>> Application application_1393848686226_0019 transitioned from 
>> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
>> 2014-03-03 19:13:21,165 INFO 
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
>> Scheduling Log Deletion for application: 
>> application_1393848686226_0019, with delay of 10800 seconds
>> ...
>>
>>
>> Tervitades, Margus (Margusja) Roo
>> +372 51 48 780
>> http://margus.roo.ee
>> http://ee.linkedin.com/in/margusroo
>> skype: margusja
>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>> -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>> BjM8j36yJvoBVsfOHQIDAQAB
>> -----END PUBLIC KEY-----
>>
>> On 03/03/14 19:05, Ted Yu wrote:
>>> Can you tell us the hadoop release you're using ?
>>>
>>> Seems there is inconsistency in protobuf library.
>>>
>>>
>>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
>>> <ma...@roo.ee>> wrote:
>>>
>>>     Hi
>>>
>>>     I even don't know what information to provide but my container 
>>> log is:
>>>
>>>     2014-03-03 17:36:05,311 FATAL [main]
>>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>>     MRAppMaster
>>>     java.lang.VerifyError: class
>>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>>     overrides final method
>>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>>             at
>>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>>             at
>>> java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>>             at 
>>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>             at java.security.AccessController.doPrivileged(Native 
>>> Method)
>>>             at 
>>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>>             at
>>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>>             at
>>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>>             at java.lang.Class.getConstructor(Class.java:1718)
>>>             at
>>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>>             at
>>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137) 
>>>
>>>             at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177) 
>>>
>>>             at
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343) 
>>>
>>>
>>>
>>>     Where to start digging?
>>>
>>>     --     Tervitades, Margus (Margusja) Roo
>>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>>     http://margus.roo.ee
>>>     http://ee.linkedin.com/in/margusroo
>>>     skype: margusja
>>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>>     "(serialNumber=37303140314)"
>>>     -----BEGIN PUBLIC KEY-----
>>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>>     BjM8j36yJvoBVsfOHQIDAQAB
>>>     -----END PUBLIC KEY-----
>>>
>>>
>>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi, thanks for your reply.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean 
install -Dhadoop2.profile=hadoop2 - is hadoop2 the right string? I found it 
in the pom's profiles section, so I used it.
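
One way to double-check the profile name, assuming the standard 
maven-help-plugin, would be to list the profiles the pom declares and the 
ones a given command line actually activates:

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn help:all-profiles
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn help:active-profiles -Dhadoop2.profile=hadoop2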

...
it compiled:
[INFO] 
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools ................................ SUCCESS [  
1.751 s]
[INFO] Apache Mahout ..................................... SUCCESS [  
0.484 s]
[INFO] Mahout Math ....................................... SUCCESS [ 
12.946 s]
[INFO] Mahout Core ....................................... SUCCESS [ 
14.192 s]
[INFO] Mahout Integration ................................ SUCCESS [  
1.857 s]
[INFO] Mahout Examples ................................... SUCCESS [ 
10.762 s]
[INFO] Mahout Release Package ............................ SUCCESS [  
0.012 s]
[INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 
25.431 s]
[INFO] Mahout Spark bindings ............................. SUCCESS [ 
40.376 s]
[INFO] 
------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] 
------------------------------------------------------------------------
[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] 
------------------------------------------------------------------------

How can I check whether the hadoop2 libs are in use?
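
One way, assuming the standard maven-dependency-plugin, would be to print 
the dependency tree and grep for the hadoop artifacts that actually get 
packaged:

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn dependency:tree -Dhadoop2.profile=hadoop2 | grep org.apache.hadoop

If a 1.x artifact (for example hadoop-core:1.2.1) still shows up next to 
the 2.x ones, the old classes are still on the classpath.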

but unfortunately again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
/user/speech/demo -o demo-seqfiles
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.output.compress 
is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.max.split.size 
is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.key.class is deprecated. Instead, use 
mapreduce.map.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_local1589554356_0001
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
http://localhost:8080/
14/03/17 12:07:23 INFO mapreduce.Job: Running job: job_local1589554356_0001
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
config null
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
attempt_local1589554356_0001_m_000000_0
14/03/17 12:07:23 INFO mapred.Task:  Using ResourceCalculatorProcessTree 
: [ ]
14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
     at 
org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
     at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
     at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
     at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
     ... 12 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
     at 
org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
     ... 17 more
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
running in uber mode : false
14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
failed with state FAILED due to: NA
14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
(Minutes: 0.055716666666666664)
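
If I read the IncompatibleClassChangeError above right, it means the job 
jar was compiled against the hadoop1 API, where 
org.apache.hadoop.mapreduce.TaskAttemptContext is a class, while the 
runtime classpath provides the hadoop2 version, where it is an interface. 
A quick way to see which flavour is on the classpath, assuming javap from 
the JDK and the hadoop classpath command:

[speech@h14 ~]$ javap -classpath "$(hadoop classpath)" org.apache.hadoop.mapreduce.TaskAttemptContext

On hadoop2 the declaration reads "public interface ...", on hadoop1 
"public class ...".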

Obviously I am doing something wrong :)

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:15, Margusja wrote:
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I'll try to use external java client who submits job.
> some lines from maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
> process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
> job: job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
> running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
> with state FAILED due to: Application application_1393848686226_0018 
> failed 2 times due to AM Container for 
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
> Total time for transactions(ms): 69 Number of transactions batched in 
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data.info. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data.info is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.split. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
> closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from namemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exception from container-launch with container ID: 
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
> USER=hduser       OPERATION=Container Finished - Failed 
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
> failed with state: EXITED_WITH_FAILURE 
> APPID=application_1393848686226_0019 
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Removing container_1393848686226_0019_02_000001 from application 
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Sending out status for container: container_id { app_attempt_id { 
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from RUNNING 
> to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
> Scheduling Log Deletion for application: 
> application_1393848686226_0019, with delay of 10800 seconds
> ...
>
>
> Tervitades, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
>> <ma...@roo.ee>> wrote:
>>
>>     Hi
>>
>>     I even don't know what information to provide but my container 
>> log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at 
>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native 
>> Method)
>>             at 
>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>             at
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --     Tervitades, Margus (Margusja) Roo
>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>     "(serialNumber=37303140314)"
>>     -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>     BjM8j36yJvoBVsfOHQIDAQAB
>>     -----END PUBLIC KEY-----
>>
>>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi thanks for your replay.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean 
install -Dhadoop2.profile=hadoop2 - is hadoop2 right string? I found it 
from pom profile section so I used it.

...
it compiled:
[INFO] 
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools ................................ SUCCESS [  
1.751 s]
[INFO] Apache Mahout ..................................... SUCCESS [  
0.484 s]
[INFO] Mahout Math ....................................... SUCCESS [ 
12.946 s]
[INFO] Mahout Core ....................................... SUCCESS [ 
14.192 s]
[INFO] Mahout Integration ................................ SUCCESS [  
1.857 s]
[INFO] Mahout Examples ................................... SUCCESS [ 
10.762 s]
[INFO] Mahout Release Package ............................ SUCCESS [  
0.012 s]
[INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 
25.431 s]
[INFO] Mahout Spark bindings ............................. SUCCESS [ 
40.376 s]
[INFO] 
------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] 
------------------------------------------------------------------------
[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] 
------------------------------------------------------------------------

How to check is there hadoop2 libs in use?

but unfortunately again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
/user/speech/demo -o demo-seqfiles
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.output.compress 
is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.max.split.size 
is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.key.class is deprecated. Instead, use 
mapreduce.map.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_local1589554356_0001
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
http://localhost:8080/
14/03/17 12:07:23 INFO mapreduce.Job: Running job: job_local1589554356_0001
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
config null
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
attempt_local1589554356_0001_m_000000_0
14/03/17 12:07:23 INFO mapred.Task:  Using ResourceCalculatorProcessTree 
: [ ]
14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
     at 
org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
     at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
     at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
     at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
     ... 12 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
     at 
org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
     ... 17 more
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
running in uber mode : false
14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
failed with state FAILED due to: NA
14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
(Minutes: 0.055716666666666664)

Obviously I am doing something wrong :)

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:15, Margusja wrote:
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I'll try to use external java client who submits job.
> some lines from maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
> process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
> job: job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
> running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
> with state FAILED due to: Application application_1393848686226_0018 
> failed 2 times due to AM Container for 
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
> Total time for transactions(ms): 69 Number of transactions batched in 
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data.info. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data.info is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.split. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
> closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from nodemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exception from container-launch with container ID: 
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
> USER=hduser       OPERATION=Container Finished - Failed 
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
> failed with state: EXITED_WITH_FAILURE 
> APPID=application_1393848686226_0019 
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Removing container_1393848686226_0019_02_000001 from application 
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Sending out status for container: container_id { app_attempt_id { 
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from RUNNING 
> to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
> Scheduling Log Deletion for application: 
> application_1393848686226_0019, with delay of 10800 seconds
> ...
>
>
> Regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
>> <ma...@roo.ee>> wrote:
>>
>>     Hi
>>
>>     I even don't know what information to provide but my container 
>> log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at 
>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native 
>> Method)
>>             at 
>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>             at
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --     Regards, Margus (Margusja) Roo
>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>     "(serialNumber=37303140314)"
>>     -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>     BjM8j36yJvoBVsfOHQIDAQAB
>>     -----END PUBLIC KEY-----
>>
>>
>


RE: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi

      The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop was compiled with protoc 2.5.0, but a lower version of the protobuf library is present on the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf it contains; it should be 2.5.0.
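
For example, a minimal check (a sketch; the paths assume a stock Apache Hadoop 2.x tarball layout, so adjust them to your installation):

$ hadoop classpath | tr ':' '\n' | grep -i protobuf
# classpath entries are often wildcards like .../share/hadoop/common/lib/*,
# so also list the lib directory itself:
$ ls $HADOOP_HOME/share/hadoop/common/lib | grep -i protobuf

A protobuf-java 2.4.x jar anywhere on that path would explain the VerifyError: code generated by protoc 2.5.0 overrides getUnknownFields(), which the 2.4.x runtime declares final.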
   

Thanks & Regards
Rohith Sharma K S



-----Original Message-----
From: Margusja [mailto:margus@roo.ee] 
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more details.
I'll try to use external java client who submits job.
some lines from maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from nodemanager log:
...
2014-03-03 19:13:19,473 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser       OPERATION=Container Finished - Failed 
TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE
APPID=application_1393848686226_0019
CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_000001 from application
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 }
state: C_COMPLETE diagnostics: "Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds
...


Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
> <ma...@roo.ee>> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>
>     Where to start digging?
>
>     -- 
>     Regards, Margus (Margusja) Roo
>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>     "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi, thanks for your reply.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.profile=hadoop2
Is hadoop2 the right string? I found it in the pom's profile section, so I used it.
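
One way to double-check which profiles that flag actually activates is the standard maven-help-plugin (a sketch, run from the same mahout checkout):

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn help:active-profiles -Dhadoop2.profile=hadoop2

It prints the profiles Maven considers active for each module, so you can see whether a hadoop2 profile was picked up at all.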

...
It compiled:
[INFO] 
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools ................................ SUCCESS [  
1.751 s]
[INFO] Apache Mahout ..................................... SUCCESS [  
0.484 s]
[INFO] Mahout Math ....................................... SUCCESS [ 
12.946 s]
[INFO] Mahout Core ....................................... SUCCESS [ 
14.192 s]
[INFO] Mahout Integration ................................ SUCCESS [  
1.857 s]
[INFO] Mahout Examples ................................... SUCCESS [ 
10.762 s]
[INFO] Mahout Release Package ............................ SUCCESS [  
0.012 s]
[INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 
25.431 s]
[INFO] Mahout Spark bindings ............................. SUCCESS [ 
40.376 s]
[INFO] 
------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] 
------------------------------------------------------------------------
[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] 
------------------------------------------------------------------------

How can I check whether the hadoop2 libs are actually in use?
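
For reference, one way to check is to ask Maven which Hadoop artifacts it actually resolved, a sketch using the standard maven-dependency-plugin:

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn dependency:tree -Dincludes=org.apache.hadoop

hadoop-core 1.x entries in that tree would mean the old Hadoop 1 API is still on the build classpath, while hadoop-client / hadoop-common 2.x entries indicate the hadoop2 libs.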

But unfortunately, the run fails again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
/user/speech/demo -o demo-seqfiles
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.output.compress 
is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.max.split.size 
is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.key.class is deprecated. Instead, use 
mapreduce.map.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_local1589554356_0001
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
http://localhost:8080/
14/03/17 12:07:23 INFO mapreduce.Job: Running job: job_local1589554356_0001
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
config null
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
attempt_local1589554356_0001_m_000000_0
14/03/17 12:07:23 INFO mapred.Task:  Using ResourceCalculatorProcessTree 
: [ ]
14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
     at 
org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
     at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
     at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
     at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
     ... 12 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
     at 
org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
     ... 17 more
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
running in uber mode : false
14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
failed with state FAILED due to: NA
14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
(Minutes: 0.055716666666666664)

Obviously I am doing something wrong :)
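
If I read the trace right, "Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected" 
means code compiled against Hadoop 1.x (where TaskAttemptContext was a 
class) is running on Hadoop 2.x (where it is an interface). A quick way 
to see which definition the job actually loads -- just a sketch, and it 
assumes the hadoop command on the PATH is the one used at runtime:

# the declaration line prints "interface" on Hadoop 2.x and "class" on 1.x
javap -classpath "$(hadoop classpath)" org.apache.hadoop.mapreduce.TaskAttemptContext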

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:15, Margusja wrote:
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I'll try to use external java client who submits job.
> some lines from maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
> process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
> job: job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
> running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
> with state FAILED due to: Application application_1393848686226_0018 
> failed 2 times due to AM Container for 
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
> Total time for transactions(ms): 69 Number of transactions batched in 
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data.info. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data.info is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.split. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
> closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from namemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exception from container-launch with container ID: 
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
> USER=hduser       OPERATION=Container Finished - Failed 
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
> failed with state: EXITED_WITH_FAILURE 
> APPID=application_1393848686226_0019 
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Removing container_1393848686226_0019_02_000001 from application 
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Sending out status for container: container_id { app_attempt_id { 
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from RUNNING 
> to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
> Scheduling Log Deletion for application: 
> application_1393848686226_0019, with delay of 10800 seconds
> ...
>
>
> Tervitades, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
>> <ma...@roo.ee>> wrote:
>>
>>     Hi
>>
>>     I even don't know what information to provide but my container 
>> log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at 
>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native 
>> Method)
>>             at 
>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>             at
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --     Tervitades, Margus (Margusja) Roo
>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>     "(serialNumber=37303140314)"
>>     -----BEGIN PUBLIC KEY-----
>> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>     BjM8j36yJvoBVsfOHQIDAQAB
>>     -----END PUBLIC KEY-----
>>
>>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Stanley Shi <ss...@gopivotal.com>.
Why do you have 2 Hadoop versions in the same pom file? In this case, you are
not going to know which Hadoop class you are actually using.

<dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
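
If the cluster is on Hadoop 2.3.0, a minimal consistent dependency 
section would be just the client artifact -- a sketch, assuming nothing 
in your code still needs the old hadoop-core API:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.3.0</version>
    </dependency>

Then "mvn dependency:tree -Dincludes=org.apache.hadoop" should confirm
that only 2.3.0 artifacts remain on the compile classpath.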



Regards,
Stanley Shi



On Tue, Mar 4, 2014 at 1:15 AM, Margusja <ma...@roo.ee> wrote:

> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I'll try to use external java client who submits job.
> some lines from maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to
> process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job:
> job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job:
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in
> uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed
> with state FAILED due to: Application application_1393848686226_0018 failed
> 2 times due to AM Container for appattempt_1393848686226_0018_000002
> exited with  exitCode: 1 due to: Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:589)
>     at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.
> launchContainer(DefaultContainerExecutor.java:195)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900
> Total time for transactions(ms): 69 Number of transactions batched in
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: blk_
> 1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data. BP-802201089-90.190.106.33-1393506052071
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742056
> _1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: blk_
> 1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/input/data666.noheader.data.info. BP-802201089-90.190.106.33-1393506052071
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742057
> _1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/input/data666.noheader.data.info is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.jar.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742058_1234{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742058
> _1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing
> replication from 3 to 10 for /user/hduser/.staging/job_
> 1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing
> replication from 3 to 10 for /user/hduser/.staging/job_
> 1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.split.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742059_1235{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742059
> _1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742060_1236{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742060
> _1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed
> by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.xml.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742061_1237{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742061
> _1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from namemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exception from container-launch with container ID:
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:589)
>         at org.apache.hadoop.yarn.server.nodemanager.
> DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:
> 195)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO org.apache.hadoop.yarn.server.
> nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Container exited
> with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container
> container_1393848686226_0019_02_000001 transitioned from RUNNING to
> EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up
> container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/
> usercache/hduser/appcache/application_1393848686226_
> 0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=hduser       OPERATION=Container Finished - Failed
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container failed
> with state: EXITED_WITH_FAILURE APPID=application_1393848686226_0019
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container
> container_1393848686226_0019_02_000001 transitioned from
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Removing
> container_1393848686226_0019_02_000001 from application
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for
> appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
> Sending out status for container: container_id { app_attempt_id {
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 }
> id: 1 } state: C_COMPLETE diagnostics: "Exception from container-launch:
> \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat
> org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.
> launchContainer(DefaultContainerExecutor.java:195)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.
> ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting
> resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping
> resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Application
> application_1393848686226_0019 transitioned from RUNNING to
> APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/
> usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for
> appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Application
> application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP
> to FINISHED
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
> Scheduling Log Deletion for application: application_1393848686226_0019,
> with delay of 10800 seconds
> ...
>
>
>
> Tervitades, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee <mailto:
>> margus@roo.ee>> wrote:
>>
>>     Hi
>>
>>     I even don't know what information to provide but my container log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>>     java.security.SecureClassLoader.defineClass(
>> SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native Method)
>>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.
>> newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>>     org.apache.hadoop.yarn.api.records.ApplicationId.
>> newInstance(ApplicationId.java:49)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(
>> ConverterUtils.java:137)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
>> ConverterUtils.java:177)
>>             at
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
>> MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --     Tervitades, Margus (Margusja) Roo
>>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>>
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>>
>>     "(serialNumber=37303140314)"
>>     -----BEGIN PUBLIC KEY-----
>>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>     BjM8j36yJvoBVsfOHQIDAQAB
>>     -----END PUBLIC KEY-----
>>
>>
>>
>

RE: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi

      The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop is compiled against protoc 2.5.0, but a lower version of protobuf is present on the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf is on it; it is expected to be 2.5.0.
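
One quick way to check -- just a sketch, assuming HADOOP_HOME points at 
the Hadoop install and the client is built with Maven:

   # list the protobuf jars bundled with the Hadoop distribution
   find "$HADOOP_HOME" -name 'protobuf-java-*.jar'

   # on the client side, see which protobuf version Maven resolves
   mvn dependency:tree -Dincludes=com.google.protobuf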
   

Thanks & Regards
Rohith Sharma K S



-----Original Message-----
From: Margusja [mailto:margus@roo.ee] 
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more details.
I'll try to use external java client who submits job.
some lines from maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from namemanager log:
...
2014-03-03 19:13:19,473 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser       OPERATION=Container Finished - Failed 
TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE
APPID=application_1393848686226_0019
CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_000001 from application
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 }
state: C_COMPLETE diagnostics: "Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds ...


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
> <ma...@roo.ee>> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
>     
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1
> 343)
>
>
>     Where to start digging?
>
>     -- 
>     Tervitades, Margus (Margusja) Roo
>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>     "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.jar.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742058_1234{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742058
> _1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing
> replication from 3 to 10 for /user/hduser/.staging/job_
> 1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing
> replication from 3 to 10 for /user/hduser/.staging/job_
> 1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.split.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742059_1235{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742059
> _1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742060_1236{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742060
> _1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed
> by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock:
> /user/hduser/.staging/job_1393848686226_0019/job.xml.
> BP-802201089-90.190.106.33-1393506052071 blk_1073742061_1237{
> blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
> updated: 90.190.106.33:50010 is added to blk_1073742061
> _1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[
> ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile:
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from nodemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exception from container-launch with container ID:
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:589)
>         at org.apache.hadoop.yarn.server.nodemanager.
> DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:
> 195)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO org.apache.hadoop.yarn.server.
> nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Container exited
> with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container
> container_1393848686226_0019_02_000001 transitioned from RUNNING to
> EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up
> container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/
> usercache/hduser/appcache/application_1393848686226_
> 0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger:
> USER=hduser       OPERATION=Container Finished - Failed
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container failed
> with state: EXITED_WITH_FAILURE APPID=application_1393848686226_0019
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container
> container_1393848686226_0019_02_000001 transitioned from
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Removing
> container_1393848686226_0019_02_000001 from application
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for
> appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
> Sending out status for container: container_id { app_attempt_id {
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 }
> id: 1 } state: C_COMPLETE diagnostics: "Exception from container-launch:
> \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat
> org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.
> launchContainer(DefaultContainerExecutor.java:195)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat
> org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.
> ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting
> resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping
> resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Application
> application_1393848686226_0019 transitioned from RUNNING to
> APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/
> usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for
> appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.application.Application: Application
> application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP
> to FINISHED
> 2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
> Scheduling Log Deletion for application: application_1393848686226_0019,
> with delay of 10800 seconds
> ...
>
>
>
> Regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>
> On 03/03/14 19:05, Ted Yu wrote:
>
>> Can you tell us the hadoop release you're using?
>>
>> Seems there is an inconsistency in the protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>
>>     Hi
>>
>>     I don't even know what information to provide, but my container log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>>     java.security.SecureClassLoader.defineClass(
>> SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native Method)
>>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.
>> newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>>     org.apache.hadoop.yarn.api.records.ApplicationId.
>> newInstance(ApplicationId.java:49)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(
>> ConverterUtils.java:137)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
>> ConverterUtils.java:177)
>>             at
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
>> MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --
>>     Regards, Margus (Margusja) Roo
>>     +372 51 48 780
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>>
>>
>>
>

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi, thanks for your reply.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.profile=hadoop2

Is hadoop2 the right string? I found it in the pom's profile section, so I
used it.
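
Maybe the profile should be activated with -P instead of a -D property,
something like (just a guess, the right id depends on how the profile is
declared in the pom):

/usr/share/apache-maven/bin/mvn -DskipTests clean install -Phadoop2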

...
it compiled:
[INFO] 
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools ................................ SUCCESS [  1.751 s]
[INFO] Apache Mahout ..................................... SUCCESS [  0.484 s]
[INFO] Mahout Math ....................................... SUCCESS [ 12.946 s]
[INFO] Mahout Core ....................................... SUCCESS [ 14.192 s]
[INFO] Mahout Integration ................................ SUCCESS [  1.857 s]
[INFO] Mahout Examples ................................... SUCCESS [ 10.762 s]
[INFO] Mahout Release Package ............................ SUCCESS [  0.012 s]
[INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 25.431 s]
[INFO] Mahout Spark bindings ............................. SUCCESS [ 40.376 s]
[INFO] 
------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] 
------------------------------------------------------------------------
[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] 
------------------------------------------------------------------------

How can I check whether the hadoop2 libs are in use?
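
One idea (not sure it is the right check) is to ask Maven which hadoop
artifacts it resolved, something like:

/usr/share/apache-maven/bin/mvn dependency:tree -Dincludes=org.apache.hadoop

If only 2.x artifacts show up there, the hadoop2 libs should be on the
build classpath; hadoop-core 1.x entries would mean they are not.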

but unfortunately again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i /user/speech/demo -o demo-seqfiles
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.output.compress 
is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.max.split.size 
is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.key.class is deprecated. Instead, use 
mapreduce.map.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, use mapreduce.job.working.dir
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_local1589554356_0001
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
http://localhost:8080/
14/03/17 12:07:23 INFO mapreduce.Job: Running job: job_local1589554356_0001
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
config null
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
attempt_local1589554356_0001_m_000000_0
14/03/17 12:07:23 INFO mapred.Task:  Using ResourceCalculatorProcessTree 
: [ ]
14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.<init>(CombineFileRecordReader.java:126)
     at 
org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
     at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:491)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
     at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
     at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
     at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
     at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)
     ... 12 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
     at 
org.apache.mahout.text.WholeFileRecordReader.<init>(WholeFileRecordReader.java:59)
     ... 17 more
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
running in uber mode : false
14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
failed with state FAILED due to: NA
14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
(Minutes: 0.055716666666666664)

Obviously I am doing something wrong :)
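
If I read the IncompatibleClassChangeError right, TaskAttemptContext is a
class in hadoop1 but an interface in hadoop2, so a WholeFileRecordReader
compiled against one cannot run against the other. I will try to check
which one my runtime has with something like:

javap -classpath "$(hadoop classpath)" org.apache.hadoop.mapreduce.TaskAttemptContext

If that prints "public interface ...", the runtime side is hadoop2 and the
mahout job jar must be built against the hadoop2 libs as well.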

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"

On 03/03/14 19:15, Margusja wrote:
> Hi
>
> 2.2.0 and 2.3.0 gave me the same container log.
>
> A little bit more details.
> I am trying to use an external java client that submits the job.
> Some lines from the maven pom.xml file:
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-client</artifactId>
>       <version>2.3.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-core</artifactId>
>         <version>1.2.1</version>
>     </dependency>
>
> lines from external client:
> ...
> 2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
> process : 1
> 2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
> 2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for 
> job: job_1393848686226_0018
> 2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
> application_1393848686226_0018
> 2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
> http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
> 2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
> 2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 
> running in uber mode : false
> 2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
> 2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
> with state FAILED due to: Application application_1393848686226_0018 
> failed 2 times due to AM Container for 
> appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
> Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>     at org.apache.hadoop.util.Shell.run(Shell.java:379)
>     at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>     at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> ...
>
> Lines from namenode:
> ...
> 14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
> Total time for transactions(ms): 69 Number of transactions batched in 
> Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
> 14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742050_1226 90.190.106.33:50010
> 14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742050_1226]
> 14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742051_1227 90.190.106.33:50010
> 14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/input/data666.noheader.data.info. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/input/data666.noheader.data.info is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
> 90.190.106.33:50010 to delete [blk_1073742051_1227]
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.jar
> 14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
> replication from 3 to 10 for 
> /user/hduser/.staging/job_1393848686226_0019/job.split
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.split. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is 
> closed by DFSClient_NONMAPREDUCE_-915999412_15
> 14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml. 
> BP-802201089-90.190.106.33-1393506052071 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
> 14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 90.190.106.33:50010 is added to 
> blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
> 14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
> /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
> DFSClient_NONMAPREDUCE_-915999412_15
> ...
>
> Lines from nodemanager log:
> ...
> 2014-03-03 19:13:19,473 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exit code from container container_1393848686226_0019_02_000001 is : 1
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Exception from container-launch with container ID: 
> container_1393848686226_0019_02_000001 and exit code: 1
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>         at org.apache.hadoop.util.Shell.run(Shell.java:379)
>         at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-03-03 19:13:19,474 INFO 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> 2014-03-03 19:13:19,474 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Container exited with a non-zero exit code 1
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> RUNNING to EXITED_WITH_FAILURE
> 2014-03-03 19:13:19,475 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
> Cleaning up container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,496 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
> USER=hduser       OPERATION=Container Finished - Failed 
> TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
> failed with state: EXITED_WITH_FAILURE 
> APPID=application_1393848686226_0019 
> CONTAINERID=container_1393848686226_0019_02_000001
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
> Container container_1393848686226_0019_02_000001 transitioned from 
> EXITED_WITH_FAILURE to DONE
> 2014-03-03 19:13:19,498 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Removing container_1393848686226_0019_02_000001 from application 
> application_1393848686226_0019
> 2014-03-03 19:13:19,499 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event CONTAINER_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:20,160 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Sending out status for container: container_id { app_attempt_id { 
> application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 
> 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from 
> container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: 
> \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
> org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
> java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
> 2014-03-03 19:13:20,161 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: 
> Removed completed container container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,542 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Starting resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:20,543 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
> Stopping resource-monitoring for container_1393848686226_0019_02_000001
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from RUNNING 
> to APPLICATION_RESOURCES_CLEANINGUP
> 2014-03-03 19:13:21,164 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
> Deleting absolute path : 
> /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got 
> event APPLICATION_STOP for appId application_1393848686226_0019
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
> Application application_1393848686226_0019 transitioned from 
> APPLICATION_RESOURCES_CLEANINGUP to FINISHED
> 2014-03-03 19:13:21,165 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
> Scheduling Log Deletion for application: 
> application_1393848686226_0019, with delay of 10800 seconds
> ...
>
>
> Regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>
> On 03/03/14 19:05, Ted Yu wrote:
>> Can you tell us the hadoop release you're using?
>>
>> Seems there is an inconsistency in the protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>
>>     Hi
>>
>>     I don't even know what information to provide, but my container log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at 
>> java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native 
>> Method)
>>             at 
>> java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>> org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>> org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>> org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>             at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>             at
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --
>>     Regards, Margus (Margusja) Roo
>>     +372 51 48 780
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>>
>>
>


> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
> On 03/03/14 19:05, Ted Yu wrote:
>
>> Can you tell us the hadoop release you're using ?
>>
>> Seems there is inconsistency in protobuf library.
>>
>>
>> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>>
>>     Hi
>>
>>     I even don't know what information to provide but my container log is:
>>
>>     2014-03-03 17:36:05,311 FATAL [main]
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>>     MRAppMaster
>>     java.lang.VerifyError: class
>>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>>     overrides final method
>>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>>             at java.lang.ClassLoader.defineClass1(Native Method)
>>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>>             at
>>     java.security.SecureClassLoader.defineClass(
>> SecureClassLoader.java:142)
>>             at
>>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>             at java.security.AccessController.doPrivileged(Native Method)
>>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>             at
>>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>>             at
>>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>>             at java.lang.Class.getConstructor0(Class.java:2803)
>>             at java.lang.Class.getConstructor(Class.java:1718)
>>             at
>>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.
>> newRecordInstance(RecordFactoryPBImpl.java:62)
>>             at
>>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>>             at
>>     org.apache.hadoop.yarn.api.records.ApplicationId.
>> newInstance(ApplicationId.java:49)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(
>> ConverterUtils.java:137)
>>             at
>>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
>> ConverterUtils.java:177)
>>             at
>>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
>> MRAppMaster.java:1343)
>>
>>
>>     Where to start digging?
>>
>>     --
>>     Tervitades, Margus (Margusja) Roo
>>     +372 51 48 780
>>     http://margus.roo.ee
>>     http://ee.linkedin.com/in/margusroo
>>     skype: margusja
>>     ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>>     -----BEGIN PUBLIC KEY-----
>>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>>     BjM8j36yJvoBVsfOHQIDAQAB
>>     -----END PUBLIC KEY-----
>>
>>
>>
>

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more detail:
I'm using an external Java client that submits the job.
Some lines from the Maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>
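
If the client does not actually need any Hadoop 1 API, one way to rule out a
mixed classpath is to keep only the 2.x line in the pom. A minimal sketch of
the dependency section with the 1.x artifact dropped (untested here; version
numbers are the ones from the snippet above):

     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <!-- hadoop-core 1.2.1 removed: it is the Hadoop 1.x artifact and puts
          1.x-era copies of core classes on the same classpath as the
          2.3.0 client -->

Running "mvn dependency:tree" on the project afterwards shows which versions
each remaining artifact still pulls in transitively.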

Lines from the external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running 
in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
with state FAILED due to: Application application_1393848686226_0018 
failed 2 times due to AM Container for 
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...
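
Since the AM container dies before it writes a useful log, a quick check for
a protobuf mismatch is to print which jar each class named in the VerifyError
is loaded from, using the same classpath as the client. A small throwaway
sketch (the WhichJar class below is illustrative, not part of Hadoop):

import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) {
        // The two classes involved in the VerifyError from the container log
        String[] names = {
            "com.google.protobuf.UnknownFieldSet",
            "org.apache.hadoop.yarn.proto.YarnProtos"
        };
        for (String name : names) {
            try {
                CodeSource src = Class.forName(name)
                        .getProtectionDomain().getCodeSource();
                System.out.println(name + " -> "
                        + (src == null ? "<bootstrap>" : src.getLocation()));
            } catch (Throwable t) {
                // Failing to load with a VerifyError here is itself a clue
                System.out.println(name + " -> failed to load: " + t);
            }
        }
    }
}

The generated classes in YarnProtos (built with protoc 2.5.0) override
getUnknownFields(); in protobuf-java 2.4.x that method is final, so loading
2.5-generated code against a 2.4.x runtime produces exactly this VerifyError.
Two different protobuf locations in the output would therefore explain the
failure.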

Lines from the NameNode log:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
Total time for transactions(ms): 69 Number of transactions batched in 
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed 
by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from the NodeManager log:
...
2014-03-03 19:13:19,473 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit 
code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser       OPERATION=Container Finished - Failed 
TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE 
APPID=application_1393848686226_0019 
CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_000001 from application 
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending 
out status for container: container_id { app_attempt_id { application_id 
{ id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } 
state: C_COMPLETE diagnostics: "Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat 
org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to 
APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from 
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, 
with delay of 10800 seconds
...


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>
>     Where to start digging?
>
>     -- 
>     Tervitades, Margus (Margusja) Roo
>     +372 51 48 780
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more details.
I'll try to use external java client who submits job.
some lines from maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running 
in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
with state FAILED due to: Application application_1393848686226_0018 
failed 2 times due to AM Container for 
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
Total time for transactions(ms): 69 Number of transactions batched in 
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed 
by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from namemanager log:
...
2014-03-03 19:13:19,473 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit 
code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser       OPERATION=Container Finished - Failed 
TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE 
APPID=application_1393848686226_0019 
CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_000001 from application 
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending 
out status for container: container_id { app_attempt_id { application_id 
{ id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } 
state: C_COMPLETE diagnostics: "Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat 
org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to 
APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from 
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, 
with delay of 10800 seconds
...


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
> <ma...@roo.ee>> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>
>     Where to start digging?
>
>     -- 
>     Tervitades, Margus (Margusja) Roo
>     +372 51 48 780 <tel:%2B372%2051%2048%20780>
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee <http://ldap.sk.ee> -b c=EE
>     "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Margusja <ma...@roo.ee>.
Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more details.
I'll try to use external java client who submits job.
some lines from maven pom.xml file:
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
       <version>2.3.0</version>
     </dependency>
     <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-core</artifactId>
         <version>1.2.1</version>
     </dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running 
in uber mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
with state FAILED due to: Application application_1393848686226_0018 
failed 2 times due to AM Container for 
appattempt_1393848686226_0018_000002 exited with  exitCode: 1 due to: 
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
     at org.apache.hadoop.util.Shell.run(Shell.java:379)
     at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
     at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
     at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
Total time for transactions(ms): 69 Number of transactions batched in 
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.jar is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing 
replication from 3 to 10 for 
/user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.split. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.split is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed 
by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.xml. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/.staging/job_1393848686226_0019/job.xml is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
...

Lines from namemanager log:
...
2014-03-03 19:13:19,473 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit 
code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Exception from container-launch with container ID: 
container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
         at org.apache.hadoop.util.Shell.run(Shell.java:379)
         at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
         at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
         at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: 
Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser       OPERATION=Container Finished - Failed 
TARGET=ContainerImpl    RESULT=FAILURE       DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE 
APPID=application_1393848686226_0019 
CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_000001 transitioned from 
EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_000001 from application 
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending 
out status for container: container_id { app_attempt_id { application_id 
{ id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } 
state: C_COMPLETE diagnostics: "Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat 
org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to 
APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from 
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, 
with delay of 10800 seconds
...


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja <margus@roo.ee 
> <ma...@roo.ee>> wrote:
>
>     Hi
>
>     I even don't know what information to provide but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main]
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
>     MRAppMaster
>     java.lang.VerifyError: class
>     org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
>     overrides final method
>     getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at
>     java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at
>     java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at
>     sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at
>     java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at
>     org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at
>     org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at
>     org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at
>     org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at
>     org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>
>     Where to start digging?
>
>     -- 
>     Tervitades, Margus (Margusja) Roo
>     +372 51 48 780
>     http://margus.roo.ee
>     http://ee.linkedin.com/in/margusroo
>     skype: margusja
>     ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>     -----BEGIN PUBLIC KEY-----
>     MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
>     5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
>     RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
>     BjM8j36yJvoBVsfOHQIDAQAB
>     -----END PUBLIC KEY-----
>
>


Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Posted by Ted Yu <yu...@gmail.com>.
Can you tell us the hadoop release you're using?

It seems there is an inconsistency in the protobuf library.
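
A quick way to check is to print your build's dependency tree and look for
mixed protobuf versions; a minimal sketch, assuming a Maven build:

    mvn dependency:tree -Dincludes=com.google.protobuf

Hadoop 2.x is compiled against protobuf 2.5.0, whose generated classes
override getUnknownFields(); if an older protobuf-java (2.4.x, where that
method is still final) wins on the classpath, you get exactly this
VerifyError.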


On Mon, Mar 3, 2014 at 8:01 AM, Margusja <ma...@roo.ee> wrote:

> Hi
>
> I don't even know what information to provide, but my container log is:
>
> 2014-03-03 17:36:05,311 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Error starting MRAppMaster
> java.lang.VerifyError: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
> overrides final method getUnknownFields.()Lcom/google/protobuf/
> UnknownFieldSet;
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>         at java.security.SecureClassLoader.defineClass(
> SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>         at java.lang.Class.getDeclaredConstructors0(Native Method)
>         at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>         at java.lang.Class.getConstructor0(Class.java:2803)
>         at java.lang.Class.getConstructor(Class.java:1718)
>         at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.
> newRecordInstance(RecordFactoryPBImpl.java:62)
>         at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>         at org.apache.hadoop.yarn.api.records.ApplicationId.
> newInstance(ApplicationId.java:49)
>         at org.apache.hadoop.yarn.util.ConverterUtils.
> toApplicationAttemptId(ConverterUtils.java:137)
>         at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
> ConverterUtils.java:177)
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
> MRAppMaster.java:1343)
>
>
> Where to start digging?
>
> --
> Tervitades, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
> -----BEGIN PUBLIC KEY-----
> MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
> 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
> RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
> BjM8j36yJvoBVsfOHQIDAQAB
> -----END PUBLIC KEY-----
>
>
