Posted to dev@twill.apache.org by Kristoffer Sjögren <st...@gmail.com> on 2016/01/21 22:59:23 UTC

Yarn 2.7.1

Hi

I'm trying the basic example [1] on YARN 2.7.1, but as soon as the
application starts on the resource manager I get an exception telling
me the container id cannot be parsed.

java.lang.IllegalArgumentException: Invalid containerId:
container_e04_1427159778706_0002_01_000001

I don't have the exact stack trace, but I recall it failing in
ConverterUtils.toContainerId because it assumes that the first
token is an application attempt to be parsed as an integer. This class
resides in hadoop-yarn-common 2.3.0.

Is there any way to either tweak the container id or make Twill use
the 2.7.1 jar instead?

Cheers,
-Kristoffer


[1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
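[Editor's note] The parse failure is consistent with the epoch-prefixed container id format that newer Hadoop versions (2.6+) emit, e.g. container_e04_..., where the e04 token records the ResourceManager epoch; parsers from older hadoop-yarn-common releases such as 2.3.0 assume the token after "container" is the numeric cluster timestamp and throw on "e04". A minimal standalone sketch of the two formats follows (this is illustrative code, not the Hadoop API; ContainerIdSketch and parse are invented names):

```java
// Sketch of the two container-id formats seen in this thread:
//   old: container_<clusterTs>_<appId>_<attemptId>_<containerId>
//   new: container_e<epoch>_<clusterTs>_<appId>_<attemptId>_<containerId>
// A pre-2.6 parser does Long.parseLong on the token after "container",
// which fails with NumberFormatException on an epoch token like "e04".
public class ContainerIdSketch {
    /** Returns {epoch, clusterTimestamp, appId, attemptId, containerId}. */
    public static long[] parse(String id) {
        String[] parts = id.split("_");
        int i = 1;
        long epoch = 0;
        // New format carries an optional epoch token such as "e04".
        if (parts[i].startsWith("e")) {
            epoch = Long.parseLong(parts[i].substring(1));
            i++;
        }
        long clusterTs = Long.parseLong(parts[i]);
        long appId = Long.parseLong(parts[i + 1]);
        long attemptId = Long.parseLong(parts[i + 2]);
        long containerId = Long.parseLong(parts[i + 3]);
        return new long[] {epoch, clusterTs, appId, attemptId, containerId};
    }

    public static void main(String[] args) {
        long[] p = parse("container_e04_1427159778706_0002_01_000001");
        System.out.println(p[0] + " " + p[1]); // 4 1427159778706
    }
}
```

The practical fix is what the thread converges on: run against a Twill build whose Hadoop dependencies match the cluster's Hadoop version, so the newer parser is on the classpath.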

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Sorry for the late reply; I was busy with other things. Actually, I
got a plain YARN application working. It will do for now, but I might
consider Twill another time.

Thanks for your help!

On Mon, Jan 25, 2016 at 10:20 PM, Poorna Chandra <po...@cask.co> wrote:
> Since container 000002 could not be started successfully, you won't find
> the logs in the Resource Manager UI. You'll have to find them on the box
> where the container was launched.
>
> If you look at the App Master logs, you'll see a line like -
> 12:49:33.417 [ApplicationMasterService] INFO
> o.a.t.i.a.RunnableProcessLauncher - Launching in container
> container_e29_1453498444043_0012_01_000002 at
> *hdfs-ix03.se-ix.delta.prod*:45454, [$JAVA_HOME/bin/java
> -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
> -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
> launcher.jar:$HADOOP_CONF_DIR -Xmx359m
> org.apache.twill.launcher.TwillLauncher container.jar
> org.apache.twill.internal.container.TwillContainerMain true
> 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
>
> The stdout/stderr logs for container 000002 will be on the box where the
> container was launched (hdfs-ix03.se-ix.delta.prod in the above case). They
> should be in the Hadoop logs directory, which typically is
> /var/log/hadoop-yarn/container/<application-id>/<container-id>/
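[Editor's note] The directory layout described above can be sketched concretely (ids and host taken from this thread; the log root is the typical default mentioned in the reply and varies with yarn.nodemanager.log-dirs and the distribution):

```shell
# Ids from this thread; substitute your own application/container.
APP_ID=application_1453498444043_0012
CONTAINER_ID=container_e29_1453498444043_0012_01_000002

# Typical NodeManager-local log location for a launched container:
LOG_DIR=/var/log/hadoop-yarn/container/$APP_ID/$CONTAINER_ID
echo "stdout: $LOG_DIR/stdout"
echo "stderr: $LOG_DIR/stderr"

# If log aggregation is enabled on the cluster, the same logs are also
# available centrally after the application finishes:
#   yarn logs -applicationId $APP_ID
```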
>
> Poorna.
>
>
> On Mon, Jan 25, 2016 at 6:15 AM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
>> I got a tip on the Hadoop mailing list to set
>> yarn.nodemanager.delete.debug-delay-sec, which prevented YARN from
>> deleting the app resources and logs immediately.
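[Editor's note] The property mentioned above goes in yarn-site.xml on the NodeManager hosts; a minimal fragment (the 600-second value is just an example):

```xml
<!-- yarn-site.xml: keep finished containers' local directories and logs
     around for 10 minutes (value in seconds) so they can be inspected. -->
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>
```

Changing this typically requires restarting the NodeManagers, and it should be reverted (or kept small) on production clusters since it delays disk cleanup.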
>>
>> However, the 000002 container logs are nowhere to be found, even with
>> this property set. Are you sure that the container got a chance to
>> start?
>>
>> On Sun, Jan 24, 2016 at 12:55 PM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
>> > I'm not sure where I can find those logs? There is no container or
>> > application with this id in the yarn UI. And there is no directory
>> > with that name on the machine that started the application.
>> >
>> > On Sat, Jan 23, 2016 at 11:17 PM, Poorna Chandra <po...@cask.co> wrote:
>> >> The logs pasted in your previous post are from the App Master -
>> >> container_e29_1453498444043_0012_01_000001.
>> >>
>> >> The App Master starts up fine now, and launches the application
>> >> container - container_e29_1453498444043_0012_01_000002. It is the
>> >> application container that dies on launch. We'll need the logs for
>> >> the application container to see why it is dying.
>> >>
>> >> Poorna.
>> >>
>> >> On Sat, Jan 23, 2016 at 1:52 PM, Kristoffer Sjögren <st...@gmail.com>
>> >> wrote:
>> >>
>> >>> I pasted both stdout and stderr in my previous post.
>> >>> Den 23 jan 2016 22:50 skrev "Poorna Chandra" <po...@cask.co>:
>> >>>
>> >>> > Hi Kristoffer,
>> >>> >
>> >>> > Looks like container_e29_1453498444043_0012_01_000002 could not be
>> >>> > started due to some issue. Can you attach the stdout and stderr
>> >>> > logs for container_e29_1453498444043_0012_01_000002?
>> >>> >
>> >>> > Poorna.
>> >>> >
>> >>> >
>> >>> > On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <
>> stoffe@gmail.com>
>> >>> > wrote:
>> >>> >
>> >>> > > Yes, that almost worked. Now the application starts on YARN and
>> >>> > > after a while an exception is thrown and the application exits
>> >>> > > with code 10.
>> >>> > >
>> >>> > >
>> >>> > > Log Type: stdout
>> >>> > >
>> >>> > > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
>> >>> > >
>> >>> > > Log Length: 21097
>> >>> > >
>> >>> > > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
>> >>> > > Launch class
>> >>> (org.apache.twill.internal.appmaster.ApplicationMasterMain)
>> >>> > > with classpath:
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
>> >>> > > Launching main: public static void
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
>> >>> > > throws java.lang.Exception []
>> >>> > > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos
>> krb5
>> >>> > > configuration not found, setting default realm to empty
>> >>> > > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
>> >>> > > DataTransferProtocol not using SaslPropertiesResolver, no QOP
>> found in
>> >>> > > configuration for dfs.data.transfer.protection
>> >>> > > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
>> >>> > > Starting service ApplicationMasterService [NEW].
>> >>> > > 12:49:30.600 [kafka-publisher] WARN
>> o.a.t.i.k.c.SimpleKafkaPublisher
>> >>> > > - Broker list is empty. No Kafka producer is created.
>> >>> > > 12:49:30.704 [TrackerService STARTING] INFO
>> >>> > > o.a.t.i.appmaster.TrackerService - Tracker service started at
>> >>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>> >>> > > 12:49:30.922 [TwillZKPathService STARTING] INFO
>> >>> > > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK
>> path:
>> >>> > >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> >>> > > 12:49:31.102 [kafka-publisher] INFO
>> o.a.t.i.k.c.SimpleKafkaPublisher
>> >>> > > - Update Kafka producer broker list:
>> hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 12:49:31.288 [ApplicationMasterService] INFO
>> >>> > > o.a.t.internal.AbstractTwillService - Create live node
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> >>> > > 12:49:31.308 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Start application master with
>> >>> > > spec:
>> >>> > >
>> >>> >
>> >>>
>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>> >>> > > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
>> >>> > > Service ApplicationMasterService [RUNNING] started.
>> >>> > > 12:49:31.344 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Request 1 container with
>> >>> > > capability <memory:512, vCores:1> for runnable JarRunnable
>> >>> > > 12:49:33.368 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Got container
>> >>> > > container_e29_1453498444043_0012_01_000002
>> >>> > > 12:49:33.369 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
>> >>> > > with
>> >>> > >
>> >>> >
>> >>>
>> RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd
>> >>> > > }
>> >>> > > 12:49:33.417 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.RunnableProcessLauncher - Launching in container
>> >>> > > container_e29_1453498444043_0012_01_000002 at
>> >>> > > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
>> >>> > > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
>> >>> > > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
>> >>> > > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
>> >>> > > org.apache.twill.launcher.TwillLauncher container.jar
>> >>> > > org.apache.twill.internal.container.TwillContainerMain true
>> >>> > > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
>> >>> > > 12:49:33.473 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
>> >>> > > provisioned with 1 instances.
>> >>> > > 12:49:35.302 [zk-client-EventThread] INFO
>> >>> > > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
>> >>> > > 12:49:37.484 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Container
>> >>> > > container_e29_1453498444043_0012_01_000002 completed with
>> >>> > > COMPLETE:Exception from container-launch.
>> >>> > > Container id: container_e29_1453498444043_0012_01_000002
>> >>> > > Exit code: 10
>> >>> > > Stack trace: ExitCodeException exitCode=10:
>> >>> > > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
>> >>> > > at org.apache.hadoop.util.Shell.run(Shell.java:487)
>> >>> > > at
>> >>> > >
>> >>>
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>> >>> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >>> > > at java.lang.Thread.run(Thread.java:745)
>> >>> > >
>> >>> > >
>> >>> > > Container exited with a non-zero exit code 10
>> >>> > > .
>> >>> > > 12:49:37.488 [ApplicationMasterService] WARN
>> >>> > > o.a.t.i.appmaster.RunningContainers - Container
>> >>> > > container_e29_1453498444043_0012_01_000002 exited abnormally with
>> >>> > > state COMPLETE, exit code 10.
>> >>> > > 12:49:37.496 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - All containers completed.
>> >>> > > Shutting down application master.
>> >>> > > 12:49:37.498 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Stop application master with
>> >>> > > spec:
>> >>> > >
>> >>> >
>> >>>
>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>> >>> > > 12:49:37.500 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
>> >>> > > JarRunnable
>> >>> > > 12:49:37.500 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
>> >>> > > JarRunnable
>> >>> > > 12:49:37.512 [ApplicationMasterService] INFO
>> >>> > > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
>> >>> > >
>> >>>
>> hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> >>> > > 12:49:37.512 [ApplicationMasterService] INFO
>> >>> > > o.a.t.internal.AbstractTwillService - Remove live node
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> >>> > > 12:49:37.516 [ApplicationMasterService] INFO
>> >>> > > o.a.t.internal.AbstractTwillService - Service
>> ApplicationMasterService
>> >>> > > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
>> >>> > > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
>> >>> > > Service ApplicationMasterService [TERMINATED] completed.
>> >>> > > 12:49:39.676 [kafka-publisher] WARN
>> o.a.t.i.k.c.SimpleKafkaPublisher
>> >>> > > - Broker list is empty. No Kafka producer is created.
>> >>> > > 12:49:40.037 [TwillZKPathService STOPPING] INFO
>> >>> > > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK
>> path:
>> >>> > >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> >>> > > 12:49:40.248 [TrackerService STOPPING] INFO
>> >>> > > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
>> >>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>> >>> > > Main class completed.
>> >>> > > Launcher completed
>> >>> > > Cleanup directory tmp/twill.launcher-1453549768670-0
>> >>> > >
>> >>> > >
>> >>> > >
>> >>> > > SLF4J: Class path contains multiple SLF4J bindings.
>> >>> > > SLF4J: Found binding in
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >>> > > SLF4J: Found binding in
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >>> > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for
>> an
>> >>> > > explanation.
>> >>> > > SLF4J: Actual binding is of type
>> >>> > > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
>> >>> > > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
>> >>> > > yarn.client.max-cached-nodemanagers-proxies : 0
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying
>> properties
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> log.dir is
>> >>> > > overridden to
>> >>> > >
>> >>> >
>> >>>
>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > default.replication.factor is overridden to 1
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
>> >>> > > overridden to 58668
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > socket.request.max.bytes is overridden to 104857600
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > socket.send.buffer.bytes is overridden to 1048576
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > log.flush.interval.ms is overridden to 1000
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > zookeeper.connect is overridden to
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> broker.id
>> >>> > > is overridden to 1
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > log.retention.hours is overridden to 24
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > socket.receive.buffer.bytes is overridden to 1048576
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > zookeeper.connection.timeout.ms is overridden to 3000
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > num.partitions is overridden to 1
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > log.flush.interval.messages is overridden to 10000
>> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> >>> > > log.segment.bytes is overridden to 536870912
>> >>> > > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
>> >>> > > Failing over to rm2
>> >>> > > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1],
>> Starting
>> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>> Log
>> >>> > > directory
>> >>> > >
>> >>> >
>> >>>
>> '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
>> >>> > > not found, creating it.
>> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>> >>> > > Starting log cleaner every 600000 ms
>> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>> >>> > > Starting log flusher every 3000 ms with the following overrides
>> Map()
>> >>> > > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket
>> connections
>> >>> > > on 0.0.0.0:58668.
>> >>> > > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on
>> Broker
>> >>> > > 1], Started
>> >>> > > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> >>> > > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient
>> event
>> >>> > > thread.
>> >>> > > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
>> >>> > > (SyncConnected)
>> >>> > > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
>> >>> > > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
>> >>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
>> >>> > > Connecting to ZK:
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying
>> properties
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > metadata.broker.list is overridden to
>> hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > request.required.acks is overridden to 1
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > partitioner.class is overridden to
>> >>> > > org.apache.twill.internal.kafka.client.IntegerPartitioner
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > compression.codec is overridden to snappy
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > key.serializer.class is overridden to
>> >>> > > org.apache.twill.internal.kafka.client.IntegerEncoder
>> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> >>> > > serializer.class is overridden to
>> >>> > > org.apache.twill.internal.kafka.client.ByteBufferEncoder
>> >>> > > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
>> >>> > > mx4j-tools.jar is not in the classpath
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Controller starting up
>> >>> > > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1
>> successfully
>> >>> > > elected as leader
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Broker 1 starting become controller state transition
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Controller 1 incremented epoch to 1
>> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> >>> > > correlation id 0 for 1 topic(s) Set(log)
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> >>> > > 16/01/23 12:49:31 INFO controller.RequestSendThread:
>> >>> > > [Controller-1-to-broker-1-send-thread], Starting
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Currently active brokers in the cluster: Set(1)
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Currently shutting brokers in the cluster: Set()
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Current list of topics in the cluster: Set()
>> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
>> state
>> >>> > > machine on controller 1]: No state transitions triggered since no
>> >>> > > partitions are assigned to brokers 1
>> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
>> state
>> >>> > > machine on controller 1]: Invoking state change to OnlineReplica
>> for
>> >>> > > replicas
>> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
>> state
>> >>> > > machine on controller 1]: Started replica state machine with
>> initial
>> >>> > > state -> Map()
>> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> >>> > > state machine on Controller 1]: Started partition state machine
>> with
>> >>> > > initial state -> Map()
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Broker 1 is ready to serve as the new controller with epoch 1
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Partitions being reassigned: Map()
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Partitions already reassigned: List()
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Resuming reassignment of partitions: Map()
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Partitions undergoing preferred replica election:
>> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Partitions that completed preferred replica election:
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Resuming preferred replica election for partitions:
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Starting preferred replica leader election for partitions
>> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> >>> > > state machine on Controller 1]: Invoking state change to
>> >>> > > OnlinePartition for partitions
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Controller startup complete
>> >>> > > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto
>> creation of
>> >>> > > topic log with 1 partitions and replication factor 1 is successful!
>> >>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
>> Started
>> >>> > > 16/01/23 12:49:31 INFO
>> >>> > > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
>> >>> > > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
>> >>> > > [ControllerEpochListener on 1]: Initialized controller epoch to 1
>> and
>> >>> > > zk version 0
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> >>> > > fetching metadata [{TopicMetadata for topic log ->
>> >>> > > No partition metadata for topic log due to
>> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> >>> > > kafka.common.LeaderNotAvailableException
>> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> >>> > > correlation id 1 for 1 topic(s) Set(log)
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> >>> > > 16/01/23 12:49:31 INFO
>> >>> > > controller.PartitionStateMachine$TopicChangeListener:
>> >>> > > [TopicChangeListener on Controller 1]: New topics: [Set(log)],
>> deleted
>> >>> > > topics: [Set()], new partition replica assignment [Map([log,0] ->
>> >>> > > List(1))]
>> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> New
>> >>> > > topic creation callback for [log,0]
>> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> New
>> >>> > > partition creation callback for [log,0]
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> >>> > > state machine on Controller 1]: Invoking state change to
>> NewPartition
>> >>> > > for partitions [log,0]
>> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> >>> > > fetching metadata [{TopicMetadata for topic log ->
>> >>> > > No partition metadata for topic log due to
>> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> >>> > > kafka.common.LeaderNotAvailableException
>> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to
>> collate
>> >>> > > messages by topic, partition due to: Failed to fetch topic metadata
>> >>> > > for topic: log
>> >>> > > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100
>> ms
>> >>> > > before retrying send. Remaining retries = 3
>> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
>> state
>> >>> > > machine on controller 1]: Invoking state change to NewReplica for
>> >>> > > replicas PartitionAndReplica(log,0,1)
>> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> >>> > > state machine on Controller 1]: Invoking state change to
>> >>> > > OnlinePartition for partitions [log,0]
>> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
>> state
>> >>> > > machine on controller 1]: Invoking state change to OnlineReplica
>> for
>> >>> > > replicas PartitionAndReplica(log,0,1)
>> >>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>> >>> > > Broker 1]: Handling LeaderAndIsr request
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>> >>> > > ->
>> >>> > >
>> >>> >
>> >>>
>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>> >>> > > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
>> >>> > > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
>> >>> > > [log,0]
>> >>> > > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
>> >>> > > load of log log-0 with log end offset 0
>> >>> > > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
>> >>> > > Created log for partition [log,0] in
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
>> >>> > > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
>> >>> > > highwatermark file is found. Returning 0 as the highwatermark for
>> >>> > > partition [log,0]
>> >>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>> >>> > > Broker 1]: Handled leader and isr request
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>> >>> > > ->
>> >>> > >
>> >>> >
>> >>>
>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> >>> > > correlation id 2 for 1 topic(s) Set(log)
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> >>> > > fetching metadata [{TopicMetadata for topic log ->
>> >>> > > No partition metadata for topic log due to
>> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> >>> > > kafka.common.LeaderNotAvailableException
>> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> >>> > > correlation id 3 for 1 topic(s) Set(log)
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> >>> > > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for
>> :
>> >>> > > hdfs-ix03.se-ix.delta.prod:45454
>> >>> > > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy:
>> Opening
>> >>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>> >>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket
>> connection to
>> >>> > > /10.3.24.22.
>> >>> > > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1],
>> Shutting
>> >>> > down
>> >>> > > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper
>> >>> client...
>> >>> > > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient
>> event
>> >>> > > thread.
>> >>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on
>> Broker
>> >>> > > 1], Shutting down
>> >>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on
>> Broker
>> >>> > > 1], Shutdown completed
>> >>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka
>> Request
>> >>> > > Handler on Broker 1], shutting down
>> >>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka
>> Request
>> >>> > > Handler on Broker 1], shutted down completely
>> >>> > > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka
>> scheduler
>> >>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>> >>> > > Broker 1]: Shut down
>> >>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>> >>> > > [ReplicaFetcherManager on broker 1] shutting down
>> >>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>> >>> > > [ReplicaFetcherManager on broker 1] shutdown completed
>> >>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>> >>> > > Broker 1]: Shutted down completely
>> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> >>> > > [Controller-1-to-broker-1-send-thread], Shutting down
>> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> >>> > > [Controller-1-to-broker-1-send-thread], Stopped
>> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> >>> > > [Controller-1-to-broker-1-send-thread], Shutdown completed
>> >>> > > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
>> >>> > > Controller shutdown complete
>> >>> > > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut
>> down
>> >>> > > completed
>> >>> > > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy:
>> Opening
>> >>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>> >>> > > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for
>> application to
>> >>> > > be successfully unregistered.
>> >>> > > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
>> >>> > > hdfs-ix03.se-ix.delta.prod:58668
>> >>> > > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
>> >>> > > producer request with correlation id 35 to broker 1 with data for
>> >>> > > partitions [log,0]
>> >>> > > java.nio.channels.ClosedByInterruptException
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>> >>> > > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
>> >>> > > at
>> >>> >
>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
>> >>> > > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>> >>> > > at kafka.utils.Utils$.read(Unknown Source)
>> >>> > > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
>> >>> > > at kafka.network.Receive$class.readCompletely(Unknown Source)
>> >>> > > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown
>> >>> Source)
>> >>> > > at kafka.network.BlockingChannel.receive(Unknown Source)
>> >>> > > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
>> >>> > > at
>> >>> >
>> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
>> >>> > > Source)
>> >>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown
>> >>> > Source)
>> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown
>> Source)
>> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown
>> Source)
>> >>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>> >>> > > at kafka.producer.SyncProducer.send(Unknown Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
>> >>> > > Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>> >>> > > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>> >>> > > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>> >>> > > at
>> >>> >
>> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown
>> >>> > > Source)
>> >>> > > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
>> >>> > > at kafka.producer.Producer.send(Unknown Source)
>> >>> > > at kafka.javaapi.producer.Producer.send(Unknown Source)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
>> >>> > > at
>> >>> >
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> >>> > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> >>> > > at
>> >>> > >
>> >>> >
>> >>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >>> > > at java.lang.Thread.run(Thread.java:745)
>> >>> > > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100
>> ms
>> >>> > > before retrying send. Remaining retries = 3
>> >>> > > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
>> >>> > > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync
>> >>> producers
>> >>> > >
>> >>> > >
>> >>> > > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com>
>> wrote:
>> >>> > > > Hi,
>> >>> > > >
>> >>> > > > It's due to a very old version of the ASM library brought in by
>> >>> > > hadoop/yarn.
>> >>> > > > Please add exclusion of asm library to all hadoop dependencies.
>> >>> > > >
>> >>> > > > <exclusion>
>> >>> > > >   <groupId>asm</groupId>
>> >>> > > >   <artifactId>asm</artifactId>
>> >>> > > > </exclusion>
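Applied to a single hadoop dependency, the exclusion would look roughly like this (hadoop-yarn-common here is just an illustration; the same exclusions block goes on every hadoop artifact in the pom):

```xml
<!-- Sketch: asm excluded from one hadoop dependency; repeat for each hadoop artifact -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```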
>> >>> > > >
>> >>> > > > Terence
>> >>> > > >
>> >>> > > >
>> >>> > > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <
>> >>> stoffe@gmail.com>
>> >>> > > > wrote:
>> >>> > > >
>> >>> > > >> Further adding the following dependencies causes another
>> exception.
>> >>> > > >>
>> >>> > > >> <dependency>
>> >>> > > >>   <groupId>com.google.guava</groupId>
>> >>> > > >>   <artifactId>guava</artifactId>
>> >>> > > >>   <version>13.0</version>
>> >>> > > >> </dependency>
>> >>> > > >> <dependency>
>> >>> > > >>   <groupId>org.apache.hadoop</groupId>
>> >>> > > >>   <artifactId>hadoop-hdfs</artifactId>
>> >>> > > >>   <version>2.7.1</version>
>> >>> > > >> </dependency>
>> >>> > > >>
>> >>> > > >> Exception in thread " STARTING"
>> >>> > > >> java.lang.IncompatibleClassChangeError: class
>> >>> > > >>
>> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
>> >>> > > >> has interface org.objectweb.asm.ClassVisitor as super class
>> >>> > > >> at java.lang.ClassLoader.defineClass1(Native Method)
>> >>> > > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>> >>> > > >> at
>> >>> > >
>> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>> >>> > > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>> >>> > > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>> >>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>> >>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>> >>> > > >> at java.security.AccessController.doPrivileged(Native Method)
>> >>> > > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>> >>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> >>> > > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>> >>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
>> >>> > > >> at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
>> >>> > > >> at java.lang.Thread.run(Thread.java:745)
>> >>> > > >>
>> >>> > > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <
>> >>> > stoffe@gmail.com>
>> >>> > > >> wrote:
>> >>> > > >> > Adding those dependencies fails with the following exception.
>> >>> > > >> >
>> >>> > > >> > Exception in thread "main" java.lang.AbstractMethodError:
>> >>> > > >> >
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
>> >>> > > >> > at
>> >>> org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
>> >>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
>> >>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
>> >>> > > >> > at
>> >>> > >
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>> >>> > > >> > at
>> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>> >>> > > >> > at
>> >>> > > >>
>> >>> >
>> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>> >>> > > >> > at
>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>> >>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>> >>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
>> >>> > > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
>> >>> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >>> > > >> > at
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >>> > > >> > at java.lang.reflect.Method.invoke(Method.java:497)
>> >>> > > >> > at
>> >>> > >
>> com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>> >>> > > >> >
>> >>> > > >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <
>> chtyim@gmail.com>
>> >>> > > wrote:
>> >>> > > >> >> Hi,
>> >>> > > >> >>
>> >>> > > >> >> If you run it from the IDE, you can simply add a dependency on
>> hadoop
>> >>> > > with
>> >>> > > >> >> version 2.7.1. E.g. if you are using Maven, you can add the
>> >>> > > following to
>> >>> > > >> >> your pom.xml dependencies section.
>> >>> > > >> >>
>> >>> > > >> >> <dependency>
>> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> >>> > > >> >>   <artifactId>hadoop-yarn-api</artifactId>
>> >>> > > >> >>   <version>2.7.1</version>
>> >>> > > >> >> </dependency>
>> >>> > > >> >> <dependency>
>> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> >>> > > >> >>   <artifactId>hadoop-yarn-common</artifactId>
>> >>> > > >> >>   <version>2.7.1</version>
>> >>> > > >> >> </dependency>
>> >>> > > >> >> <dependency>
>> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> >>> > > >> >>   <artifactId>hadoop-yarn-client</artifactId>
>> >>> > > >> >>   <version>2.7.1</version>
>> >>> > > >> >> </dependency>
>> >>> > > >> >> <dependency>
>> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> >>> > > >> >>   <artifactId>hadoop-common</artifactId>
>> >>> > > >> >>   <version>2.7.1</version>
>> >>> > > >> >> </dependency>
>> >>> > > >> >>
>> >>> > > >> >> Terence
>> >>> > > >> >>
>> >>> > > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> >>> > > >> >>
>> >>> > > >> >>> I run it from IDE right now, but would like to create a
>> >>> > > >> >>> command line app eventually.
>> >>> > > >> >>>
>> >>> > > >> >>> I should clarify that the exception above is thrown on the
>> >>> > > >> >>> YARN node, not in the IDE.
>> >>> > > >> >>>
>> >>> > > >> >>> > On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <chtyim@gmail.com> wrote:
>> >>> > > >> >>> > Hi Kristoffer,
>> >>> > > >> >>> >
>> >>> > > >> >>> > The example itself shouldn't need any modification.
>> >>> > > >> >>> > However, how do you run that class? Do you run it from the
>> >>> > > >> >>> > IDE or from the command line using the "java" command?
>> >>> > > >> >>> >
>> >>> > > >> >>> > Terence
>> >>> > > >> >>> >
>> >>> > > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> >>> > > >> >>> >> Hi Terence,
>> >>> > > >> >>> >>
>> >>> > > >> >>> >> I'm quite new to Twill and not sure how to do that exactly.
>> >>> > > >> >>> >> Could you show me how to modify the following example to do
>> >>> > > >> >>> >> the same?
>> >>> > > >> >>> >>
>> >>> > > >> >>> >> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> >>> > > >> >>> >>
>> >>> > > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <chtyim@gmail.com> wrote:
>> >>> > > >> >>> >>> Hi Kristoffer,
>> >>> > > >> >>> >>>
>> >>> > > >> >>> >>> Seems like the exception comes from the YARN class
>> >>> > > >> >>> >>> "ConverterUtils". I believe you need to start the application
>> >>> > > >> >>> >>> with the version 2.7.1 Hadoop jars. How do you start the
>> >>> > > >> >>> >>> twill application? Usually on a cluster with hadoop
>> >>> > > >> >>> >>> installed, you can get all the hadoop jars in the classpath
>> >>> > > >> >>> >>> by running this:
>> >>> > > >> >>> >>>
>> >>> > > >> >>> >>> export CP=`hadoop classpath`
>> >>> > > >> >>> >>> java -cp .:$CP YourApp ...
>> >>> > > >> >>> >>>
>> >>> > > >> >>> >>> Assuming your app classes and Twill jars are in the
>> >>> > > >> >>> >>> current directory.
>> >>> > > >> >>> >>>
>> >>> > > >> >>> >>> Terence
>> >>> > > >> >>> >>>
>> >>> > > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> >>> > > >> >>> >>>> Here's the full stacktrace.
>> >>> > > >> >>> >>>>
>> >>> > > >> >>> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>> >>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >>> > > >> >>> >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >>> > > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>> >>> > > >> >>> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>> >>> > > >> >>> >>>> Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>> >>> > > >> >>> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>> >>> > > >> >>> >>>> ... 5 more
>> >>> > > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>> >>> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> >>> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> >>> > > >> >>> >>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> >>> > > >> >>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>> >>> > > >> >>> >>>> ... 6 more
>> >>> > > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId: container_e25_1453466340022_0004_01_000001
>> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>> >>> > > >> >>> >>>> ... 11 more
>> >>> > > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>> >>> > > >> >>> >>>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> >>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
>> >>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
>> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>> >>> > > >> >>> >>>> ... 14 more
>> >>> > > >> >>> >>>>
>> >>> > > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
>> >>> > > >> >>> >>>>> Hi
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
>> >>> > > >> >>> >>>>> exception as soon as the application starts on the resource
>> >>> > > >> >>> >>>>> manager that tells me the container id cannot be parsed.
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid
>> containerId:
>> >>> > > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> I don't have the exact stacktrace but I recall it failing in
>> >>> > > >> >>> >>>>> ConverterUtils.toContainerId because it assumes that the first
>> >>> > > >> >>> >>>>> token is an application attempt to be parsed as an integer.
>> >>> > > >> >>> >>>>> This class resides in hadoop-yarn-common 2.3.0.
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> Is there any way to either tweak the container id or make
>> >>> > > >> >>> >>>>> twill use the 2.7.1 jar instead?
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> Cheers,
>> >>> > > >> >>> >>>>> -Kristoffer
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>>
>> >>> > > >> >>> >>>>> [1]
>> >>> > > >> >>> >>>>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> >>> > > >> >>>
>> >>> > > >>
>> >>> > >
>> >>> >
>> >>>
>>

Re: Yarn 2.7.1

Posted by Poorna Chandra <po...@cask.co>.
Since container 000002 could not be started successfully, you won't find
the logs in the resource manager UI. You'll have to find the logs on the
box where the container was launched.

If you look at App Master logs, you'll see a line like -
12:49:33.417 [ApplicationMasterService] INFO  o.a.t.i.a.RunnableProcessLauncher - Launching in container container_e29_1453498444043_0012_01_000002 at *hdfs-ix03.se-ix.delta.prod*:45454, [$JAVA_HOME/bin/java -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp launcher.jar:$HADOOP_CONF_DIR -Xmx359m org.apache.twill.launcher.TwillLauncher container.jar org.apache.twill.internal.container.TwillContainerMain true 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]

The stdout/stderr logs for container 000002 will be on the box where the
container was launched (hdfs-ix03.se-ix.delta.prod in the above case). They
should be in the hadoop logs directory, which typically is
/var/log/hadoop-yarn/container/<application-id>/<container-id>/
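If log aggregation is enabled on the cluster, the same logs can usually be
pulled with the yarn CLI once the application has finished. A minimal sketch,
assuming yarn.log-aggregation-enable=true and using the application id from
the log excerpt above (substitute your own):

```shell
# Fetch the aggregated container logs for the whole application.
# If aggregation is off, fall back to the node-local log directory
# described above.
APP_ID=application_1453498444043_0012
command -v yarn >/dev/null \
  && yarn logs -applicationId "$APP_ID" \
  || echo "yarn CLI not on PATH"
```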

Poorna.
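As a footnote on the root cause from the top of this thread: since Hadoop 2.6
a container id can carry an epoch token (the "e04"/"e25" part), and the old
ConverterUtils shipped in hadoop-yarn-common 2.3.0 tries to parse that token
as a number. A quick illustrative check (not Hadoop code, just a sketch of
the failing tokenization):

```shell
# The second "_"-separated token of an epoch-format container id is
# "e04", not the numeric cluster timestamp an old parser expects.
id="container_e04_1427159778706_0002_01_000001"
token=$(printf '%s' "$id" | cut -d_ -f2)
echo "second token: $token"            # second token: e04
case "$token" in
  *[!0-9]*) echo "not numeric - old ConverterUtils fails here" ;;
  *)        echo "numeric - pre-2.6 id format" ;;
esac
```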


On Mon, Jan 25, 2016 at 6:15 AM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> I got a tip on the hadoop mailing list to set
> yarn.nodemanager.delete.debug-delay-sec which prevented yarn from
> deleting the app resources and logs immediately.
>
> However, the 000002 container logs is nowhere to be found even with
> this property set? Are you sure that the container got a chance to
> start?
>
> On Sun, Jan 24, 2016 at 12:55 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
> > I'm not sure where I can find those logs? There is no container or
> > application with this id in the yarn UI. And there is no directory
> > with that name on the machine that started the application.
> >
> > On Sat, Jan 23, 2016 at 11:17 PM, Poorna Chandra <po...@cask.co> wrote:
> >> The logs pasted in your previous post are from the App Master -
> >> container_e29_1453498444043_0012_01_000001.
> >>
> >> The App Master starts up fine now, and launches the application
> >> container - container_e29_1453498444043_0012_01_000002. It is the
> >> application container that dies on launch. We'll need the logs for
> >> the application container to see why it is dying.
> >>
> >> Poorna.
> >>
> >> On Sat, Jan 23, 2016 at 1:52 PM, Kristoffer Sjögren <st...@gmail.com>
> >> wrote:
> >>
> >>> I pasted both stdout and stderr in my previous post.
> >>> On 23 Jan 2016 at 22:50, "Poorna Chandra" <po...@cask.co> wrote:
> >>>
> >>> > Hi Kristoffer,
> >>> >
> >>> > Looks like container_e29_1453498444043_0012_01_000002 could not be
> >>> > started due to some issue. Can you attach the stdout and stderr
> >>> > logs for container_e29_1453498444043_0012_01_000002?
> >>> >
> >>> > Poorna.
> >>> >
> >>> >
> >>> > On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> >>> >
> >>> > > Yes, that almost worked. Now the application starts on Yarn and
> >>> > > after a while an exception is thrown and the application exits
> >>> > > with code 10.
> >>> > >
> >>> > >
> >>> > >
> >>> > > Log Type: stdout
> >>> > >
> >>> > > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
> >>> > >
> >>> > > Log Length: 21097
> >>> > >
> >>> > > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
> >>> > > Launch class (org.apache.twill.internal.appmaster.ApplicationMasterMain)
> >>> > > with classpath:
> >>> > > [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
> >>> > > file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
> >>> > > Launching main: public static void
> >>> > >
> >>> > >
> >>> >
> >>>
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
> >>> > > throws java.lang.Exception []
> >>> > > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos
> krb5
> >>> > > configuration not found, setting default realm to empty
> >>> > > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
> >>> > > DataTransferProtocol not using SaslPropertiesResolver, no QOP
> found in
> >>> > > configuration for dfs.data.transfer.protection
> >>> > > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
> >>> > > Starting service ApplicationMasterService [NEW].
> >>> > > 12:49:30.600 [kafka-publisher] WARN
> o.a.t.i.k.c.SimpleKafkaPublisher
> >>> > > - Broker list is empty. No Kafka producer is created.
> >>> > > 12:49:30.704 [TrackerService STARTING] INFO
> >>> > > o.a.t.i.appmaster.TrackerService - Tracker service started at
> >>> > > http://hdfs-ix03.se-ix.delta.prod:51793
> >>> > > 12:49:30.922 [TwillZKPathService STARTING] INFO
> >>> > > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK
> path:
> >>> > >
> >>>
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> >>> > > 12:49:31.102 [kafka-publisher] INFO
> o.a.t.i.k.c.SimpleKafkaPublisher
> >>> > > - Update Kafka producer broker list:
> hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 12:49:31.288 [ApplicationMasterService] INFO
> >>> > > o.a.t.internal.AbstractTwillService - Create live node
> >>> > >
> >>> > >
> >>> >
> >>>
> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> >>> > > 12:49:31.308 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Start application master with
> >>> > > spec:
> >>> > >
> >>> >
> >>>
> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> >>> > > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
> >>> > > Service ApplicationMasterService [RUNNING] started.
> >>> > > 12:49:31.344 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Request 1 container with
> >>> > > capability <memory:512, vCores:1> for runnable JarRunnable
> >>> > > 12:49:33.368 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Got container
> >>> > > container_e29_1453498444043_0012_01_000002
> >>> > > 12:49:33.369 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
> >>> > > with
> >>> > >
> >>> >
> >>>
> RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd
> >>> > > }
> >>> > > 12:49:33.417 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.RunnableProcessLauncher - Launching in container
> >>> > > container_e29_1453498444043_0012_01_000002 at
> >>> > > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
> >>> > > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
> >>> > > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
> >>> > > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
> >>> > > org.apache.twill.launcher.TwillLauncher container.jar
> >>> > > org.apache.twill.internal.container.TwillContainerMain true
> >>> > > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
> >>> > > 12:49:33.473 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
> >>> > > provisioned with 1 instances.
> >>> > > 12:49:35.302 [zk-client-EventThread] INFO
> >>> > > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
> >>> > > {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
> >>> > > 12:49:37.484 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Container
> >>> > > container_e29_1453498444043_0012_01_000002 completed with
> >>> > > COMPLETE:Exception from container-launch.
> >>> > > Container id: container_e29_1453498444043_0012_01_000002
> >>> > > Exit code: 10
> >>> > > Stack trace: ExitCodeException exitCode=10:
> >>> > > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
> >>> > > at org.apache.hadoop.util.Shell.run(Shell.java:487)
> >>> > > at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
> >>> > > at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> >>> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> >>> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> >>> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >>> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>> > > at java.lang.Thread.run(Thread.java:745)
> >>> > >
> >>> > >
> >>> > > Container exited with a non-zero exit code 10.
> >>> > > 12:49:37.488 [ApplicationMasterService] WARN
> >>> > > o.a.t.i.appmaster.RunningContainers - Container
> >>> > > container_e29_1453498444043_0012_01_000002 exited abnormally with
> >>> > > state COMPLETE, exit code 10.
> >>> > > 12:49:37.496 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - All containers completed.
> >>> > > Shutting down application master.
> >>> > > 12:49:37.498 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Stop application master with
> >>> > > spec:
> >>> > > {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> >>> > > 12:49:37.500 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
> >>> > > JarRunnable
> >>> > > 12:49:37.500 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
> >>> > > JarRunnable
> >>> > > 12:49:37.512 [ApplicationMasterService] INFO
> >>> > > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
> >>> > > hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> >>> > > 12:49:37.512 [ApplicationMasterService] INFO
> >>> > > o.a.t.internal.AbstractTwillService - Remove live node
> >>> > > zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> >>> > > 12:49:37.516 [ApplicationMasterService] INFO
> >>> > > o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
> >>> > > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
> >>> > > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
> >>> > > Service ApplicationMasterService [TERMINATED] completed.
> >>> > > 12:49:39.676 [kafka-publisher] WARN o.a.t.i.k.c.SimpleKafkaPublisher
> >>> > > - Broker list is empty. No Kafka producer is created.
> >>> > > 12:49:40.037 [TwillZKPathService STOPPING] INFO
> >>> > > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
> >>> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> >>> > > 12:49:40.248 [TrackerService STOPPING] INFO
> >>> > > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
> >>> > > http://hdfs-ix03.se-ix.delta.prod:51793
> >>> > > Main class completed.
> >>> > > Launcher completed
> >>> > > Cleanup directory tmp/twill.launcher-1453549768670-0
> >>> > >
> >>> > >
> >>> > >
> >>> > > SLF4J: Class path contains multiple SLF4J bindings.
> >>> > > SLF4J: Found binding in
> >>> > > [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >>> > > SLF4J: Found binding in
> >>> > > [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >>> > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> >>> > > explanation.
> >>> > > SLF4J: Actual binding is of type
> >>> > > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
> >>> > > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
> >>> > > yarn.client.max-cached-nodemanagers-proxies : 0
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying
> properties
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
> >>> > > overridden to
> >>> > > /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > default.replication.factor is overridden to 1
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
> >>> > > overridden to 58668
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > socket.request.max.bytes is overridden to 104857600
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > socket.send.buffer.bytes is overridden to 1048576
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > log.flush.interval.ms is overridden to 1000
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > zookeeper.connect is overridden to
> >>> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> broker.id
> >>> > > is overridden to 1
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > log.retention.hours is overridden to 24
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > socket.receive.buffer.bytes is overridden to 1048576
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > zookeeper.connection.timeout.ms is overridden to 3000
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > num.partitions is overridden to 1
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > log.flush.interval.messages is overridden to 10000
> >>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> >>> > > log.segment.bytes is overridden to 536870912
> >>> > > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
> >>> > > Failing over to rm2
> >>> > > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1],
> Starting
> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> Log
> >>> > > directory
> >>> > > '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
> >>> > > not found, creating it.
> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> >>> > > Starting log cleaner every 600000 ms
> >>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> >>> > > Starting log flusher every 3000 ms with the following overrides
> Map()
> >>> > > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket
> connections
> >>> > > on 0.0.0.0:58668.
> >>> > > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on
> Broker
> >>> > > 1], Started
> >>> > > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
> >>> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> >>> > > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient
> event
> >>> > > thread.
> >>> > > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
> >>> > > (SyncConnected)
> >>> > > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
> >>> > > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
> >>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
> >>> > > Connecting to ZK:
> >>> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying
> properties
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > metadata.broker.list is overridden to
> hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > request.required.acks is overridden to 1
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > partitioner.class is overridden to
> >>> > > org.apache.twill.internal.kafka.client.IntegerPartitioner
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > compression.codec is overridden to snappy
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > key.serializer.class is overridden to
> >>> > > org.apache.twill.internal.kafka.client.IntegerEncoder
> >>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> >>> > > serializer.class is overridden to
> >>> > > org.apache.twill.internal.kafka.client.ByteBufferEncoder
> >>> > > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
> >>> > > mx4j-tools.jar is not in the classpath
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Controller starting up
> >>> > > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1
> successfully
> >>> > > elected as leader
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Broker 1 starting become controller state transition
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Controller 1 incremented epoch to 1
> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> >>> > > correlation id 0 for 1 topic(s) Set(log)
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> >>> > > 16/01/23 12:49:31 INFO controller.RequestSendThread:
> >>> > > [Controller-1-to-broker-1-send-thread], Starting
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Currently active brokers in the cluster: Set(1)
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Currently shutting brokers in the cluster: Set()
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Current list of topics in the cluster: Set()
> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
> state
> >>> > > machine on controller 1]: No state transitions triggered since no
> >>> > > partitions are assigned to brokers 1
> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
> state
> >>> > > machine on controller 1]: Invoking state change to OnlineReplica
> for
> >>> > > replicas
> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
> state
> >>> > > machine on controller 1]: Started replica state machine with
> initial
> >>> > > state -> Map()
> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> >>> > > state machine on Controller 1]: Started partition state machine
> with
> >>> > > initial state -> Map()
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Broker 1 is ready to serve as the new controller with epoch 1
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Partitions being reassigned: Map()
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Partitions already reassigned: List()
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Resuming reassignment of partitions: Map()
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Partitions undergoing preferred replica election:
> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Partitions that completed preferred replica election:
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Resuming preferred replica election for partitions:
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Starting preferred replica leader election for partitions
> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> >>> > > state machine on Controller 1]: Invoking state change to
> >>> > > OnlinePartition for partitions
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> >>> > > Controller startup complete
> >>> > > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto
> creation of
> >>> > > topic log with 1 partitions and replication factor 1 is successful!
> >>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
> Started
> >>> > > 16/01/23 12:49:31 INFO
> >>> > > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
> >>> > > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
> >>> > > [ControllerEpochListener on 1]: Initialized controller epoch to 1
> and
> >>> > > zk version 0
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> >>> > > hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> >>> > > fetching metadata [{TopicMetadata for topic log ->
> >>> > > No partition metadata for topic log due to
> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> >>> > > kafka.common.LeaderNotAvailableException
> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> >>> > > correlation id 1 for 1 topic(s) Set(log)
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> >>> > > 16/01/23 12:49:31 INFO
> >>> > > controller.PartitionStateMachine$TopicChangeListener:
> >>> > > [TopicChangeListener on Controller 1]: New topics: [Set(log)],
> deleted
> >>> > > topics: [Set()], new partition replica assignment [Map([log,0] ->
> >>> > > List(1))]
> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> New
> >>> > > topic creation callback for [log,0]
> >>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> New
> >>> > > partition creation callback for [log,0]
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> >>> > > hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> >>> > > state machine on Controller 1]: Invoking state change to
> NewPartition
> >>> > > for partitions [log,0]
> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> >>> > > fetching metadata [{TopicMetadata for topic log ->
> >>> > > No partition metadata for topic log due to
> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> >>> > > kafka.common.LeaderNotAvailableException
> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to
> collate
> >>> > > messages by topic, partition due to: Failed to fetch topic metadata
> >>> > > for topic: log
> >>> > > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100
> ms
> >>> > > before retrying send. Remaining retries = 3
> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
> state
> >>> > > machine on controller 1]: Invoking state change to NewReplica for
> >>> > > replicas PartitionAndReplica(log,0,1)
> >>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> >>> > > state machine on Controller 1]: Invoking state change to
> >>> > > OnlinePartition for partitions [log,0]
> >>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica
> state
> >>> > > machine on controller 1]: Invoking state change to OnlineReplica
> for
> >>> > > replicas PartitionAndReplica(log,0,1)
> >>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> >>> > > Broker 1]: Handling LeaderAndIsr request
> >>> > > Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0) ->
> >>> > > (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> >>> > > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
> >>> > > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
> >>> > > [log,0]
> >>> > > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
> >>> > > load of log log-0 with log end offset 0
> >>> > > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
> >>> > > Created log for partition [log,0] in
> >>> > > /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
> >>> > > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
> >>> > > highwatermark file is found. Returning 0 as the highwatermark for
> >>> > > partition [log,0]
> >>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> >>> > > Broker 1]: Handled leader and isr request
> >>> > > Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0) ->
> >>> > > (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> >>> > > correlation id 2 for 1 topic(s) Set(log)
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> >>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> >>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> >>> > > hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> >>> > > fetching metadata [{TopicMetadata for topic log ->
> >>> > > No partition metadata for topic log due to
> >>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> >>> > > kafka.common.LeaderNotAvailableException
> >>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> >>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> >>> > > correlation id 3 for 1 topic(s) Set(log)
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> >>> > > hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> >>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> >>> > > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for
> :
> >>> > > hdfs-ix03.se-ix.delta.prod:45454
> >>> > > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy:
> Opening
> >>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
> >>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket
> connection to
> >>> > > /10.3.24.22.
> >>> > > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1],
> Shutting
> >>> > down
> >>> > > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper
> >>> client...
> >>> > > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient
> event
> >>> > > thread.
> >>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on
> Broker
> >>> > > 1], Shutting down
> >>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on
> Broker
> >>> > > 1], Shutdown completed
> >>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka
> Request
> >>> > > Handler on Broker 1], shutting down
> >>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka
> Request
> >>> > > Handler on Broker 1], shutted down completely
> >>> > > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka
> scheduler
> >>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> >>> > > Broker 1]: Shut down
> >>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> >>> > > [ReplicaFetcherManager on broker 1] shutting down
> >>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> >>> > > [ReplicaFetcherManager on broker 1] shutdown completed
> >>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> >>> > > Broker 1]: Shutted down completely
> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> >>> > > [Controller-1-to-broker-1-send-thread], Shutting down
> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> >>> > > [Controller-1-to-broker-1-send-thread], Stopped
> >>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> >>> > > [Controller-1-to-broker-1-send-thread], Shutdown completed
> >>> > > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
> >>> > > Controller shutdown complete
> >>> > > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut
> down
> >>> > > completed
> >>> > > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy:
> Opening
> >>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
> >>> > > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for
> application to
> >>> > > be successfully unregistered.
> >>> > > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
> >>> > > hdfs-ix03.se-ix.delta.prod:58668
> >>> > > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
> >>> > > producer request with correlation id 35 to broker 1 with data for
> >>> > > partitions [log,0]
> >>> > > java.nio.channels.ClosedByInterruptException
> >>> > > at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> >>> > > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
> >>> > > at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
> >>> > > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> >>> > > at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> >>> > > at kafka.utils.Utils$.read(Unknown Source)
> >>> > > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
> >>> > > at kafka.network.Receive$class.readCompletely(Unknown Source)
> >>> > > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
> >>> > > at kafka.network.BlockingChannel.receive(Unknown Source)
> >>> > > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
> >>> > > at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
> >>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> >>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> >>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
> >>> > > at kafka.producer.SyncProducer.send(Unknown Source)
> >>> > > at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown Source)
> >>> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
> >>> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
> >>> > > at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> >>> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> >>> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> >>> > > at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> >>> > > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> >>> > > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> >>> > > at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> >>> > > at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown Source)
> >>> > > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
> >>> > > at kafka.producer.Producer.send(Unknown Source)
> >>> > > at kafka.javaapi.producer.Producer.send(Unknown Source)
> >>> > > at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
> >>> > > at org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
> >>> > > at org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
> >>> > > at org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
> >>> > > at org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
> >>> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >>> > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> >>> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> >>> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> >>> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>> > > at java.lang.Thread.run(Thread.java:745)
> >>> > > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100
> ms
> >>> > > before retrying send. Remaining retries = 3
> >>> > > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
> >>> > > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync
> >>> producers
> >>> > >
> >>> > >
> >>> > > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com>
> wrote:
> >>> > > > Hi,
> >>> > > >
> >>> > > > It's due to a very old version of the ASM library that is brought in
> >>> > > > by hadoop/yarn. Please add an exclusion of the asm library to all
> >>> > > > hadoop dependencies.
> >>> > > >
> >>> > > > <exclusion>
> >>> > > >   <groupId>asm</groupId>
> >>> > > >   <artifactId>asm</artifactId>
> >>> > > > </exclusion>
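
Applied to a pom, the exclusion above sits inside an `<exclusions>` wrapper on each hadoop dependency. A minimal sketch (the `hadoop-yarn-api` artifact and version shown here are assumptions — repeat the exclusion for every hadoop-* dependency in your build):

```xml
<!-- Sketch: exclude the old asm:asm that Hadoop pulls in transitively,
     so that the newer ASM used by Twill wins on the classpath. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-api</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```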
> >>> > > >
> >>> > > > Terence
> >>> > > >
> >>> > > >
> >>> > > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <
> >>> stoffe@gmail.com>
> >>> > > > wrote:
> >>> > > >
> >>> > > >> Further, adding the following dependencies causes another
> >>> > > >> exception.
> >>> > > >>
> >>> > > >> <dependency>
> >>> > > >>   <groupId>com.google.guava</groupId>
> >>> > > >>   <artifactId>guava</artifactId>
> >>> > > >>   <version>13.0</version>
> >>> > > >> </dependency>
> >>> > > >> <dependency>
> >>> > > >>   <groupId>org.apache.hadoop</groupId>
> >>> > > >>   <artifactId>hadoop-hdfs</artifactId>
> >>> > > >>   <version>2.7.1</version>
> >>> > > >> </dependency>
> >>> > > >>
> >>> > > >> Exception in thread " STARTING"
> >>> > > >> java.lang.IncompatibleClassChangeError: class
> >>> > > >> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
> >>> > > >> has interface org.objectweb.asm.ClassVisitor as super class
> >>> > > >> at java.lang.ClassLoader.defineClass1(Native Method)
> >>> > > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> >>> > > >> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> >>> > > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> >>> > > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> >>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> >>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> >>> > > >> at java.security.AccessController.doPrivileged(Native Method)
> >>> > > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> >>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >>> > > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> >>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >>> > > >> at org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
> >>> > > >> at org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
> >>> > > >> at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
> >>> > > >> at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
> >>> > > >> at org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
> >>> > > >> at org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
> >>> > > >> at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
> >>> > > >> at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
> >>> > > >> at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
> >>> > > >> at org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
> >>> > > >> at org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
> >>> > > >> at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
> >>> > > >> at java.lang.Thread.run(Thread.java:745)
> >>> > > >>
> >>> > > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <
> >>> > stoffe@gmail.com>
> >>> > > >> wrote:
> >>> > > >> > Adding those dependencies fails with the following exception.
> >>> > > >> >
> >>> > > >> > Exception in thread "main" java.lang.AbstractMethodError:
> >>> > > >> > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> >>> > > >> > at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> >>> > > >> > at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> >>> > > >> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> >>> > > >> > at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> >>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> >>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> >>> > > >> > at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> >>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> >>> > > >> > at org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> >>> > > >> > at org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> >>> > > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> >>> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>> > > >> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>> > > >> > at java.lang.reflect.Method.invoke(Method.java:497)
> >>> > > >> > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> >>> > > >> >
> >>> > > >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <
> chtyim@gmail.com>
> >>> > > wrote:
> >>> > > >> >> Hi,
> >>> > > >> >>
> >>> > > >> >> If you run it from the IDE, you can simply add a dependency on
> >>> > > >> >> hadoop with version 2.7.1. E.g. if you are using Maven, you can
> >>> > > >> >> add the following to your pom.xml dependencies section.
> >>> > > >> >>
> >>> > > >> >> <dependency>
> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
> >>> > > >> >>   <artifactId>hadoop-yarn-api</artifactId>
> >>> > > >> >>   <version>2.7.1</version>
> >>> > > >> >> </dependency>
> >>> > > >> >> <dependency>
> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
> >>> > > >> >>   <artifactId>hadoop-yarn-common</artifactId>
> >>> > > >> >>   <version>2.7.1</version>
> >>> > > >> >> </dependency>
> >>> > > >> >> <dependency>
> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
> >>> > > >> >>   <artifactId>hadoop-yarn-client</artifactId>
> >>> > > >> >>   <version>2.7.1</version>
> >>> > > >> >> </dependency>
> >>> > > >> >> <dependency>
> >>> > > >> >>   <groupId>org.apache.hadoop</groupId>
> >>> > > >> >>   <artifactId>hadoop-common</artifactId>
> >>> > > >> >>   <version>2.7.1</version>
> >>> > > >> >> </dependency>
> >>> > > >> >>
> >>> > > >> >> Terence
> >>> > > >> >>
> >>> > > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <
> >>> > > stoffe@gmail.com>
> >>> > > >> >> wrote:
> >>> > > >> >>
> >>> > > >> >>> I run it from the IDE right now, but would like to create a
> >>> > > >> >>> command line app eventually.
> >>> > > >> >>>
> >>> > > >> >>> I should clarify that the exception above is thrown on the
> >>> > > >> >>> YARN node, not in the IDE.
> >>> > > >> >>>
> >>> > > >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <
> chtyim@gmail.com>
> >>> > > wrote:
> >>> > > >> >>> > Hi Kristoffer,
> >>> > > >> >>> >
> >>> > > >> >>> > The example itself shouldn't need any modification. However,
> >>> > > >> >>> > how do you run that class? Do you run it from the IDE or from
> >>> > > >> >>> > the command line using the "java" command?
> >>> > > >> >>> >
> >>> > > >> >>> > Terence
> >>> > > >> >>> >
> >>> > > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
> >>> > > >> stoffe@gmail.com>
> >>> > > >> >>> wrote:
> >>> > > >> >>> >> Hi Terence,
> >>> > > >> >>> >>
> >>> > > >> >>> >> I'm quite new to Twill and not sure how to do that exactly.
> >>> > > >> >>> >> Could you show me how to modify the following example to do
> >>> > > >> >>> >> the same?
> >>> > > >> >>> >>
> >>> > > >> >>> >>
> >>> > > >> >>>
> >>> > > >>
> >>> > >
> >>> >
> >>>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >>> > > >> >>> >>
> >>> > > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <
> >>> chtyim@gmail.com
> >>> > >
> >>> > > >> wrote:
> >>> > > >> >>> >>> Hi Kristoffer,
> >>> > > >> >>> >>>
> >>> > > >> >>> >>> Seems like the exception comes from the YARN class
> >>> > > >> >>> >>> "ConverterUtils". I believe you need to start the
> >>> > > >> >>> >>> application with the version 2.7.1 Hadoop jars. How do you
> >>> > > >> >>> >>> start the twill application? Usually on a cluster with
> >>> > > >> >>> >>> hadoop installed, you can get all the hadoop jars in the
> >>> > > >> >>> >>> classpath by running this:
> >>> > > >> >>> >>>
> >>> > > >> >>> >>> export CP=`hadoop classpath`
> >>> > > >> >>> >>> java -cp .:$CP YourApp ...
> >>> > > >> >>> >>>
> >>> > > >> >>> >>> Assuming your app classes and Twill jars are in the
> >>> > > >> >>> >>> current directory.
> >>> > > >> >>> >>>
> >>> > > >> >>> >>> Terence
> >>> > > >> >>> >>>
> >>> > > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
> >>> > > >> stoffe@gmail.com>
> >>> > > >> >>> wrote:
> >>> > > >> >>> >>>> Here's the full stacktrace.
> >>> > > >> >>> >>>>
> >>> > > >> >>> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
> >>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>> > > >> >>> >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>> > > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> >>> > > >> >>> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> >>> > > >> >>> >>>> Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
> >>> > > >> >>> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> >>> > > >> >>> >>>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> >>> > > >> >>> >>>> ... 5 more
> >>> > > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
> >>> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>> > > >> >>> >>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>> > > >> >>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> >>> > > >> >>> >>>> ... 6 more
> >>> > > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId: container_e25_1453466340022_0004_01_000001
> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> >>> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> >>> > > >> >>> >>>> ... 11 more
> >>> > > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
> >>> > > >> >>> >>>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
> >>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> >>> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> >>> > > >> >>> >>>> ... 14 more
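
To make the root cause concrete: the epoch-prefixed container id puts "e25"
where an older parser expects the numeric cluster timestamp. The following
standalone sketch (not Twill or Hadoop code; the split logic is a
simplification of what ConverterUtils does internally) reproduces the same
NumberFormatException:

```java
public class ContainerIdParseDemo {
    public static void main(String[] args) {
        // Epoch-prefixed container id format as emitted by newer YARN versions.
        String containerId = "container_e25_1453466340022_0004_01_000001";
        String[] parts = containerId.split("_");
        // A parser unaware of the epoch prefix expects parts[1] to be the
        // numeric cluster timestamp, but here it is the token "e25".
        try {
            Long.parseLong(parts[1]);
            System.out.println("parsed ok");
        } catch (NumberFormatException e) {
            // Same failure as the bottom of the stack trace above.
            System.out.println("NumberFormatException for: " + parts[1]);
        }
    }
}
```

Running against the 2.7.1 hadoop jars, which understand the epoch prefix,
avoids this parse failure.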
> >>> > > >> >>> >>>>
> >>> > > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
> >>> > > >> >>> stoffe@gmail.com> wrote:
> >>> > > >> >>> >>>>> Hi
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
> >>> > > >> >>> >>>>> exception as soon as the application starts on the
> >>> > > >> >>> >>>>> resource manager that tells me the container id cannot be
> >>> > > >> >>> >>>>> parsed.
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid
> containerId:
> >>> > > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> I don't have the exact stacktrace but I recall it failing
> >>> > > >> >>> >>>>> in ConverterUtils.toContainerId because it assumes that
> >>> > > >> >>> >>>>> the first token is an application attempt to be parsed as
> >>> > > >> >>> >>>>> an integer. This class resides in hadoop-yarn-common 2.3.0.
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> Is there any way to either tweak the container id or
> >>> > > >> >>> >>>>> make twill use the 2.7.1 jar instead?
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> Cheers,
> >>> > > >> >>> >>>>> -Kristoffer
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>>
> >>> > > >> >>> >>>>> [1]
> >>> > > >> >>>
> >>> > > >>
> >>> > >
> >>> >
> >>>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >>> > > >> >>>
> >>> > > >>
> >>> > >
> >>> >
> >>>
>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
I got a tip on the hadoop mailing list to set
yarn.nodemanager.delete.debug-delay-sec which prevented yarn from
deleting the app resources and logs immediately.

However, the 000002 container logs are nowhere to be found even with
this property set. Are you sure that the container got a chance to
start?
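
For reference, that property goes in yarn-site.xml on the NodeManager
hosts (the value is in seconds; 600 here is just an example), and the
NodeManagers need a restart for it to take effect:

<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>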

On Sun, Jan 24, 2016 at 12:55 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
> I'm not sure where I can find those logs? There is no container or
> application with this id in the yarn UI. And there is no directory
> with that name on the machine that started the application.
>
> On Sat, Jan 23, 2016 at 11:17 PM, Poorna Chandra <po...@cask.co> wrote:
>> The logs pasted in your previous post are from the App Master -
>> container_e29_1453498444043_0012_01_000001.
>>
>> The App Master starts up fine now, and launches the application container -
>> container_e29_1453498444043_0012_01_000002. It is the application container
>> that dies on launch. We'll need the logs for the application container to
>> see why it is dying.
>>
>> Poorna.
>>
>> On Sat, Jan 23, 2016 at 1:52 PM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
>>
>>> I pasted both stdout and stderr in my previous post.
>>> Den 23 jan 2016 22:50 skrev "Poorna Chandra" <po...@cask.co>:
>>>
>>> > Hi Kristoffer,
>>> >
>>> > Looks like container_e29_1453498444043_0012_01_000002 could not be
>>> started
>>> > due to some issue. Can you attach the stdout and stderr logs for
>>> > container_e29_1453498444043_0012_01_000002?
>>> >
>>> > Poorna.
>>> >
>>> >
>>> > On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <st...@gmail.com>
>>> > wrote:
>>> >
>>> > > Yes, that almost worked. Now the application starts on Yarn and after
>>> > > a while an exception is thrown and the application exits with code 10.
>>> > >
>>> > >
>>> > > Log Type: stdout
>>> > >
>>> > > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
>>> > >
>>> > > Log Length: 21097
>>> > >
>>> > > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
>>> > > Launch class
>>> (org.apache.twill.internal.appmaster.ApplicationMasterMain)
>>> > > with classpath:
>>> > >
>>> > >
>>> >
>>> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
>>> > >
>>> > >
>>> >
>>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
> >>> > > Launching main: public static void
> >>> > > org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
> >>> > > throws java.lang.Exception []
>>> > > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
>>> > > configuration not found, setting default realm to empty
>>> > > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
>>> > > DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
>>> > > configuration for dfs.data.transfer.protection
>>> > > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
>>> > > Starting service ApplicationMasterService [NEW].
>>> > > 12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
>>> > > - Broker list is empty. No Kafka producer is created.
>>> > > 12:49:30.704 [TrackerService STARTING] INFO
>>> > > o.a.t.i.appmaster.TrackerService - Tracker service started at
>>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>>> > > 12:49:30.922 [TwillZKPathService STARTING] INFO
>>> > > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
>>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>>> > > 12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
>>> > > - Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
>>> > > 12:49:31.288 [ApplicationMasterService] INFO
>>> > > o.a.t.internal.AbstractTwillService - Create live node
>>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>>> > > 12:49:31.308 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Start application master with
>>> > > spec:
>>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>>> > > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
>>> > > Service ApplicationMasterService [RUNNING] started.
>>> > > 12:49:31.344 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Request 1 container with
>>> > > capability <memory:512, vCores:1> for runnable JarRunnable
>>> > > 12:49:33.368 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Got container
>>> > > container_e29_1453498444043_0012_01_000002
>>> > > 12:49:33.369 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
>>> > > with
>>> RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd
>>> > > }
>>> > > 12:49:33.417 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.RunnableProcessLauncher - Launching in container
>>> > > container_e29_1453498444043_0012_01_000002 at
>>> > > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
>>> > > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
>>> > > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
>>> > > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
>>> > > org.apache.twill.launcher.TwillLauncher container.jar
>>> > > org.apache.twill.internal.container.TwillContainerMain true
>>> > > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
>>> > > 12:49:33.473 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
>>> > > provisioned with 1 instances.
>>> > > 12:49:35.302 [zk-client-EventThread] INFO
>>> > > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
>>> {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
>>> > > 12:49:37.484 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Container
>>> > > container_e29_1453498444043_0012_01_000002 completed with
>>> > > COMPLETE:Exception from container-launch.
>>> > > Container id: container_e29_1453498444043_0012_01_000002
>>> > > Exit code: 10
>>> > > Stack trace: ExitCodeException exitCode=10:
>>> > > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
>>> > > at org.apache.hadoop.util.Shell.run(Shell.java:487)
>>> > > at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
>>> > > at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
>>> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>>> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>>> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> > > at java.lang.Thread.run(Thread.java:745)
>>> > >
>>> > >
>>> > > Container exited with a non-zero exit code 10
>>> > > .
>>> > > 12:49:37.488 [ApplicationMasterService] WARN
>>> > > o.a.t.i.appmaster.RunningContainers - Container
>>> > > container_e29_1453498444043_0012_01_000002 exited abnormally with
>>> > > state COMPLETE, exit code 10.
>>> > > 12:49:37.496 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - All containers completed.
>>> > > Shutting down application master.
>>> > > 12:49:37.498 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Stop application master with
>>> > > spec:
>>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>>> > > 12:49:37.500 [ApplicationMasterService] INFO
>>> > > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
>>> > > JarRunnable
>>> > > 12:49:37.500 [ApplicationMasterService] INFO
>>> > > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
>>> > > JarRunnable
>>> > > 12:49:37.512 [ApplicationMasterService] INFO
>>> > > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
>>> hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>>> > > 12:49:37.512 [ApplicationMasterService] INFO
>>> > > o.a.t.internal.AbstractTwillService - Remove live node
>>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>>> > > 12:49:37.516 [ApplicationMasterService] INFO
>>> > > o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
>>> > > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
>>> > > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
>>> > > Service ApplicationMasterService [TERMINATED] completed.
>>> > > 12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
>>> > > - Broker list is empty. No Kafka producer is created.
>>> > > 12:49:40.037 [TwillZKPathService STOPPING] INFO
>>> > > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
>>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>>> > > 12:49:40.248 [TrackerService STOPPING] INFO
>>> > > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
>>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>>> > > Main class completed.
>>> > > Launcher completed
>>> > > Cleanup directory tmp/twill.launcher-1453549768670-0
>>> > >
>>> > >
>>> > >
>>> > > SLF4J: Class path contains multiple SLF4J bindings.
>>> > > SLF4J: Found binding in
>>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > > SLF4J: Found binding in
>>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > > explanation.
>>> > > SLF4J: Actual binding is of type
>>> > > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
>>> > > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
>>> > > yarn.client.max-cached-nodemanagers-proxies : 0
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
>>> > > overridden to
>>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > default.replication.factor is overridden to 1
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
>>> > > overridden to 58668
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > socket.request.max.bytes is overridden to 104857600
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > socket.send.buffer.bytes is overridden to 1048576
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > log.flush.interval.ms is overridden to 1000
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > zookeeper.connect is overridden to
>>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
>>> > > is overridden to 1
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > log.retention.hours is overridden to 24
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > socket.receive.buffer.bytes is overridden to 1048576
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > zookeeper.connection.timeout.ms is overridden to 3000
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > num.partitions is overridden to 1
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > log.flush.interval.messages is overridden to 10000
>>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>>> > > log.segment.bytes is overridden to 536870912
>>> > > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
>>> > > Failing over to rm2
>>> > > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
>>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
>>> > > directory
>>> '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
>>> > > not found, creating it.
>>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>>> > > Starting log cleaner every 600000 ms
>>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>>> > > Starting log flusher every 3000 ms with the following overrides Map()
>>> > > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
>>> > > on 0.0.0.0:58668.
>>> > > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
>>> > > 1], Started
>>> > > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
>>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>>> > > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event
>>> > > thread.
>>> > > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
>>> > > (SyncConnected)
>>> > > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
>>> > > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
>>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
>>> > > Connecting to ZK:
>>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > request.required.acks is overridden to 1
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > partitioner.class is overridden to
>>> > > org.apache.twill.internal.kafka.client.IntegerPartitioner
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > compression.codec is overridden to snappy
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > key.serializer.class is overridden to
>>> > > org.apache.twill.internal.kafka.client.IntegerEncoder
>>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>>> > > serializer.class is overridden to
>>> > > org.apache.twill.internal.kafka.client.ByteBufferEncoder
>>> > > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
>>> > > mx4j-tools.jar is not in the classpath
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Controller starting up
>>> > > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
>>> > > elected as leader
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Broker 1 starting become controller state transition
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Controller 1 incremented epoch to 1
>>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>>> > > correlation id 0 for 1 topic(s) Set(log)
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>>> > > 16/01/23 12:49:31 INFO controller.RequestSendThread:
>>> > > [Controller-1-to-broker-1-send-thread], Starting
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Currently active brokers in the cluster: Set(1)
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Currently shutting brokers in the cluster: Set()
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Current list of topics in the cluster: Set()
>>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>>> > > machine on controller 1]: No state transitions triggered since no
>>> > > partitions are assigned to brokers 1
>>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>>> > > machine on controller 1]: Invoking state change to OnlineReplica for
>>> > > replicas
>>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>>> > > machine on controller 1]: Started replica state machine with initial
>>> > > state -> Map()
>>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>>> > > state machine on Controller 1]: Started partition state machine with
>>> > > initial state -> Map()
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Broker 1 is ready to serve as the new controller with epoch 1
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Partitions being reassigned: Map()
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Partitions already reassigned: List()
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Resuming reassignment of partitions: Map()
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Partitions undergoing preferred replica election:
>>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Partitions that completed preferred replica election:
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Resuming preferred replica election for partitions:
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Starting preferred replica leader election for partitions
>>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>>> > > state machine on Controller 1]: Invoking state change to
>>> > > OnlinePartition for partitions
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>>> > > Controller startup complete
>>> > > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
>>> > > topic log with 1 partitions and replication factor 1 is successful!
>>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
>>> > > 16/01/23 12:49:31 INFO
>>> > > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
>>> > > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
>>> > > [ControllerEpochListener on 1]: Initialized controller epoch to 1 and
>>> > > zk version 0
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>>> > > hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>>> > > fetching metadata [{TopicMetadata for topic log ->
>>> > > No partition metadata for topic log due to
>>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>>> > > kafka.common.LeaderNotAvailableException
>>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>>> > > correlation id 1 for 1 topic(s) Set(log)
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>>> > > 16/01/23 12:49:31 INFO
>>> > > controller.PartitionStateMachine$TopicChangeListener:
>>> > > [TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
>>> > > topics: [Set()], new partition replica assignment [Map([log,0] ->
>>> > > List(1))]
>>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
>>> > > topic creation callback for [log,0]
>>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
>>> > > partition creation callback for [log,0]
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>>> > > hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>>> > > state machine on Controller 1]: Invoking state change to NewPartition
>>> > > for partitions [log,0]
>>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>>> > > fetching metadata [{TopicMetadata for topic log ->
>>> > > No partition metadata for topic log due to
>>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>>> > > kafka.common.LeaderNotAvailableException
>>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
>>> > > messages by topic, partition due to: Failed to fetch topic metadata
>>> > > for topic: log
>>> > > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
>>> > > before retrying send. Remaining retries = 3
>>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>>> > > machine on controller 1]: Invoking state change to NewReplica for
>>> > > replicas PartitionAndReplica(log,0,1)
>>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>>> > > state machine on Controller 1]: Invoking state change to
>>> > > OnlinePartition for partitions [log,0]
>>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>>> > > machine on controller 1]: Invoking state change to OnlineReplica for
>>> > > replicas PartitionAndReplica(log,0,1)
>>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>>> > > Broker 1]: Handling LeaderAndIsr request
>>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>>> > > ->
>>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>>> > > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
>>> > > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
>>> > > [log,0]
>>> > > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
>>> > > load of log log-0 with log end offset 0
>>> > > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
>>> > > Created log for partition [log,0] in
>>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
>>> > > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
>>> > > highwatermark file is found. Returning 0 as the highwatermark for
>>> > > partition [log,0]
>>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>>> > > Broker 1]: Handled leader and isr request
>>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>>> > > ->
>>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>>> > > correlation id 2 for 1 topic(s) Set(log)
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>>> > > hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>>> > > fetching metadata [{TopicMetadata for topic log ->
>>> > > No partition metadata for topic log due to
>>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>>> > > kafka.common.LeaderNotAvailableException
>>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>>> > > correlation id 3 for 1 topic(s) Set(log)
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>>> > > hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>>> > > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
>>> > > hdfs-ix03.se-ix.delta.prod:45454
>>> > > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
>>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
>>> > > /10.3.24.22.
>>> > > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting
>>> > down
>>> > > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper
>>> client...
>>> > > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event
>>> > > thread.
>>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
>>> > > 1], Shutting down
>>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
>>> > > 1], Shutdown completed
>>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
>>> > > Handler on Broker 1], shutting down
>>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
>>> > > Handler on Broker 1], shutted down completely
>>> > > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
>>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>>> > > Broker 1]: Shut down
>>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>>> > > [ReplicaFetcherManager on broker 1] shutting down
>>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>>> > > [ReplicaFetcherManager on broker 1] shutdown completed
>>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>>> > > Broker 1]: Shutted down completely
>>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>>> > > [Controller-1-to-broker-1-send-thread], Shutting down
>>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>>> > > [Controller-1-to-broker-1-send-thread], Stopped
>>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>>> > > [Controller-1-to-broker-1-send-thread], Shutdown completed
>>> > > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
>>> > > Controller shutdown complete
>>> > > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down
>>> > > completed
>>> > > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
>>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>>> > > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
>>> > > be successfully unregistered.
>>> > > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
>>> > > hdfs-ix03.se-ix.delta.prod:58668
>>> > > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
>>> > > producer request with correlation id 35 to broker 1 with data for
>>> > > partitions [log,0]
>>> > > java.nio.channels.ClosedByInterruptException
>>> > > at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>>> > > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
>>> > > at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
>>> > > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>>> > > at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>>> > > at kafka.utils.Utils$.read(Unknown Source)
>>> > > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
>>> > > at kafka.network.Receive$class.readCompletely(Unknown Source)
>>> > > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
>>> > > at kafka.network.BlockingChannel.receive(Unknown Source)
>>> > > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
>>> > > at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
>>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
>>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
>>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>>> > > at kafka.producer.SyncProducer.send(Unknown Source)
>>> > > at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown Source)
>>> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
>>> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
>>> > > at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>>> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>>> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>>> > > at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>>> > > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>>> > > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>>> > > at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>>> > > at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown Source)
>>> > > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
>>> > > at kafka.producer.Producer.send(Unknown Source)
>>> > > at kafka.javaapi.producer.Producer.send(Unknown Source)
>>> > > at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
>>> > > at org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
>>> > > at org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
>>> > > at org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
>>> > > at org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
>>> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>>> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> > > at java.lang.Thread.run(Thread.java:745)
>>> > > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
>>> > > before retrying send. Remaining retries = 3
>>> > > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
>>> > > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync
>>> producers
>>> > >
>>> > >
>>> > > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
>>> > > > Hi,
>>> > > >
>>> > > > It's due to a very old version of the ASM library brought in by
>>> > > hadoop/yarn.
>>> > > > Please add exclusion of asm library to all hadoop dependencies.
>>> > > >
>>> > > > <exclusion>
>>> > > >   <groupId>asm</groupId>
>>> > > >   <artifactId>asm</artifactId>
>>> > > > </exclusion>
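For illustration, the exclusion element goes inside an `<exclusions>` wrapper in each hadoop dependency; a sketch with hadoop-common (repeat the same block for the other hadoop artifacts):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <!-- Keep Hadoop's ancient asm:asm off the classpath so Twill's
         newer ASM (asm-all) is the one that gets loaded. -->
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```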
>>> > > >
>>> > > > Terence
>>> > > >
>>> > > >
>>> > > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <
>>> stoffe@gmail.com>
>>> > > > wrote:
>>> > > >
>>> > > >> Further adding the following dependencies causes another exception.
>>> > > >>
>>> > > >> <dependency>
>>> > > >>   <groupId>com.google.guava</groupId>
>>> > > >>   <artifactId>guava</artifactId>
>>> > > >>   <version>13.0</version>
>>> > > >> </dependency>
>>> > > >> <dependency>
>>> > > >>   <groupId>org.apache.hadoop</groupId>
>>> > > >>   <artifactId>hadoop-hdfs</artifactId>
>>> > > >>   <version>2.7.1</version>
>>> > > >> </dependency>
>>> > > >>
>>> > > >> Exception in thread " STARTING"
>>> > > >> java.lang.IncompatibleClassChangeError: class
>>> > > >> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
>>> > > >> has interface org.objectweb.asm.ClassVisitor as super class
>>> > > >> at java.lang.ClassLoader.defineClass1(Native Method)
>>> > > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>>> > > >> at
>>> > > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>> > > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>>> > > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>>> > > >> at java.security.AccessController.doPrivileged(Native Method)
>>> > > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>> > > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
>>> > > >> at
>>> > > >>
>>> > >
>>> >
>>> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
>>> > > >> at java.lang.Thread.run(Thread.java:745)
>>> > > >>
>>> > > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <
>>> > stoffe@gmail.com>
>>> > > >> wrote:
>>> > > >> > Adding those dependencies fails with the following exception.
>>> > > >> >
>>> > > >> > Exception in thread "main" java.lang.AbstractMethodError:
>>> > > >> >
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
>>> > > >> > at
>>> org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
>>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
>>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
>>> > > >> > at
>>> > > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>>> > > >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>>> > > >> > at
>>> > > >>
>>> > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>>> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
>>> > > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
>>> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> > > >> > at
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> > > >> > at java.lang.reflect.Method.invoke(Method.java:497)
>>> > > >> > at
>>> > > com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>>> > > >> >
>>> > > >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com>
>>> > > wrote:
>>> > > >> >> Hi,
>>> > > >> >>
>>> > > >> >> If you run it from IDE, you can simply add a dependency on hadoop
>>> > > with
>>> > > >> >> version 2.7.1. E.g. if you are using Maven, you can add the
>>> > > following to
>>> > > >> >> your pom.xml dependencies section.
>>> > > >> >>
>>> > > >> >> <dependency>
>>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>>> > > >> >>   <artifactId>hadoop-yarn-api</artifactId>
>>> > > >> >>   <version>2.7.1</version>
>>> > > >> >> </dependency>
>>> > > >> >> <dependency>
>>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>>> > > >> >>   <artifactId>hadoop-yarn-common</artifactId>
>>> > > >> >>   <version>2.7.1</version>
>>> > > >> >> </dependency>
>>> > > >> >> <dependency>
>>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>>> > > >> >>   <artifactId>hadoop-yarn-client</artifactId>
>>> > > >> >>   <version>2.7.1</version>
>>> > > >> >> </dependency>
>>> > > >> >> <dependency>
>>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>>> > > >> >>   <artifactId>hadoop-common</artifactId>
>>> > > >> >>   <version>2.7.1</version>
>>> > > >> >> </dependency>
>>> > > >> >>
>>> > > >> >> Terence
>>> > > >> >>
>>> > > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <
>>> > > stoffe@gmail.com>
>>> > > >> >> wrote:
>>> > > >> >>
>>> > > >> >>> I run it from IDE right now, but would like to create a command
>>> > line
>>> > > >> >>> app eventually.
>>> > > >> >>>
>>> > > >> >>> I should clarify that the exception above is thrown on the YARN
>>> > > node,
>>> > > >> >>> not in the IDE.
>>> > > >> >>>
>>> > > >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com>
>>> > > wrote:
>>> > > >> >>> > Hi Kristoffer,
>>> > > >> >>> >
>>> > > >> >>> > The example itself shouldn't need any modification. However,
>>> how
>>> > > do
>>> > > >> >>> > you run that class? Do you run it from IDE or from command
>>> line
>>> > > using
>>> > > >> >>> > "java" command?
>>> > > >> >>> >
>>> > > >> >>> > Terence
>>> > > >> >>> >
>>> > > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
>>> > > >> stoffe@gmail.com>
>>> > > >> >>> wrote:
>>> > > >> >>> >> Hi Terence,
>>> > > >> >>> >>
>>> > > >> >>> >> I'm quite new to Twill and not sure how to do that exactly.
>>> > Could
>>> > > >> you
>>> > > >> >>> >> show me how to modify the following example to do the same?
>>> > > >> >>> >>
>>> > > >> >>> >>
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>> > > >> >>> >>
>>> > > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <
>>> chtyim@gmail.com
>>> > >
>>> > > >> wrote:
>>> > > >> >>> >>> Hi Kristoffer,
>>> > > >> >>> >>>
>>> > > >> >>> >>> Seems like the exception comes from the YARN class
>>> > > >> "ConverterUtils". I
>>> > > >> >>> >>> believe you need to start the application with the version 2.7.1
>>> > > Hadoop
>>> > > >> >>> >>> Jars. How do you start the twill application? Usually on a
>>> > > cluster
>>> > > >> with
>>> > > >> >>> >>> hadoop installed, you can get all the hadoop jars in the
>>> > > classpath
>>> > > >> by
>>> > > >> >>> >>> running this:
>>> > > >> >>> >>>
>>> > > >> >>> >>> export CP=`hadoop classpath`
>>> > > >> >>> >>> java -cp .:$CP YourApp ...
>>> > > >> >>> >>>
>>> > > >> >>> >>> Assuming your app classes and Twill jars are in the current
>>> > > >> directory.
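Spelled out a little more, the launch might look like this (the jar and class names are placeholders, not from the thread, and the command assumes a box with the Hadoop client installed):

```shell
# Put the cluster's own Hadoop jars on the classpath, then launch the
# Twill client. "myapp.jar" and com.example.MyTwillApp are hypothetical.
CP=$(hadoop classpath)
java -cp "myapp.jar:twill-libs/*:$CP" com.example.MyTwillApp
```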
>>> > > >> >>> >>>
>>> > > >> >>> >>> Terence
>>> > > >> >>> >>>
>>> > > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
>>> > > >> stoffe@gmail.com>
>>> > > >> >>> wrote:
>>> > > >> >>> >>>> Here's the full stacktrace.
>>> > > >> >>> >>>>
>>> > > >> >>> >>>> Exception in thread "main"
>>> > > >> java.lang.reflect.InvocationTargetException
>>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>>> > Method)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> > > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> > > >> >>> >>>> at
>>> > > >> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>>> > > >> >>> >>>> Caused by: java.lang.RuntimeException:
>>> > > >> >>> >>>> java.lang.reflect.InvocationTargetException
>>> > > >> >>> >>>> at
>>> > > >> com.google.common.base.Throwables.propagate(Throwables.java:160)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>>> > > >> >>> >>>> ... 5 more
>>> > > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>>> > > >> >>> >>>> at
>>> > > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> > > >> >>> Method)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> > > >> >>> >>>> at
>>> > > java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>>> > > >> >>> >>>> ... 6 more
>>> > > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
>>> > > >> ContainerId:
>>> > > >> >>> >>>> container_e25_1453466340022_0004_01_000001
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>>> > > >> >>> >>>> ... 11 more
>>> > > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input
>>> string:
>>> > > >> "e25"
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
>>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>> > > >> >>> >>>> at
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>> > > >> >>> >>>> ... 14 more
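For reference, the NumberFormatException above can be reproduced outside Hadoop. This standalone sketch (class name is mine, not from the thread) mimics what the old `ConverterUtils` does: it splits the id on `_` and parses the token after `container` as a number, which fails on the epoch-style `e25` segment that newer YARN versions emit:

```java
// Standalone illustration of why hadoop-yarn-common 2.3.0 rejects
// epoch-style container ids. Class name is hypothetical.
public class ContainerIdParseDemo {
    public static void main(String[] args) {
        String id = "container_e25_1453466340022_0004_01_000001";
        // The old parser assumes the token after "container" is the
        // numeric cluster timestamp of the application attempt.
        String[] tokens = id.split("_");
        try {
            Long.parseLong(tokens[1]);   // "e25" is not a number
            System.out.println("parsed OK");
        } catch (NumberFormatException e) {
            System.out.println("Invalid ContainerId token: " + tokens[1]);
        }
    }
}
```

Running it prints `Invalid ContainerId token: e25`, matching the root cause in the stack trace; a 2.6+ `ConverterUtils` understands the epoch prefix, which is why launching the AM with 2.7.1 jars fixes it.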
>>> > > >> >>> >>>>
>>> > > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
>>> > > >> >>> stoffe@gmail.com> wrote:
>>> > > >> >>> >>>>> Hi
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
>>> > > >> exception
>>> > > >> >>> as
>>> > > >> >>> >>>>> soon as the application starts on the resource manager
>>> that
>>> > > >> tells me
>>> > > >> >>> >>>>> the container id cannot be parsed.
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
>>> > > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> I don't have the exact stacktrace but I recall it failing
>>> in
>>> > > >> >>> >>>>> ConverterUtils.toContainerId because it assumes that
>>> > the
>>> > > >> first
>>> > > >> >>> >>>>> token is an application attempt to be parsed as an
>>> integer.
>>> > > This
>>> > > >> >>> class
>>> > > >> >>> >>>>> resides in hadoop-yarn-common 2.3.0.
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> Is there any way to either tweak the container id or make
>>> > > twill
>>> > > >> use
>>> > > >> >>> >>>>> the 2.7.1 jar instead?
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> Cheers,
>>> > > >> >>> >>>>> -Kristoffer
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>>
>>> > > >> >>> >>>>> [1]
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>> > > >> >>>
>>> > > >>
>>> > >
>>> >
>>>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
I'm not sure where I can find those logs. There is no container or
application with this id in the yarn UI. And there is no directory
with that name on the machine that started the application.

On Sat, Jan 23, 2016 at 11:17 PM, Poorna Chandra <po...@cask.co> wrote:
> The logs pasted in your previous post are from the App Master -
> container_e29_1453498444043_0012_01_000001.
>
> The App Master starts up fine now, and launches the application container -
> container_e29_1453498444043_0012_01_000002. It is the application container
> that dies on launch. We'll need the logs for the application container to
> see why is is dying.
>
> Poorna.
>
> On Sat, Jan 23, 2016 at 1:52 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
>> I pasted both stdout and stderr in my previous post.
>> On 23 Jan 2016 at 22:50, "Poorna Chandra" <po...@cask.co> wrote:
>>
>> > Hi Kristoffer,
>> >
>> > Looks like container_e29_1453498444043_0012_01_000002 could not be
>> started
>> > due to some issue. Can you attach the stdout and stderr logs for
>> > container_e29_1453498444043_0012_01_000002?
>> >
>> > Poorna.
>> >
>> >
>> > On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <st...@gmail.com>
>> > wrote:
>> >
>> > > Yes, that almost worked. Now the application starts on Yarn and after
>> > > a while an exception is thrown and the application exits with code 10.
>> > >
>> > >
>> > > Log Type: stdout
>> > >
>> > > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
>> > >
>> > > Log Length: 21097
>> > >
>> > > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
>> > > Launch class
>> (org.apache.twill.internal.appmaster.ApplicationMasterMain)
>> > > with classpath:
>> > >
>> > >
>> >
>> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
>> > >
>> > >
>> >
>> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
>> > > Launching main: public static void
>> > >
>> > >
>> >
>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
>> > > throws java.lang.Exception []
>> > > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
>> > > configuration not found, setting default realm to empty
>> > > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
>> > > DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
>> > > configuration for dfs.data.transfer.protection
>> > > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
>> > > Starting service ApplicationMasterService [NEW].
>> > > 12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
>> > > - Broker list is empty. No Kafka producer is created.
>> > > 12:49:30.704 [TrackerService STARTING] INFO
>> > > o.a.t.i.appmaster.TrackerService - Tracker service started at
>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>> > > 12:49:30.922 [TwillZKPathService STARTING] INFO
>> > > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
>> > >
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> > > 12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
>> > > - Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
>> > > 12:49:31.288 [ApplicationMasterService] INFO
>> > > o.a.t.internal.AbstractTwillService - Create live node
>> > >
>> > >
>> >
>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> > > 12:49:31.308 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Start application master with
>> > > spec:
>> > >
>> >
>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>> > > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
>> > > Service ApplicationMasterService [RUNNING] started.
>> > > 12:49:31.344 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Request 1 container with
>> > > capability <memory:512, vCores:1> for runnable JarRunnable
>> > > 12:49:33.368 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Got container
>> > > container_e29_1453498444043_0012_01_000002
>> > > 12:49:33.369 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
>> > > with
>> > >
>> >
>> RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd
>> > > }
>> > > 12:49:33.417 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.RunnableProcessLauncher - Launching in container
>> > > container_e29_1453498444043_0012_01_000002 at
>> > > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
>> > > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
>> > > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
>> > > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
>> > > org.apache.twill.launcher.TwillLauncher container.jar
>> > > org.apache.twill.internal.container.TwillContainerMain true
>> > > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
>> > > 12:49:33.473 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
>> > > provisioned with 1 instances.
>> > > 12:49:35.302 [zk-client-EventThread] INFO
>> > > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
>> > >
>> > >
>> >
>> {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
>> > > 12:49:37.484 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Container
>> > > container_e29_1453498444043_0012_01_000002 completed with
>> > > COMPLETE:Exception from container-launch.
>> > > Container id: container_e29_1453498444043_0012_01_000002
>> > > Exit code: 10
>> > > Stack trace: ExitCodeException exitCode=10:
>> > > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
>> > > at org.apache.hadoop.util.Shell.run(Shell.java:487)
>> > > at
>> > >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
>> > > at
>> > >
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
>> > > at
>> > >
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>> > > at
>> > >
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> > > at
>> > >
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> > > at
>> > >
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> > > at java.lang.Thread.run(Thread.java:745)
>> > >
>> > >
>> > > Container exited with a non-zero exit code 10
>> > > .
>> > > 12:49:37.488 [ApplicationMasterService] WARN
>> > > o.a.t.i.appmaster.RunningContainers - Container
>> > > container_e29_1453498444043_0012_01_000002 exited abnormally with
>> > > state COMPLETE, exit code 10.
>> > > 12:49:37.496 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - All containers completed.
>> > > Shutting down application master.
>> > > 12:49:37.498 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Stop application master with
>> > > spec:
>> > >
>> >
>> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
>> > > 12:49:37.500 [ApplicationMasterService] INFO
>> > > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
>> > > JarRunnable
>> > > 12:49:37.500 [ApplicationMasterService] INFO
>> > > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
>> > > JarRunnable
>> > > 12:49:37.512 [ApplicationMasterService] INFO
>> > > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
>> > >
>> hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> > > 12:49:37.512 [ApplicationMasterService] INFO
>> > > o.a.t.internal.AbstractTwillService - Remove live node
>> > >
>> > >
>> >
>> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> > > 12:49:37.516 [ApplicationMasterService] INFO
>> > > o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
>> > > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
>> > > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
>> > > Service ApplicationMasterService [TERMINATED] completed.
>> > > 12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
>> > > - Broker list is empty. No Kafka producer is created.
>> > > 12:49:40.037 [TwillZKPathService STOPPING] INFO
>> > > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
>> > >
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
>> > > 12:49:40.248 [TrackerService STOPPING] INFO
>> > > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
>> > > http://hdfs-ix03.se-ix.delta.prod:51793
>> > > Main class completed.
>> > > Launcher completed
>> > > Cleanup directory tmp/twill.launcher-1453549768670-0
>> > >
>> > >
>> > >
>> > > SLF4J: Class path contains multiple SLF4J bindings.
>> > > SLF4J: Found binding in
>> > >
>> > >
>> >
>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > > SLF4J: Found binding in
>> > >
>> > >
>> >
>> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > > explanation.
>> > > SLF4J: Actual binding is of type
>> > > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
>> > > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
>> > > yarn.client.max-cached-nodemanagers-proxies : 0
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
>> > > overridden to
>> > >
>> >
>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > default.replication.factor is overridden to 1
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
>> > > overridden to 58668
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > socket.request.max.bytes is overridden to 104857600
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > socket.send.buffer.bytes is overridden to 1048576
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > log.flush.interval.ms is overridden to 1000
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > zookeeper.connect is overridden to
>> > >
>> > >
>> >
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
>> > > is overridden to 1
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > log.retention.hours is overridden to 24
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > socket.receive.buffer.bytes is overridden to 1048576
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > zookeeper.connection.timeout.ms is overridden to 3000
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > num.partitions is overridden to 1
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > log.flush.interval.messages is overridden to 10000
>> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
>> > > log.segment.bytes is overridden to 536870912
>> > > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
>> > > Failing over to rm2
>> > > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
>> > > directory
>> > >
>> >
>> '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
>> > > not found, creating it.
>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>> > > Starting log cleaner every 600000 ms
>> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
>> > > Starting log flusher every 3000 ms with the following overrides Map()
>> > > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
>> > > on 0.0.0.0:58668.
>> > > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
>> > > 1], Started
>> > > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
>> > >
>> > >
>> >
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> > > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event
>> > > thread.
>> > > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
>> > > (SyncConnected)
>> > > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
>> > > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
>> > > Connecting to ZK:
>> > >
>> > >
>> >
>> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > request.required.acks is overridden to 1
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > partitioner.class is overridden to
>> > > org.apache.twill.internal.kafka.client.IntegerPartitioner
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > compression.codec is overridden to snappy
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > key.serializer.class is overridden to
>> > > org.apache.twill.internal.kafka.client.IntegerEncoder
>> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
>> > > serializer.class is overridden to
>> > > org.apache.twill.internal.kafka.client.ByteBufferEncoder
>> > > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
>> > > mx4j-tools.jar is not in the classpath
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Controller starting up
>> > > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
>> > > elected as leader
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Broker 1 starting become controller state transition
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Controller 1 incremented epoch to 1
>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> > > correlation id 0 for 1 topic(s) Set(log)
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> > > 16/01/23 12:49:31 INFO controller.RequestSendThread:
>> > > [Controller-1-to-broker-1-send-thread], Starting
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Currently active brokers in the cluster: Set(1)
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Currently shutting brokers in the cluster: Set()
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Current list of topics in the cluster: Set()
>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>> > > machine on controller 1]: No state transitions triggered since no
>> > > partitions are assigned to brokers 1
>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>> > > machine on controller 1]: Invoking state change to OnlineReplica for
>> > > replicas
>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>> > > machine on controller 1]: Started replica state machine with initial
>> > > state -> Map()
>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> > > state machine on Controller 1]: Started partition state machine with
>> > > initial state -> Map()
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Broker 1 is ready to serve as the new controller with epoch 1
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Partitions being reassigned: Map()
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Partitions already reassigned: List()
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Resuming reassignment of partitions: Map()
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Partitions undergoing preferred replica election:
>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Partitions that completed preferred replica election:
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Resuming preferred replica election for partitions:
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Starting preferred replica leader election for partitions
>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> > > state machine on Controller 1]: Invoking state change to
>> > > OnlinePartition for partitions
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
>> > > Controller startup complete
>> > > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
>> > > topic log with 1 partitions and replication factor 1 is successful!
>> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
>> > > 16/01/23 12:49:31 INFO
>> > > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
>> > > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
>> > > [ControllerEpochListener on 1]: Initialized controller epoch to 1 and
>> > > zk version 0
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> > > hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> > > fetching metadata [{TopicMetadata for topic log ->
>> > > No partition metadata for topic log due to
>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> > > kafka.common.LeaderNotAvailableException
>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> > > correlation id 1 for 1 topic(s) Set(log)
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> > > 16/01/23 12:49:31 INFO
>> > > controller.PartitionStateMachine$TopicChangeListener:
>> > > [TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
>> > > topics: [Set()], new partition replica assignment [Map([log,0] ->
>> > > List(1))]
>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
>> > > topic creation callback for [log,0]
>> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
>> > > partition creation callback for [log,0]
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> > > hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> > > state machine on Controller 1]: Invoking state change to NewPartition
>> > > for partitions [log,0]
>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> > > fetching metadata [{TopicMetadata for topic log ->
>> > > No partition metadata for topic log due to
>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> > > kafka.common.LeaderNotAvailableException
>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
>> > > messages by topic, partition due to: Failed to fetch topic metadata
>> > > for topic: log
>> > > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
>> > > before retrying send. Remaining retries = 3
>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>> > > machine on controller 1]: Invoking state change to NewReplica for
>> > > replicas PartitionAndReplica(log,0,1)
>> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
>> > > state machine on Controller 1]: Invoking state change to
>> > > OnlinePartition for partitions [log,0]
>> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
>> > > machine on controller 1]: Invoking state change to OnlineReplica for
>> > > replicas PartitionAndReplica(log,0,1)
>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>> > > Broker 1]: Handling LeaderAndIsr request
>> > >
>> > >
>> >
>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>> > > ->
>> > >
>> >
>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>> > > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
>> > > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
>> > > [log,0]
>> > > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
>> > > load of log log-0 with log end offset 0
>> > > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
>> > > Created log for partition [log,0] in
>> > >
>> > >
>> >
>> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
>> > > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
>> > > highwatermark file is found. Returning 0 as the highwatermark for
>> > > partition [log,0]
>> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
>> > > Broker 1]: Handled leader and isr request
>> > >
>> > >
>> >
>> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
>> > > ->
>> > >
>> >
>> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> > > correlation id 2 for 1 topic(s) Set(log)
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
>> > > "partitions":{ "0":[ 1 ] }, "version":1 }
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> > > hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
>> > > fetching metadata [{TopicMetadata for topic log ->
>> > > No partition metadata for topic log due to
>> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
>> > > kafka.common.LeaderNotAvailableException
>> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
>> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
>> > > correlation id 3 for 1 topic(s) Set(log)
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
>> > > hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
>> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
>> > > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
>> > > hdfs-ix03.se-ix.delta.prod:45454
>> > > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
>> > > /10.3.24.22.
>> > > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting
>> > down
>> > > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper
>> client...
>> > > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event
>> > > thread.
>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
>> > > 1], Shutting down
>> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
>> > > 1], Shutdown completed
>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
>> > > Handler on Broker 1], shutting down
>> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
>> > > Handler on Broker 1], shutted down completely
>> > > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>> > > Broker 1]: Shut down
>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>> > > [ReplicaFetcherManager on broker 1] shutting down
>> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
>> > > [ReplicaFetcherManager on broker 1] shutdown completed
>> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
>> > > Broker 1]: Shutted down completely
>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> > > [Controller-1-to-broker-1-send-thread], Shutting down
>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> > > [Controller-1-to-broker-1-send-thread], Stopped
>> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
>> > > [Controller-1-to-broker-1-send-thread], Shutdown completed
>> > > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
>> > > Controller shutdown complete
>> > > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down
>> > > completed
>> > > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
>> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
>> > > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
>> > > be successfully unregistered.
>> > > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
>> > > hdfs-ix03.se-ix.delta.prod:58668
>> > > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
>> > > producer request with correlation id 35 to broker 1 with data for
>> > > partitions [log,0]
>> > > java.nio.channels.ClosedByInterruptException
>> > > at
>> > >
>> >
>> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>> > > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
>> > > at
>> > sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
>> > > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>> > > at
>> > >
>> >
>> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>> > > at kafka.utils.Utils$.read(Unknown Source)
>> > > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
>> > > at kafka.network.Receive$class.readCompletely(Unknown Source)
>> > > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown
>> Source)
>> > > at kafka.network.BlockingChannel.receive(Unknown Source)
>> > > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
>> > > at
>> > kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
>> > > Source)
>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown
>> > Source)
>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
>> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
>> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
>> > > at kafka.producer.SyncProducer.send(Unknown Source)
>> > > at
>> > >
>> >
>> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
>> > > Source)
>> > > at
>> > >
>> >
>> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>> > > at
>> > >
>> >
>> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> > > at
>> > >
>> >
>> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> > > at
>> > >
>> >
>> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>> > > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>> > > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>> > > at
>> > >
>> >
>> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>> > > at
>> > kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown
>> > > Source)
>> > > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
>> > > at kafka.producer.Producer.send(Unknown Source)
>> > > at kafka.javaapi.producer.Producer.send(Unknown Source)
>> > > at
>> > >
>> >
>> org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
>> > > at
>> > >
>> >
>> org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
>> > > at
>> > >
>> >
>> org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
>> > > at
>> > >
>> >
>> org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
>> > > at
>> > >
>> >
>> org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
>> > > at
>> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> > > at
>> > >
>> >
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> > > at
>> > >
>> >
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> > > at
>> > >
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> > > at
>> > >
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> > > at java.lang.Thread.run(Thread.java:745)
>> > > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
>> > > before retrying send. Remaining retries = 3
>> > > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
>> > > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync
>> producers
>> > >
>> > >
>> > > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
>> > > > Hi,
>> > > >
>> > > > It's due to a very old version of the ASM library that is brought in
>> > > by hadoop/yarn.
>> > > > Please add exclusion of asm library to all hadoop dependencies.
>> > > >
>> > > > <exclusion>
>> > > >   <groupId>asm</groupId>
>> > > >   <artifactId>asm</artifactId>
>> > > > </exclusion>
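For reference, in Maven an <exclusion> element has to sit inside an <exclusions> block of the dependency it applies to. A sketch of the exclusion applied to one hadoop artifact (using the 2.7.1 coordinates from later in this thread; repeat for each hadoop dependency):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <!-- Keep the ancient asm:asm pulled in transitively by hadoop off the
         classpath so Twill's newer ASM (asm-all) is used instead. -->
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```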
>> > > >
>> > > > Terence
>> > > >
>> > > >
>> > > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <
>> stoffe@gmail.com>
>> > > > wrote:
>> > > >
>> > > >> Further adding the following dependencies cause another exception.
>> > > >>
>> > > >> <dependency>
>> > > >>   <groupId>com.google.guava</groupId>
>> > > >>   <artifactId>guava</artifactId>
>> > > >>   <version>13.0</version>
>> > > >> </dependency>
>> > > >> <dependency>
>> > > >>   <groupId>org.apache.hadoop</groupId>
>> > > >>   <artifactId>hadoop-hdfs</artifactId>
>> > > >>   <version>2.7.1</version>
>> > > >> </dependency>
>> > > >>
>> > > >> Exception in thread " STARTING"
>> > > >> java.lang.IncompatibleClassChangeError: class
>> > > >> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
>> > > >> has interface org.objectweb.asm.ClassVisitor as super class
>> > > >> at java.lang.ClassLoader.defineClass1(Native Method)
>> > > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>> > > >> at
>> > > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>> > > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>> > > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>> > > >> at java.security.AccessController.doPrivileged(Native Method)
>> > > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> > > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
>> > > >> at
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
>> > > >> at
>> > > >>
>> > >
>> >
>> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
>> > > >> at java.lang.Thread.run(Thread.java:745)
>> > > >>
>> > > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <
>> > stoffe@gmail.com>
>> > > >> wrote:
>> > > >> > Add those dependencies fail with the following exception.
>> > > >> >
>> > > >> > Exception in thread "main" java.lang.AbstractMethodError:
>> > > >> >
>> > > >>
>> > >
>> >
>> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
>> > > >> > at
>> org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
>> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
>> > > >> > at
>> > > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>> > > >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>> > > >> > at
>> > > >>
>> > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
>> > > >> > at
>> > > >>
>> > >
>> >
>> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
>> > > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
>> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > > >> > at
>> > > >>
>> > >
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > > >> > at
>> > > >>
>> > >
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > > >> > at java.lang.reflect.Method.invoke(Method.java:497)
>> > > >> > at
>> > > com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>> > > >> >
>> > > >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com>
>> > > wrote:
>> > > >> >> Hi,
>> > > >> >>
>> > > >> >> If you run it from IDE, you can simply add a dependency on hadoop
>> > > with
>> > > >> >> version 2.7.1. E.g. if you are using Maven, you can add the
>> > > following to
>> > > >> >> your pom.xml dependencies section.
>> > > >> >>
>> > > >> >> <dependency>
>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> > > >> >>   <artifactId>hadoop-yarn-api</artifactId>
>> > > >> >>   <version>2.7.1</version>
>> > > >> >> </dependency>
>> > > >> >> <dependency>
>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> > > >> >>   <artifactId>hadoop-yarn-common</artifactId>
>> > > >> >>   <version>2.7.1</version>
>> > > >> >> </dependency>
>> > > >> >> <dependency>
>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> > > >> >>   <artifactId>hadoop-yarn-client</artifactId>
>> > > >> >>   <version>2.7.1</version>
>> > > >> >> </dependency>
>> > > >> >> <dependency>
>> > > >> >>   <groupId>org.apache.hadoop</groupId>
>> > > >> >>   <artifactId>hadoop-common</artifactId>
>> > > >> >>   <version>2.7.1</version>
>> > > >> >> </dependency>
>> > > >> >>
>> > > >> >> Terence
>> > > >> >>
>> > > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <
>> > > stoffe@gmail.com>
>> > > >> >> wrote:
>> > > >> >>
>> > > >> >>> I run it from IDE right now, but would like to create a command
>> > line
>> > > >> >>> app eventually.
>> > > >> >>>
>> > > >> >>> I should clarify that the exception above is thrown on the YARN
>> > > node,
>> > > >> >>> not in the IDE.
>> > > >> >>>
>> > > >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com>
>> > > wrote:
>> > > >> >>> > Hi Kristoffer,
>> > > >> >>> >
>> > > >> >>> > The example itself shouldn't need any modification. However,
>> how
>> > > do
>> > > >> >>> > you run that class? Do you run it from IDE or from command
>> line
>> > > using
>> > > >> >>> > "java" command?
>> > > >> >>> >
>> > > >> >>> > Terence
>> > > >> >>> >
>> > > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
>> > > >> stoffe@gmail.com>
>> > > >> >>> wrote:
>> > > >> >>> >> Hi Terence,
>> > > >> >>> >>
>> > > >> >>> >> I'm quite new to Twill and not sure how to do that exactly.
>> > Could
>> > > >> you
>> > > >> >>> >> show me how to modify the following example to do the same?
>> > > >> >>> >>
>> > > >> >>> >>
>> > > >> >>>
>> > > >>
>> > >
>> >
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> > > >> >>> >>
>> > > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <
>> chtyim@gmail.com
>> > >
>> > > >> wrote:
>> > > >> >>> >>> Hi Kristoffer,
>> > > >> >>> >>>
>> > > >> >>> >>> Seems like the exception comes from the YARN class
>> > > >> "ConverterUtils". I
>> > > >> >>> >>> believe you need to start the application with the version 2.7.1
>> > > Hadoop
>> > > >> >>> >>> Jars. How do you start the twill application?
>> > > cluster
>> > > >> with
>> > > >> >>> >>> hadoop installed, you can get all the hadoop jars in the
>> > > classpath
>> > > >> by
>> > > >> >>> >>> running this:
>> > > >> >>> >>>
>> > > >> >>> >>> export CP=`hadoop classpath`
>> > > >> >>> >>> java -cp .:$CP YourApp ...
>> > > >> >>> >>>
>> > > >> >>> >>> Assuming your app classes and Twill jars are in the current
>> > > >> directory.
>> > > >> >>> >>>
>> > > >> >>> >>> Terence
>> > > >> >>> >>>
>> > > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
>> > > >> stoffe@gmail.com>
>> > > >> >>> wrote:
>> > > >> >>> >>>> Here's the full stacktrace.
>> > > >> >>> >>>>
>> > > >> >>> >>>> Exception in thread "main"
>> > > >> java.lang.reflect.InvocationTargetException
>> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> > Method)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>> > > >> >>> >>>> at
>> > > >> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>> > > >> >>> >>>> Caused by: java.lang.RuntimeException:
>> > > >> >>> >>>> java.lang.reflect.InvocationTargetException
>> > > >> >>> >>>> at
>> > > >> com.google.common.base.Throwables.propagate(Throwables.java:160)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>> > > >> >>> >>>> ... 5 more
>> > > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>> > > >> >>> >>>> at
>> > > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> > > >> >>> Method)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> > > >> >>> >>>> at
>> > > java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>> > > >> >>> >>>> ... 6 more
>> > > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
>> > > >> ContainerId:
>> > > >> >>> >>>> container_e25_1453466340022_0004_01_000001
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>> > > >> >>> >>>> ... 11 more
>> > > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input
>> string:
>> > > >> "e25"
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
>> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>> > > >> >>> >>>> at
>> > > >> >>>
>> > > >>
>> > >
>> >
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>> > > >> >>> >>>> ... 14 more
>> > > >> >>> >>>>
>> > > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
>> > > >> >>> stoffe@gmail.com> wrote:
>> > > >> >>> >>>>> Hi
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
>> > > >> exception
>> > > >> >>> as
>> > > >> >>> >>>>> soon as the application starts on the resource manager
>> that
>> > > >> tells me
>> > > >> >>> >>>>> the container id cannot be parsed.
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
>> > > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> I don't have the exact stacktrace but I recall it failing
>> in
>> > > >> >>> >>>>> ConverterUtils.toContainerId because it assumes that
>> > the
>> > > >> first
>> > > >> >>> >>>>> token is an application attempt to be parsed as an
>> integer.
>> > > This
>> > > >> >>> class
>> > > >> >>> >>>>> resides in hadoop-yarn-common 2.3.0.
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> Is there any way to either tweak the container id or make
>> > > twill
>> > > >> use
>> > > >> >>> >>>>> the 2.7.1 jar instead?
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> Cheers,
>> > > >> >>> >>>>> -Kristoffer
>> > > >> >>> >>>>>
>> > > >> >>> >>>>>
>> > > >> >>> >>>>> [1]
>> > > >> >>>
>> > > >>
>> > >
>> >
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> > > >> >>>
>> > > >>
>> > >
>> >
>>

Re: Yarn 2.7.1

Posted by Poorna Chandra <po...@cask.co>.
The logs pasted in your previous post are from the App Master -
container_e29_1453498444043_0012_01_000001.

The App Master starts up fine now, and launches the application container -
container_e29_1453498444043_0012_01_000002. It is the application container
that dies on launch. We'll need the logs for the application container to
see why it is dying.
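If log aggregation is enabled on the cluster, the failed container's stdout/stderr can also be pulled with the yarn CLI instead of hunting for the files on the node. A sketch using the application id from this thread (the script only prints the command, since running it needs a live cluster):

```shell
#!/bin/sh
# Application id taken from this thread; substitute your own.
APP_ID=application_1453498444043_0012
# Printed rather than executed here because `yarn logs` requires a live
# cluster with log aggregation enabled. Its output includes the logs of
# every container of the application, including the failed
# container_e29_1453498444043_0012_01_000002.
echo "yarn logs -applicationId $APP_ID"
```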

Poorna.

On Sat, Jan 23, 2016 at 1:52 PM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> I pasted both stdout and stderr in my previous post.
> On 23 Jan 2016, 22:50, "Poorna Chandra" <po...@cask.co> wrote:
>
> > Hi Kristoffer,
> >
> > Looks like container_e29_1453498444043_0012_01_000002 could not be
> started
> > due to some issue. Can you attach the stdout and stderr logs for
> > container_e29_1453498444043_0012_01_000002?
> >
> > Poorna.
> >
> >
> > On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <st...@gmail.com>
> > wrote:
> >
> > > Yes, that almost worked. Now the application starts on Yarn and after
> > > a while an exception is thrown and the application exits with code 10.
> > >
> > >
> > > Log Type: stdout
> > >
> > > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
> > >
> > > Log Length: 21097
> > >
> > > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
> > > Launch class
> (org.apache.twill.internal.appmaster.ApplicationMasterMain)
> > > with classpath:
> > >
> > >
> >
> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
> > >
> > >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
> > > Launching main: public static void
> > >
> > >
> >
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
> > > throws java.lang.Exception []
> > > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
> > > configuration not found, setting default realm to empty
> > > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
> > > DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
> > > configuration for dfs.data.transfer.protection
> > > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
> > > Starting service ApplicationMasterService [NEW].
> > > 12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> > > - Broker list is empty. No Kafka producer is created.
> > > 12:49:30.704 [TrackerService STARTING] INFO
> > > o.a.t.i.appmaster.TrackerService - Tracker service started at
> > > http://hdfs-ix03.se-ix.delta.prod:51793
> > > 12:49:30.922 [TwillZKPathService STARTING] INFO
> > > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > > 12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
> > > - Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
> > > 12:49:31.288 [ApplicationMasterService] INFO
> > > o.a.t.internal.AbstractTwillService - Create live node
> > > zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > > 12:49:31.308 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Start application master with
> > > spec:
> > > {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> > > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
> > > Service ApplicationMasterService [RUNNING] started.
> > > 12:49:31.344 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Request 1 container with
> > > capability <memory:512, vCores:1> for runnable JarRunnable
> > > 12:49:33.368 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Got container
> > > container_e29_1453498444043_0012_01_000002
> > > 12:49:33.369 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
> > > with
> > > RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd}
> > > 12:49:33.417 [ApplicationMasterService] INFO
> > > o.a.t.i.a.RunnableProcessLauncher - Launching in container
> > > container_e29_1453498444043_0012_01_000002 at
> > > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
> > > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
> > > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
> > > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
> > > org.apache.twill.launcher.TwillLauncher container.jar
> > > org.apache.twill.internal.container.TwillContainerMain true
> > > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
> > > 12:49:33.473 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
> > > provisioned with 1 instances.
> > > 12:49:35.302 [zk-client-EventThread] INFO
> > > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
> > > {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
> > > 12:49:37.484 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Container
> > > container_e29_1453498444043_0012_01_000002 completed with
> > > COMPLETE:Exception from container-launch.
> > > Container id: container_e29_1453498444043_0012_01_000002
> > > Exit code: 10
> > > Stack trace: ExitCodeException exitCode=10:
> > > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
> > > at org.apache.hadoop.util.Shell.run(Shell.java:487)
> > > at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
> > > at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> > > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > > at java.lang.Thread.run(Thread.java:745)
> > >
> > > Container exited with a non-zero exit code 10.
> > > 12:49:37.488 [ApplicationMasterService] WARN
> > > o.a.t.i.appmaster.RunningContainers - Container
> > > container_e29_1453498444043_0012_01_000002 exited abnormally with
> > > state COMPLETE, exit code 10.
> > > 12:49:37.496 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - All containers completed.
> > > Shutting down application master.
> > > 12:49:37.498 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Stop application master with
> > > spec:
> > > {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> > > 12:49:37.500 [ApplicationMasterService] INFO
> > > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
> > > JarRunnable
> > > 12:49:37.500 [ApplicationMasterService] INFO
> > > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
> > > JarRunnable
> > > 12:49:37.512 [ApplicationMasterService] INFO
> > > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
> > >
> hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > > 12:49:37.512 [ApplicationMasterService] INFO
> > > o.a.t.internal.AbstractTwillService - Remove live node
> > > zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > > 12:49:37.516 [ApplicationMasterService] INFO
> > > o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
> > > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
> > > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
> > > Service ApplicationMasterService [TERMINATED] completed.
> > > 12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> > > - Broker list is empty. No Kafka producer is created.
> > > 12:49:40.037 [TwillZKPathService STOPPING] INFO
> > > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > > 12:49:40.248 [TrackerService STOPPING] INFO
> > > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
> > > http://hdfs-ix03.se-ix.delta.prod:51793
> > > Main class completed.
> > > Launcher completed
> > > Cleanup directory tmp/twill.launcher-1453549768670-0
> > >
> > > SLF4J: Class path contains multiple SLF4J bindings.
> > > SLF4J: Found binding in
> > > [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > > SLF4J: Found binding in
> > > [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > > explanation.
> > > SLF4J: Actual binding is of type
> > > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
> > > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
> > > yarn.client.max-cached-nodemanagers-proxies : 0
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
> > > overridden to
> > > /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > default.replication.factor is overridden to 1
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
> > > overridden to 58668
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > socket.request.max.bytes is overridden to 104857600
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > socket.send.buffer.bytes is overridden to 1048576
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > log.flush.interval.ms is overridden to 1000
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > zookeeper.connect is overridden to
> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
> > > is overridden to 1
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > log.retention.hours is overridden to 24
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > socket.receive.buffer.bytes is overridden to 1048576
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > zookeeper.connection.timeout.ms is overridden to 3000
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > num.partitions is overridden to 1
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > log.flush.interval.messages is overridden to 10000
> > > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > > log.segment.bytes is overridden to 536870912
> > > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
> > > Failing over to rm2
> > > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
> > > directory
> > > '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
> > > not found, creating it.
> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> > > Starting log cleaner every 600000 ms
> > > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> > > Starting log flusher every 3000 ms with the following overrides Map()
> > > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
> > > on 0.0.0.0:58668.
> > > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
> > > 1], Started
> > > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event
> > > thread.
> > > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
> > > (SyncConnected)
> > > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
> > > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
> > > Connecting to ZK:
> > > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > request.required.acks is overridden to 1
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > partitioner.class is overridden to
> > > org.apache.twill.internal.kafka.client.IntegerPartitioner
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > compression.codec is overridden to snappy
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > key.serializer.class is overridden to
> > > org.apache.twill.internal.kafka.client.IntegerEncoder
> > > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > > serializer.class is overridden to
> > > org.apache.twill.internal.kafka.client.ByteBufferEncoder
> > > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
> > > mx4j-tools.jar is not in the classpath
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Controller starting up
> > > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
> > > elected as leader
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Broker 1 starting become controller state transition
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Controller 1 incremented epoch to 1
> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > > correlation id 0 for 1 topic(s) Set(log)
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > > 16/01/23 12:49:31 INFO controller.RequestSendThread:
> > > [Controller-1-to-broker-1-send-thread], Starting
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Currently active brokers in the cluster: Set(1)
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Currently shutting brokers in the cluster: Set()
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Current list of topics in the cluster: Set()
> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > > machine on controller 1]: No state transitions triggered since no
> > > partitions are assigned to brokers 1
> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > > machine on controller 1]: Invoking state change to OnlineReplica for
> > > replicas
> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > > machine on controller 1]: Started replica state machine with initial
> > > state -> Map()
> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > > state machine on Controller 1]: Started partition state machine with
> > > initial state -> Map()
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Broker 1 is ready to serve as the new controller with epoch 1
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Partitions being reassigned: Map()
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Partitions already reassigned: List()
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Resuming reassignment of partitions: Map()
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Partitions undergoing preferred replica election:
> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Partitions that completed preferred replica election:
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Resuming preferred replica election for partitions:
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Starting preferred replica leader election for partitions
> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > > state machine on Controller 1]: Invoking state change to
> > > OnlinePartition for partitions
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > > Controller startup complete
> > > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
> > > topic log with 1 partitions and replication factor 1 is successful!
> > > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
> > > 16/01/23 12:49:31 INFO
> > > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
> > > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
> > > [ControllerEpochListener on 1]: Initialized controller epoch to 1 and
> > > zk version 0
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > > hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > > fetching metadata [{TopicMetadata for topic log ->
> > > No partition metadata for topic log due to
> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > > kafka.common.LeaderNotAvailableException
> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > > correlation id 1 for 1 topic(s) Set(log)
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > > 16/01/23 12:49:31 INFO
> > > controller.PartitionStateMachine$TopicChangeListener:
> > > [TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
> > > topics: [Set()], new partition replica assignment [Map([log,0] ->
> > > List(1))]
> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> > > topic creation callback for [log,0]
> > > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> > > partition creation callback for [log,0]
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > > hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > > state machine on Controller 1]: Invoking state change to NewPartition
> > > for partitions [log,0]
> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > > fetching metadata [{TopicMetadata for topic log ->
> > > No partition metadata for topic log due to
> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > > kafka.common.LeaderNotAvailableException
> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
> > > messages by topic, partition due to: Failed to fetch topic metadata
> > > for topic: log
> > > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
> > > before retrying send. Remaining retries = 3
> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > > machine on controller 1]: Invoking state change to NewReplica for
> > > replicas PartitionAndReplica(log,0,1)
> > > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > > state machine on Controller 1]: Invoking state change to
> > > OnlinePartition for partitions [log,0]
> > > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > > machine on controller 1]: Invoking state change to OnlineReplica for
> > > replicas PartitionAndReplica(log,0,1)
> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> > > Broker 1]: Handling LeaderAndIsr request
> > > Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0) -> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> > > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
> > > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
> > > [log,0]
> > > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
> > > load of log log-0 with log end offset 0
> > > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
> > > Created log for partition [log,0] in
> > > /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
> > > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
> > > highwatermark file is found. Returning 0 as the highwatermark for
> > > partition [log,0]
> > > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> > > Broker 1]: Handled leader and isr request
> > > Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0) -> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > > correlation id 2 for 1 topic(s) Set(log)
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > > "partitions":{ "0":[ 1 ] }, "version":1 }
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > > hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > > fetching metadata [{TopicMetadata for topic log ->
> > > No partition metadata for topic log due to
> > > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > > kafka.common.LeaderNotAvailableException
> > > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > > correlation id 3 for 1 topic(s) Set(log)
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > > hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
> > > hdfs-ix03.se-ix.delta.prod:45454
> > > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> > > /10.3.24.22.
> > > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting down
> > > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper client...
> > > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event
> > > thread.
> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> > > 1], Shutting down
> > > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> > > 1], Shutdown completed
> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> > > Handler on Broker 1], shutting down
> > > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> > > Handler on Broker 1], shutted down completely
> > > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> > > Broker 1]: Shut down
> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> > > [ReplicaFetcherManager on broker 1] shutting down
> > > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> > > [ReplicaFetcherManager on broker 1] shutdown completed
> > > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> > > Broker 1]: Shutted down completely
> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > > [Controller-1-to-broker-1-send-thread], Shutting down
> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > > [Controller-1-to-broker-1-send-thread], Stopped
> > > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > > [Controller-1-to-broker-1-send-thread], Shutdown completed
> > > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
> > > Controller shutdown complete
> > > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down
> > > completed
> > > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
> > > proxy : hdfs-ix03.se-ix.delta.prod:45454
> > > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
> > > be successfully unregistered.
> > > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
> > > hdfs-ix03.se-ix.delta.prod:58668
> > > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
> > > producer request with correlation id 35 to broker 1 with data for
> > > partitions [log,0]
> > > java.nio.channels.ClosedByInterruptException
> > > at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> > > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
> > > at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
> > > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> > > at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> > > at kafka.utils.Utils$.read(Unknown Source)
> > > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
> > > at kafka.network.Receive$class.readCompletely(Unknown Source)
> > > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
> > > at kafka.network.BlockingChannel.receive(Unknown Source)
> > > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
> > > at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown Source)
> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> > > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> > > at kafka.metrics.KafkaTimer.time(Unknown Source)
> > > at kafka.producer.SyncProducer.send(Unknown Source)
> > > at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown Source)
> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
> > > at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown Source)
> > > at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> > > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> > > at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> > > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> > > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> > > at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> > > at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown Source)
> > > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
> > > at kafka.producer.Producer.send(Unknown Source)
> > > at kafka.javaapi.producer.Producer.send(Unknown Source)
> > > at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
> > > at org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
> > > at org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
> > > at org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
> > > at org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> > > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > > at java.lang.Thread.run(Thread.java:745)
> > > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
> > > before retrying send. Remaining retries = 3
> > > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
> > > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync producers
> > >
> > >
> > > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
> > > > Hi,
> > > >
> > > > It's due to a very old version of the ASM library that is brought in
> > > > by hadoop/yarn. Please add an exclusion of the asm library to all
> > > > hadoop dependencies.
> > > >
> > > > <exclusion>
> > > >   <groupId>asm</groupId>
> > > >   <artifactId>asm</artifactId>
> > > > </exclusion>
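
For reference, in a full POM the `<exclusion>` element sits inside an `<exclusions>` wrapper on each Hadoop dependency. A minimal sketch, assuming hadoop-yarn-common 2.7.1 (substitute whichever Hadoop artifacts and version your build already declares):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <exclusion>
      <!-- Hadoop drags in asm 3.x, which is binary-incompatible with the
           asm 4+ ClassVisitor (a class, no longer an interface) that
           Twill's Dependencies$DependencyClassVisitor extends. -->
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

The same `<exclusions>` block would be repeated on every hadoop-* dependency in the POM.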
> > > >
> > > > Terence
> > > >
> > > >
> > > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >
> > > >> Further, adding the following dependencies causes another exception.
> > > >>
> > > >> <dependency>
> > > >>   <groupId>com.google.guava</groupId>
> > > >>   <artifactId>guava</artifactId>
> > > >>   <version>13.0</version>
> > > >> </dependency>
> > > >> <dependency>
> > > >>   <groupId>org.apache.hadoop</groupId>
> > > >>   <artifactId>hadoop-hdfs</artifactId>
> > > >>   <version>2.7.1</version>
> > > >> </dependency>
> > > >>
> > > >> Exception in thread " STARTING" java.lang.IncompatibleClassChangeError: class org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor has interface org.objectweb.asm.ClassVisitor as super class
> > > >> at java.lang.ClassLoader.defineClass1(Native Method)
> > > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> > > >> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> > > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> > > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> > > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> > > >> at java.security.AccessController.doPrivileged(Native Method)
> > > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> > > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> > > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> > > >> at org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
> > > >> at org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
> > > >> at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
> > > >> at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
> > > >> at org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
> > > >> at org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
> > > >> at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
> > > >> at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
> > > >> at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
> > > >> at org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
> > > >> at org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
> > > >> at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
> > > >> at java.lang.Thread.run(Thread.java:745)
> > > >>
> > > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >> > Adding those dependencies fails with the following exception.
> > > >> >
> > > >> > Exception in thread "main" java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> > > >> > at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> > > >> > at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> > > >> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> > > >> > at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> > > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> > > >> > at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> > > >> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> > > >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> > > >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> > > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> > > >> > at org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> > > >> > at org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> > > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > >> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >> > at java.lang.reflect.Method.invoke(Method.java:497)
> > > >> > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> > > >> >
> > > >> >> On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com> wrote:
> > > >> >> Hi,
> > > >> >>
> > > >> >> If you run it from IDE, you and simply add a dependency on hadoop
> > > with
> > > >> >> version 2.7.1. E.g. if you are using Maven, you can add the
> > > following to
> > > >> >> your pom.xml dependencies section.
> > > >> >>
> > > >> >> <dependency>
> > > >> >>   <groupId>org.apache.hadoop</groupId>
> > > >> >>   <artifactId>hadoop-yarn-api</artifactId>
> > > >> >>   <version>2.7.1</version>
> > > >> >> </dependency>
> > > >> >> <dependency>
> > > >> >>   <groupId>org.apache.hadoop</groupId>
> > > >> >>   <artifactId>hadoop-yarn-common</artifactId>
> > > >> >>   <version>2.7.1</version>
> > > >> >> </dependency>
> > > >> >> <dependency>
> > > >> >>   <groupId>org.apache.hadoop</groupId>
> > > >> >>   <artifactId>hadoop-yarn-client</artifactId>
> > > >> >>   <version>2.7.1</version>
> > > >> >> </dependency>
> > > >> >> <dependency>
> > > >> >>   <groupId>org.apache.hadoop</groupId>
> > > >> >>   <artifactId>hadoop-common</artifactId>
> > > >> >>   <version>2.7.1</version>
> > > >> >> </dependency>
> > > >> >>
> > > >> >> Terence
> > > >> >>
> > > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >> >>
> > > >> >>> I run it from the IDE right now, but would like to create a command
> > > >> >>> line app eventually.
> > > >> >>>
> > > >> >>> I should clarify that the exception above is thrown on the YARN
> > > >> >>> node, not in the IDE.
> > > >> >>>
> > > >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
> > > >> >>> > Hi Kristoffer,
> > > >> >>> >
> > > >> >>> > The example itself shouldn't need any modification. However, how do
> > > >> >>> > you run that class? Do you run it from the IDE or from the command
> > > >> >>> > line using the "java" command?
> > > >> >>> >
> > > >> >>> > Terence
> > > >> >>> >
> > > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >> >>> >> Hi Terence,
> > > >> >>> >>
> > > >> >>> >> I'm quite new to Twill and not sure how to do that exactly. Could
> > > >> >>> >> you show me how to modify the following example to do the same?
> > > >> >>> >>
> > > >> >>> >>
> > > >> >>>
> > > >>
> > >
> >
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> > > >> >>> >>
> > > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <chtyim@gmail.com> wrote:
> > > >> >>> >>> Hi Kristoffer,
> > > >> >>> >>>
> > > >> >>> >>> Seems like the exception comes from the YARN class
> > > >> >>> >>> "ConverterUtils". I believe you need to start the application with
> > > >> >>> >>> the version 2.7.1 Hadoop jars. How do you start the Twill
> > > >> >>> >>> application? Usually on a cluster with hadoop installed, you can
> > > >> >>> >>> get all the hadoop jars in the classpath by running this:
> > > >> >>> >>>
> > > >> >>> >>> export CP=`hadoop classpath`
> > > >> >>> >>> java -cp .:$CP YourApp ...
> > > >> >>> >>>
> > > >> >>> >>> Assuming your app classes and Twill jars are in the current
> > > >> >>> >>> directory.
> > > >> >>> >>>
> > > >> >>> >>> Terence
> > > >> >>> >>>
> > > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >> >>> >>>> Here's the full stacktrace.
> > > >> >>> >>>>
> > > >> >>> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > >> >>> >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> > > >> >>> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> > > >> >>> >>>> Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
> > > >> >>> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> > > >> >>> >>>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> > > >> >>> >>>> ... 5 more
> > > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > > >> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> > > >> >>> >>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > > >> >>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> > > >> >>> >>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> > > >> >>> >>>> ... 6 more
> > > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId: container_e25_1453466340022_0004_01_000001
> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> > > >> >>> >>>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> > > >> >>> >>>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> > > >> >>> >>>> ... 11 more
> > > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
> > > >> >>> >>>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
> > > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> > > >> >>> >>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> > > >> >>> >>>> ... 14 more
> > > >> >>> >>>>
> > > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <stoffe@gmail.com> wrote:
> > > >> >>> >>>>> Hi
> > > >> >>> >>>>>
> > > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
> > > >> >>> >>>>> exception as soon as the application starts on the resource
> > > >> >>> >>>>> manager that tells me the container id cannot be parsed.
> > > >> >>> >>>>>
> > > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
> > > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
> > > >> >>> >>>>>
> > > >> >>> >>>>> I don't have the exact stacktrace but I recall it failing in
> > > >> >>> >>>>> ConverterUtils.toContainerId because it assumes that the first
> > > >> >>> >>>>> token is an application attempt to be parsed as an integer.
> > > >> >>> >>>>> This class resides in hadoop-yarn-common 2.3.0.
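[Editor's note: the failure mode described here can be reproduced in isolation. A minimal sketch — assumption: this only mirrors the pre-2.6 ConverterUtils parsing behavior, it is not the actual Hadoop source — showing why an id carrying an epoch token such as "e04" breaks a parser that expects numeric fields:]

```java
// Hypothetical stand-in for the old ConverterUtils.toContainerId logic:
// it splits the id on '_' and parses the fields as numbers, so the epoch
// token ("e04") added by newer YARN versions triggers NumberFormatException.
public class ContainerIdParseDemo {
    public static void main(String[] args) {
        String id = "container_e04_1427159778706_0002_01_000001";
        String[] parts = id.split("_");
        try {
            // Old-format ids carry the cluster timestamp here; new-format
            // ids insert the epoch token first, which is not a number.
            long timestamp = Long.parseLong(parts[1]);
            System.out.println("parsed timestamp: " + timestamp);
        } catch (NumberFormatException e) {
            System.out.println("cannot parse token: " + parts[1]);  // prints "cannot parse token: e04"
        }
    }
}
```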
> > > >> >>> >>>>>
> > > >> >>> >>>>> Is there any way to either tweak the container id or make
> > > twill
> > > >> use
> > > >> >>> >>>>> the 2.7.1 jar instead?
> > > >> >>> >>>>>
> > > >> >>> >>>>> Cheers,
> > > >> >>> >>>>> -Kristoffer
> > > >> >>> >>>>>
> > > >> >>> >>>>>
> > > >> >>> >>>>> [1]
> > > >> >>>
> > > >>
> > >
> >
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> > > >> >>>
> > > >>
> > >
> >
>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
I pasted both stdout and stderr in my previous post.
On 23 Jan 2016 at 22:50, "Poorna Chandra" <po...@cask.co> wrote:

> Hi Kristoffer,
>
> Looks like container_e29_1453498444043_0012_01_000002 could not be started
> due to some issue. Can you attach the stdout and stderr logs for
> container_e29_1453498444043_0012_01_000002?
>
> Poorna.
>
>
> On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
> > Yes, that almost worked. Now the application starts on Yarn and after
> > a while an exception is thrown and the application exits with code 10.
> >
> >
> >
> > Log Type: stdout
> >
> > Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
> >
> > Log Length: 21097
> >
> > UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
> > Launch class (org.apache.twill.internal.appmaster.ApplicationMasterMain)
> > with classpath:
> >
> >
> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
> >
> >
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
> > Launching main: public static void org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[]) throws java.lang.Exception []
> > 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
> > configuration not found, setting default realm to empty
> > 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
> > DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
> > configuration for dfs.data.transfer.protection
> > 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
> > Starting service ApplicationMasterService [NEW].
> > 12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> > - Broker list is empty. No Kafka producer is created.
> > 12:49:30.704 [TrackerService STARTING] INFO
> > o.a.t.i.appmaster.TrackerService - Tracker service started at
> > http://hdfs-ix03.se-ix.delta.prod:51793
> > 12:49:30.922 [TwillZKPathService STARTING] INFO
> > o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
> > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > 12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
> > - Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
> > 12:49:31.288 [ApplicationMasterService] INFO
> > o.a.t.internal.AbstractTwillService - Create live node
> >
> >
> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > 12:49:31.308 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Start application master with
> > spec:
> >
> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> > 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
> > Service ApplicationMasterService [RUNNING] started.
> > 12:49:31.344 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Request 1 container with
> > capability <memory:512, vCores:1> for runnable JarRunnable
> > 12:49:33.368 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Got container
> > container_e29_1453498444043_0012_01_000002
> > 12:49:33.369 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
> > with RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd}
> > 12:49:33.417 [ApplicationMasterService] INFO
> > o.a.t.i.a.RunnableProcessLauncher - Launching in container
> > container_e29_1453498444043_0012_01_000002 at
> > hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
> > -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
> > -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
> > launcher.jar:$HADOOP_CONF_DIR -Xmx359m
> > org.apache.twill.launcher.TwillLauncher container.jar
> > org.apache.twill.internal.container.TwillContainerMain true
> > 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
> > 12:49:33.473 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
> > provisioned with 1 instances.
> > 12:49:35.302 [zk-client-EventThread] INFO
> > o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
> >
> >
> {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
> > 12:49:37.484 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Container
> > container_e29_1453498444043_0012_01_000002 completed with
> > COMPLETE:Exception from container-launch.
> > Container id: container_e29_1453498444043_0012_01_000002
> > Exit code: 10
> > Stack trace: ExitCodeException exitCode=10:
> > at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
> > at org.apache.hadoop.util.Shell.run(Shell.java:487)
> > at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
> > at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 10
> > .
> > 12:49:37.488 [ApplicationMasterService] WARN
> > o.a.t.i.appmaster.RunningContainers - Container
> > container_e29_1453498444043_0012_01_000002 exited abnormally with
> > state COMPLETE, exit code 10.
> > 12:49:37.496 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - All containers completed.
> > Shutting down application master.
> > 12:49:37.498 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Stop application master with
> > spec:
> >
> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> > 12:49:37.500 [ApplicationMasterService] INFO
> > o.a.t.i.appmaster.RunningContainers - Stopping all instances of
> > JarRunnable
> > 12:49:37.500 [ApplicationMasterService] INFO
> > o.a.t.i.appmaster.RunningContainers - Terminated all instances of
> > JarRunnable
> > 12:49:37.512 [ApplicationMasterService] INFO
> > o.a.t.i.a.ApplicationMasterService - Application directory deleted:
> > hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > 12:49:37.512 [ApplicationMasterService] INFO
> > o.a.t.internal.AbstractTwillService - Remove live node
> >
> >
> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > 12:49:37.516 [ApplicationMasterService] INFO
> > o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
> > with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
> > 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
> > Service ApplicationMasterService [TERMINATED] completed.
> > 12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> > - Broker list is empty. No Kafka producer is created.
> > 12:49:40.037 [TwillZKPathService STOPPING] INFO
> > o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
> > zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> > 12:49:40.248 [TrackerService STOPPING] INFO
> > o.a.t.i.appmaster.TrackerService - Tracker service stopped at
> > http://hdfs-ix03.se-ix.delta.prod:51793
> > Main class completed.
> > Launcher completed
> > Cleanup directory tmp/twill.launcher-1453549768670-0
> >
> >
> >
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> >
> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> >
> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type
> > [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
> > 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
> > yarn.client.max-cached-nodemanagers-proxies : 0
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
> > overridden to
> >
> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > default.replication.factor is overridden to 1
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
> > overridden to 58668
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > socket.request.max.bytes is overridden to 104857600
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > socket.send.buffer.bytes is overridden to 1048576
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > log.flush.interval.ms is overridden to 1000
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > zookeeper.connect is overridden to
> >
> >
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
> > is overridden to 1
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > log.retention.hours is overridden to 24
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > socket.receive.buffer.bytes is overridden to 1048576
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > zookeeper.connection.timeout.ms is overridden to 3000
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > num.partitions is overridden to 1
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > log.flush.interval.messages is overridden to 10000
> > 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> > log.segment.bytes is overridden to 536870912
> > 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
> > Failing over to rm2
> > 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
> > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
> > directory
> >
> '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
> > not found, creating it.
> > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> > Starting log cleaner every 600000 ms
> > 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> > Starting log flusher every 3000 ms with the following overrides Map()
> > 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
> > on 0.0.0.0:58668.
> > 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
> > 1], Started
> > 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
> >
> >
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event
> > thread.
> > 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
> > (SyncConnected)
> > 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
> > /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
> > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
> > Connecting to ZK:
> >
> >
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > request.required.acks is overridden to 1
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > partitioner.class is overridden to
> > org.apache.twill.internal.kafka.client.IntegerPartitioner
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > compression.codec is overridden to snappy
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > key.serializer.class is overridden to
> > org.apache.twill.internal.kafka.client.IntegerEncoder
> > 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> > serializer.class is overridden to
> > org.apache.twill.internal.kafka.client.ByteBufferEncoder
> > 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
> > mx4j-tools.jar is not in the classpath
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Controller starting up
> > 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
> > elected as leader
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Broker 1 starting become controller state transition
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Controller 1 incremented epoch to 1
> > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > correlation id 0 for 1 topic(s) Set(log)
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > 16/01/23 12:49:31 INFO controller.RequestSendThread:
> > [Controller-1-to-broker-1-send-thread], Starting
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Currently active brokers in the cluster: Set(1)
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Currently shutting brokers in the cluster: Set()
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Current list of topics in the cluster: Set()
> > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > machine on controller 1]: No state transitions triggered since no
> > partitions are assigned to brokers 1
> > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > machine on controller 1]: Invoking state change to OnlineReplica for
> > replicas
> > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > machine on controller 1]: Started replica state machine with initial
> > state -> Map()
> > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > state machine on Controller 1]: Started partition state machine with
> > initial state -> Map()
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Broker 1 is ready to serve as the new controller with epoch 1
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Partitions being reassigned: Map()
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Partitions already reassigned: List()
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Resuming reassignment of partitions: Map()
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Partitions undergoing preferred replica election:
> > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > "partitions":{ "0":[ 1 ] }, "version":1 }
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Partitions that completed preferred replica election:
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Resuming preferred replica election for partitions:
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Starting preferred replica leader election for partitions
> > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > state machine on Controller 1]: Invoking state change to
> > OnlinePartition for partitions
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> > Controller startup complete
> > 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
> > topic log with 1 partitions and replication factor 1 is successful!
> > 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
> > 16/01/23 12:49:31 INFO
> > server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
> > 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
> > [ControllerEpochListener on 1]: Initialized controller epoch to 1 and
> > zk version 0
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > fetching metadata [{TopicMetadata for topic log ->
> > No partition metadata for topic log due to
> > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > kafka.common.LeaderNotAvailableException
> > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > correlation id 1 for 1 topic(s) Set(log)
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > 16/01/23 12:49:31 INFO
> > controller.PartitionStateMachine$TopicChangeListener:
> > [TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
> > topics: [Set()], new partition replica assignment [Map([log,0] ->
> > List(1))]
> > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > "partitions":{ "0":[ 1 ] }, "version":1 }
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> > topic creation callback for [log,0]
> > 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> > partition creation callback for [log,0]
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > state machine on Controller 1]: Invoking state change to NewPartition
> > for partitions [log,0]
> > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > fetching metadata [{TopicMetadata for topic log ->
> > No partition metadata for topic log due to
> > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > kafka.common.LeaderNotAvailableException
> > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
> > messages by topic, partition due to: Failed to fetch topic metadata
> > for topic: log
> > 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
> > before retrying send. Remaining retries = 3
> > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > machine on controller 1]: Invoking state change to NewReplica for
> > replicas PartitionAndReplica(log,0,1)
> > 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> > state machine on Controller 1]: Invoking state change to
> > OnlinePartition for partitions [log,0]
> > 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> > machine on controller 1]: Invoking state change to OnlineReplica for
> > replicas PartitionAndReplica(log,0,1)
> > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> > Broker 1]: Handling LeaderAndIsr request
> >
> >
> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
> > ->
> >
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> > 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
> > [ReplicaFetcherManager on broker 1] Removing fetcher for partition
> > [log,0]
> > 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
> > load of log log-0 with log end offset 0
> > 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
> > Created log for partition [log,0] in
> >
> >
> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
> > 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
> > highwatermark file is found. Returning 0 as the highwatermark for
> > partition [log,0]
> > 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> > Broker 1]: Handled leader and isr request
> >
> >
> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
> > ->
> >
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > correlation id 2 for 1 topic(s) Set(log)
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> > "partitions":{ "0":[ 1 ] }, "version":1 }
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> > fetching metadata [{TopicMetadata for topic log ->
> > No partition metadata for topic log due to
> > kafka.common.LeaderNotAvailableException}] for topic [log]: class
> > kafka.common.LeaderNotAvailableException
> > 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> > broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> > correlation id 3 for 1 topic(s) Set(log)
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> > hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> > hdfs-ix03.se-ix.delta.prod:58668 for producing
> > 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
> > hdfs-ix03.se-ix.delta.prod:45454
> > 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
> > proxy : hdfs-ix03.se-ix.delta.prod:45454
> > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> > /10.3.24.22.
> > 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting
> down
> > 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper client...
> > 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event
> > thread.
> > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> > 1], Shutting down
> > 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> > 1], Shutdown completed
> > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> > Handler on Broker 1], shutting down
> > 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> > Handler on Broker 1], shutted down completely
> > 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
> > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> > Broker 1]: Shut down
> > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> > [ReplicaFetcherManager on broker 1] shutting down
> > 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> > [ReplicaFetcherManager on broker 1] shutdown completed
> > 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> > Broker 1]: Shutted down completely
> > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > [Controller-1-to-broker-1-send-thread], Shutting down
> > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > [Controller-1-to-broker-1-send-thread], Stopped
> > 16/01/23 12:49:40 INFO controller.RequestSendThread:
> > [Controller-1-to-broker-1-send-thread], Shutdown completed
> > 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
> > Controller shutdown complete
> > 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down
> > completed
> > 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
> > proxy : hdfs-ix03.se-ix.delta.prod:45454
> > 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
> > be successfully unregistered.
> > 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
> > hdfs-ix03.se-ix.delta.prod:58668
> > 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
> > producer request with correlation id 35 to broker 1 with data for
> > partitions [log,0]
> > java.nio.channels.ClosedByInterruptException
> > at
> >
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> > at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
> > at
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
> > at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> > at
> >
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> > at kafka.utils.Utils$.read(Unknown Source)
> > at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
> > at kafka.network.Receive$class.readCompletely(Unknown Source)
> > at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
> > at kafka.network.BlockingChannel.receive(Unknown Source)
> > at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
> > at
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
> > Source)
> > at
> >
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
> > Source)
> > at
> >
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
> > Source)
> > at
> >
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
> > Source)
> > at kafka.metrics.KafkaTimer.time(Unknown Source)
> > at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown
> Source)
> > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> > at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> > at kafka.metrics.KafkaTimer.time(Unknown Source)
> > at kafka.producer.SyncProducer.send(Unknown Source)
> > at
> >
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown
> > Source)
> > at
> >
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
> > Source)
> > at
> >
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
> > Source)
> > at
> >
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> > at
> >
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> > at
> >
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> > at
> >
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> > at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> > at
> >
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> > at
> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown
> > Source)
> > at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
> > at kafka.producer.Producer.send(Unknown Source)
> > at kafka.javaapi.producer.Producer.send(Unknown Source)
> > at
> >
> org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
> > at
> >
> org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
> > at
> >
> org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
> > at
> >
> org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
> > at
> >
> org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
> > at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> > 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
> > before retrying send. Remaining retries = 3
> > 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
> > 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync producers
> >
> >
> > On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
> > > Hi,
> > >
> > > It's due to a very old version of the ASM library that is brought in by
> > > hadoop/yarn. Please add an exclusion of the asm library to all hadoop
> > > dependencies.
> > >
> > > <exclusion>
> > >   <groupId>asm</groupId>
> > >   <artifactId>asm</artifactId>
> > > </exclusion>
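> > >
> > > For illustration only (artifact names and version are assumptions, so
> > > match them to your own pom): applied to one hadoop dependency, the
> > > exclusion sits inside an <exclusions> element, e.g.
> > >
> > > <dependency>
> > >   <groupId>org.apache.hadoop</groupId>
> > >   <artifactId>hadoop-yarn-common</artifactId>
> > >   <version>2.7.1</version>
> > >   <exclusions>
> > >     <exclusion>
> > >       <groupId>asm</groupId>
> > >       <artifactId>asm</artifactId>
> > >     </exclusion>
> > >   </exclusions>
> > > </dependency>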
> > >
> > > Terence
> > >
> > >
> > > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <st...@gmail.com>
> > > wrote:
> > >
> > >> Further, adding the following dependencies causes another exception.
> > >>
> > >> <dependency>
> > >>   <groupId>com.google.guava</groupId>
> > >>   <artifactId>guava</artifactId>
> > >>   <version>13.0</version>
> > >> </dependency>
> > >> <dependency>
> > >>   <groupId>org.apache.hadoop</groupId>
> > >>   <artifactId>hadoop-hdfs</artifactId>
> > >>   <version>2.7.1</version>
> > >> </dependency>
> > >>
> > >> Exception in thread " STARTING"
> > >> java.lang.IncompatibleClassChangeError: class
> > >> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
> > >> has interface org.objectweb.asm.ClassVisitor as super class
> > >> at java.lang.ClassLoader.defineClass1(Native Method)
> > >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> > >> at
> > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> > >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> > >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> > >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> > >> at java.security.AccessController.doPrivileged(Native Method)
> > >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> > >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> > >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> > >> at
> > >>
> >
> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
> > >> at
> > >>
> >
> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
> > >> at
> > >>
> >
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
> > >> at
> > >>
> >
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
> > >> at
> > >>
> >
> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
> > >> at
> > >>
> >
> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
> > >> at
> > >>
> >
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
> > >> at
> > >>
> >
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
> > >> at
> > >>
> >
> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
> > >> at
> > >>
> >
> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
> > >> at
> > >>
> >
> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
> > >> at
> > >>
> >
> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
> > >> at java.lang.Thread.run(Thread.java:745)
> > >>
> > >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <
> stoffe@gmail.com>
> > >> wrote:
> > >> > Adding those dependencies fails with the following exception.
> > >> >
> > >> > Exception in thread "main" java.lang.AbstractMethodError:
> > >> >
> > >>
> >
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> > >> > at
> > >>
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> > >> > at
> > >>
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> > >> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> > >> > at
> > >>
> >
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> > >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> > >> > at
> > >>
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> > >> > at
> > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> > >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> > >> > at
> > >>
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> > >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> > >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> > >> > at
> > >>
> >
> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> > >> > at
> > >>
> >
> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> > >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> > >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >> > at
> > >>
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > >> > at
> > >>
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >> > at java.lang.reflect.Method.invoke(Method.java:497)
> > >> > at
> > com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> > >> >
> > >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com>
> > wrote:
> > >> >> Hi,
> > >> >>
> > >> >> If you run it from the IDE, you can simply add a dependency on hadoop
> > >> >> version 2.7.1. E.g. if you are using Maven, you can add the following
> > >> >> to your pom.xml dependencies section.
> > >> >>
> > >> >> <dependency>
> > >> >>   <groupId>org.apache.hadoop</groupId>
> > >> >>   <artifactId>hadoop-yarn-api</artifactId>
> > >> >>   <version>2.7.1</version>
> > >> >> </dependency>
> > >> >> <dependency>
> > >> >>   <groupId>org.apache.hadoop</groupId>
> > >> >>   <artifactId>hadoop-yarn-common</artifactId>
> > >> >>   <version>2.7.1</version>
> > >> >> </dependency>
> > >> >> <dependency>
> > >> >>   <groupId>org.apache.hadoop</groupId>
> > >> >>   <artifactId>hadoop-yarn-client</artifactId>
> > >> >>   <version>2.7.1</version>
> > >> >> </dependency>
> > >> >> <dependency>
> > >> >>   <groupId>org.apache.hadoop</groupId>
> > >> >>   <artifactId>hadoop-common</artifactId>
> > >> >>   <version>2.7.1</version>
> > >> >> </dependency>
> > >> >>
> > >> >> Terence
> > >> >>
> > >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <
> > stoffe@gmail.com>
> > >> >> wrote:
> > >> >>
> > >> >>> I run it from the IDE right now, but would like to create a command
> line
> > >> >>> app eventually.
> > >> >>>
> > >> >>> I should clarify that the exception above is thrown on the YARN
> > node,
> > >> >>> not in the IDE.
> > >> >>>
> > >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com>
> > wrote:
> > >> >>> > Hi Kristoffer,
> > >> >>> >
> > >> >>> > The example itself shouldn't need any modification. However, how
> > do
> > >> >>> > you run that class? Do you run it from an IDE or from the command
> > line
> > >> >>> > using the "java" command?
> > >> >>> >
> > >> >>> > Terence
> > >> >>> >
> > >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
> > >> stoffe@gmail.com>
> > >> >>> wrote:
> > >> >>> >> Hi Terence,
> > >> >>> >>
> > >> >>> >> I'm quite new to Twill and not sure how to do that exactly.
> Could
> > >> you
> > >> >>> >> show me how to modify the following example to do the same?
> > >> >>> >>
> > >> >>> >>
> > >> >>>
> > >>
> >
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> > >> >>> >>
> > >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <chtyim@gmail.com
> >
> > >> wrote:
> > >> >>> >>> Hi Kristoffer,
> > >> >>> >>>
> > >> >>> >>> Seems like the exception comes from the YARN class
> > >> "ConverterUtils". I
> > >> >>> >>> believe you need to start the application with the version
> > 2.7.1 Hadoop
> > >> >>> >>> jars. How do you start the Twill application? Usually on a
> > cluster
> > >> with
> > >> >>> >>> Hadoop installed, you can get all the Hadoop jars on the
> > classpath
> > >> by
> > >> >>> >>> running this:
> > >> >>> >>>
> > >> >>> >>> export CP=`hadoop classpath`
> > >> >>> >>> java -cp .:$CP YourApp ...
> > >> >>> >>>
> > >> >>> >>> Assuming your app classes and Twill jars are in the current
> > >> directory.
> > >> >>> >>>
> > >> >>> >>> Terence
> > >> >>> >>>
> > >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
> > >> stoffe@gmail.com>
> > >> >>> wrote:
> > >> >>> >>>> Here's the full stacktrace.
> > >> >>> >>>>
> > >> >>> >>>> Exception in thread "main"
> > >> java.lang.reflect.InvocationTargetException
> > >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> > >> >>> >>>> at
> > >> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> > >> >>> >>>> Caused by: java.lang.RuntimeException:
> > >> >>> >>>> java.lang.reflect.InvocationTargetException
> > >> >>> >>>> at
> > >> com.google.common.base.Throwables.propagate(Throwables.java:160)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> > >> >>> >>>> ... 5 more
> > >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
> > >> >>> >>>> at
> > sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> > >> >>> Method)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >> >>> >>>> at
> > java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> > >> >>> >>>> ... 6 more
> > >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
> > >> ContainerId:
> > >> >>> >>>> container_e25_1453466340022_0004_01_000001
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> > >> >>> >>>> ... 11 more
> > >> >>> >>>> Caused by: java.lang.NumberFormatException: For input string:
> > >> "e25"
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> > >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
> > >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> > >> >>> >>>> at
> > >> >>>
> > >>
> >
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> > >> >>> >>>> ... 14 more
> > >> >>> >>>>
> > >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
> > >> >>> stoffe@gmail.com> wrote:
> > >> >>> >>>>> Hi
> > >> >>> >>>>>
> > >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
> > >> exception
> > >> >>> as
> > >> >>> >>>>> soon as the application starts on the resource manager that
> > >> tells me
> > >> >>> >>>>> the container id cannot be parsed.
> > >> >>> >>>>>
> > >> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
> > >> >>> >>>>> container_e04_1427159778706_0002_01_000001
> > >> >>> >>>>>
> > >> >>> >>>>> I don't have the exact stack trace, but I recall it failing in
> > >> >>> >>>>> ConverterUtils.toContainerId because it assumes that
> the
> > >> first
> > >> >>> >>>>> token is an application attempt to be parsed as an integer.
> > This
> > >> >>> class
> > >> >>> >>>>> resides in hadoop-yarn-common 2.3.0.
> > >> >>> >>>>>
> > >> >>> >>>>> Is there any way to either tweak the container id or make
> > twill
> > >> use
> > >> >>> >>>>> the 2.7.1 jar instead?
> > >> >>> >>>>>
> > >> >>> >>>>> Cheers,
> > >> >>> >>>>> -Kristoffer
> > >> >>> >>>>>
> > >> >>> >>>>>
> > >> >>> >>>>> [1]
> > >> >>>
> > >>
> >
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> > >> >>>
> > >>
> >
>
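
The NumberFormatException in the trace above is easy to reproduce without any Hadoop jars: container ids gained an epoch token (e.g. `e25`, introduced in Hadoop 2.6 for work-preserving ResourceManager restart), and older parsers expect every token after `container_` to be numeric. A minimal sketch (the class name and helper are made up for illustration):

```java
public class ContainerIdParseDemo {

    // Mirrors the check the old parser implicitly performs: it calls
    // Long.parseLong on each underscore-separated token of the id.
    static boolean isNumericToken(String token) {
        try {
            Long.parseLong(token);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The id from the stack trace: the second token, "e25", is the
        // epoch and is not numeric, hence the NumberFormatException in
        // parsers built against pre-epoch Hadoop (2.5 and earlier).
        String id = "container_e25_1453466340022_0004_01_000001";
        for (String token : id.split("_")) {
            System.out.println(token + " -> numeric: " + isNumericToken(token));
        }
    }
}
```

The 2.6+ ConverterUtils understands the epoch token, which is why running the application master with the 2.7.1 jars, as Terence suggests elsewhere in the thread, resolves the error.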

Re: Yarn 2.7.1

Posted by Poorna Chandra <po...@cask.co>.
Hi Kristoffer,

Looks like container_e29_1453498444043_0012_01_000002 could not be started
due to some issue. Can you attach the stdout and stderr logs for
container_e29_1453498444043_0012_01_000002?

Poorna.
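
Once the application has finished, the per-container stdout/stderr that Poorna asks for can often also be pulled with the `yarn logs` CLI. This is a sketch, assuming log aggregation is enabled on the cluster; the application and container ids are the ones from this thread:

```shell
# Fetch all aggregated logs for the application:
yarn logs -applicationId application_1453498444043_0012

# Or narrow down to the failed container (older Hadoop releases may
# additionally require -nodeAddress for the node that ran it):
yarn logs -applicationId application_1453498444043_0012 \
    -containerId container_e29_1453498444043_0012_01_000002
```

If aggregation is disabled, the files remain under the NodeManager's local log directory on the box where the container was launched, as described above.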


On Sat, Jan 23, 2016 at 3:53 AM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> Yes, that almost worked. Now the application starts on Yarn and after
> a while an exception is thrown and the application exits with code 10.
>
>
> Log Type: stdout
>
> Log Upload Time: Sat Jan 23 12:49:41 +0100 2016
>
> Log Length: 21097
>
> UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
> Launch class (org.apache.twill.internal.appmaster.ApplicationMasterMain)
> with classpath:
>
> [file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
>
> file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
> Launching main: public static void
>
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
> throws java.lang.Exception []
> 12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
> configuration not found, setting default realm to empty
> 12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
> DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
> configuration for dfs.data.transfer.protection
> 12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
> Starting service ApplicationMasterService [NEW].
> 12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> - Broker list is empty. No Kafka producer is created.
> 12:49:30.704 [TrackerService STARTING] INFO
> o.a.t.i.appmaster.TrackerService - Tracker service started at
> http://hdfs-ix03.se-ix.delta.prod:51793
> 12:49:30.922 [TwillZKPathService STARTING] INFO
> o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> 12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
> - Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
> 12:49:31.288 [ApplicationMasterService] INFO
> o.a.t.internal.AbstractTwillService - Create live node
>
> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> 12:49:31.308 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Start application master with
> spec:
> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> 12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
> Service ApplicationMasterService [RUNNING] started.
> 12:49:31.344 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Request 1 container with
> capability <memory:512, vCores:1> for runnable JarRunnable
> 12:49:33.368 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Got container
> container_e29_1453498444043_0012_01_000002
> 12:49:33.369 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
> with
> RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd
> }
> 12:49:33.417 [ApplicationMasterService] INFO
> o.a.t.i.a.RunnableProcessLauncher - Launching in container
> container_e29_1453498444043_0012_01_000002 at
> hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
> -Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
> -Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
> launcher.jar:$HADOOP_CONF_DIR -Xmx359m
> org.apache.twill.launcher.TwillLauncher container.jar
> org.apache.twill.internal.container.TwillContainerMain true
> 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
> 12:49:33.473 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
> provisioned with 1 instances.
> 12:49:35.302 [zk-client-EventThread] INFO
> o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
>
> {"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
> 12:49:37.484 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Container
> container_e29_1453498444043_0012_01_000002 completed with
> COMPLETE:Exception from container-launch.
> Container id: container_e29_1453498444043_0012_01_000002
> Exit code: 10
> Stack trace: ExitCodeException exitCode=10:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
> at org.apache.hadoop.util.Shell.run(Shell.java:487)
> at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 10
> .
> 12:49:37.488 [ApplicationMasterService] WARN
> o.a.t.i.appmaster.RunningContainers - Container
> container_e29_1453498444043_0012_01_000002 exited abnormally with
> state COMPLETE, exit code 10.
> 12:49:37.496 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - All containers completed.
> Shutting down application master.
> 12:49:37.498 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Stop application master with
> spec:
> {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
> 12:49:37.500 [ApplicationMasterService] INFO
> o.a.t.i.appmaster.RunningContainers - Stopping all instances of
> JarRunnable
> 12:49:37.500 [ApplicationMasterService] INFO
> o.a.t.i.appmaster.RunningContainers - Terminated all instances of
> JarRunnable
> 12:49:37.512 [ApplicationMasterService] INFO
> o.a.t.i.a.ApplicationMasterService - Application directory deleted:
> hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> 12:49:37.512 [ApplicationMasterService] INFO
> o.a.t.internal.AbstractTwillService - Remove live node
>
> zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> 12:49:37.516 [ApplicationMasterService] INFO
> o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
> with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
> 12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
> Service ApplicationMasterService [TERMINATED] completed.
> 12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
> - Broker list is empty. No Kafka producer is created.
> 12:49:40.037 [TwillZKPathService STOPPING] INFO
> o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
> 12:49:40.248 [TrackerService STOPPING] INFO
> o.a.t.i.appmaster.TrackerService - Tracker service stopped at
> http://hdfs-ix03.se-ix.delta.prod:51793
> Main class completed.
> Launcher completed
> Cleanup directory tmp/twill.launcher-1453549768670-0
>
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
>
> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
>
> [jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type
> [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
> 16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
> overridden to
> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> default.replication.factor is overridden to 1
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
> overridden to 58668
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> socket.request.max.bytes is overridden to 104857600
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> socket.send.buffer.bytes is overridden to 1048576
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> log.flush.interval.ms is overridden to 1000
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> zookeeper.connect is overridden to
>
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
> is overridden to 1
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> log.retention.hours is overridden to 24
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> socket.receive.buffer.bytes is overridden to 1048576
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> zookeeper.connection.timeout.ms is overridden to 3000
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> num.partitions is overridden to 1
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> log.flush.interval.messages is overridden to 10000
> 16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
> log.segment.bytes is overridden to 536870912
> 16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
> Failing over to rm2
> 16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
> 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
> directory
> '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
> not found, creating it.
> 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> Starting log cleaner every 600000 ms
> 16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
> Starting log flusher every 3000 ms with the following overrides Map()
> 16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
> on 0.0.0.0:58668.
> 16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
> 1], Started
> 16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
>
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> 16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event
> thread.
> 16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
> (SyncConnected)
> 16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
> /brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
> 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
> Connecting to ZK:
>
> zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> request.required.acks is overridden to 1
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> partitioner.class is overridden to
> org.apache.twill.internal.kafka.client.IntegerPartitioner
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> compression.codec is overridden to snappy
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> key.serializer.class is overridden to
> org.apache.twill.internal.kafka.client.IntegerEncoder
> 16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
> serializer.class is overridden to
> org.apache.twill.internal.kafka.client.ByteBufferEncoder
> 16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
> mx4j-tools.jar is not in the classpath
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Controller starting up
> 16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
> elected as leader
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Broker 1 starting become controller state transition
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Controller 1 incremented epoch to 1
> 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> correlation id 0 for 1 topic(s) Set(log)
> 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> hdfs-ix03.se-ix.delta.prod:58668 for producing
> 16/01/23 12:49:31 INFO controller.RequestSendThread:
> [Controller-1-to-broker-1-send-thread], Starting
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Currently active brokers in the cluster: Set(1)
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Currently shutting brokers in the cluster: Set()
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Current list of topics in the cluster: Set()
> 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> machine on controller 1]: No state transitions triggered since no
> partitions are assigned to brokers 1
> 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> machine on controller 1]: Invoking state change to OnlineReplica for
> replicas
> 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> machine on controller 1]: Started replica state machine with initial
> state -> Map()
> 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> state machine on Controller 1]: Started partition state machine with
> initial state -> Map()
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Broker 1 is ready to serve as the new controller with epoch 1
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Partitions being reassigned: Map()
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Partitions already reassigned: List()
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Resuming reassignment of partitions: Map()
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Partitions undergoing preferred replica election:
> 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> "partitions":{ "0":[ 1 ] }, "version":1 }
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Partitions that completed preferred replica election:
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Resuming preferred replica election for partitions:
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Starting preferred replica leader election for partitions
> 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> state machine on Controller 1]: Invoking state change to
> OnlinePartition for partitions
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
> Controller startup complete
> 16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
> topic log with 1 partitions and replication factor 1 is successful!
> 16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
> 16/01/23 12:49:31 INFO
> server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
> 16/01/23 12:49:31 INFO controller.ControllerEpochListener:
> [ControllerEpochListener on 1]: Initialized controller epoch to 1 and
> zk version 0
> 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> fetching metadata [{TopicMetadata for topic log ->
> No partition metadata for topic log due to
> kafka.common.LeaderNotAvailableException}] for topic [log]: class
> kafka.common.LeaderNotAvailableException
> 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> correlation id 1 for 1 topic(s) Set(log)
> 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> hdfs-ix03.se-ix.delta.prod:58668 for producing
> 16/01/23 12:49:31 INFO
> controller.PartitionStateMachine$TopicChangeListener:
> [TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
> topics: [Set()], new partition replica assignment [Map([log,0] ->
> List(1))]
> 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> "partitions":{ "0":[ 1 ] }, "version":1 }
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> topic creation callback for [log,0]
> 16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
> partition creation callback for [log,0]
> 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> state machine on Controller 1]: Invoking state change to NewPartition
> for partitions [log,0]
> 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> fetching metadata [{TopicMetadata for topic log ->
> No partition metadata for topic log due to
> kafka.common.LeaderNotAvailableException}] for topic [log]: class
> kafka.common.LeaderNotAvailableException
> 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
> messages by topic, partition due to: Failed to fetch topic metadata
> for topic: log
> 16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
> before retrying send. Remaining retries = 3
> 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> machine on controller 1]: Invoking state change to NewReplica for
> replicas PartitionAndReplica(log,0,1)
> 16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
> state machine on Controller 1]: Invoking state change to
> OnlinePartition for partitions [log,0]
> 16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
> machine on controller 1]: Invoking state change to OnlineReplica for
> replicas PartitionAndReplica(log,0,1)
> 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> Broker 1]: Handling LeaderAndIsr request
>
> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
> ->
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> 16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
> [ReplicaFetcherManager on broker 1] Removing fetcher for partition
> [log,0]
> 16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
> load of log log-0 with log end offset 0
> 16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
> Created log for partition [log,0] in
>
> /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
> 16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
> highwatermark file is found. Returning 0 as the highwatermark for
> partition [log,0]
> 16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
> Broker 1]: Handled leader and isr request
>
> Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
> ->
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
> 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> correlation id 2 for 1 topic(s) Set(log)
> 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> hdfs-ix03.se-ix.delta.prod:58668 for producing
> 16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
> "partitions":{ "0":[ 1 ] }, "version":1 }
> 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
> fetching metadata [{TopicMetadata for topic log ->
> No partition metadata for topic log due to
> kafka.common.LeaderNotAvailableException}] for topic [log]: class
> kafka.common.LeaderNotAvailableException
> 16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
> broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
> correlation id 3 for 1 topic(s) Set(log)
> 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> hdfs-ix03.se-ix.delta.prod:58668 for producing
> 16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
> hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
> hdfs-ix03.se-ix.delta.prod:58668 for producing
> 16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
> hdfs-ix03.se-ix.delta.prod:45454
> 16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : hdfs-ix03.se-ix.delta.prod:45454
> 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
> /10.3.24.22.
> 16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting down
> 16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper client...
> 16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event
> thread.
> 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> 1], Shutting down
> 16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
> 1], Shutdown completed
> 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> Handler on Broker 1], shutting down
> 16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
> Handler on Broker 1], shutted down completely
> 16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
> 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> Broker 1]: Shut down
> 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> [ReplicaFetcherManager on broker 1] shutting down
> 16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
> [ReplicaFetcherManager on broker 1] shutdown completed
> 16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
> Broker 1]: Shutted down completely
> 16/01/23 12:49:40 INFO controller.RequestSendThread:
> [Controller-1-to-broker-1-send-thread], Shutting down
> 16/01/23 12:49:40 INFO controller.RequestSendThread:
> [Controller-1-to-broker-1-send-thread], Stopped
> 16/01/23 12:49:40 INFO controller.RequestSendThread:
> [Controller-1-to-broker-1-send-thread], Shutdown completed
> 16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
> Controller shutdown complete
> 16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down
> completed
> 16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : hdfs-ix03.se-ix.delta.prod:45454
> 16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
> be successfully unregistered.
> 16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
> hdfs-ix03.se-ix.delta.prod:58668
> 16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
> producer request with correlation id 35 to broker 1 with data for
> partitions [log,0]
> java.nio.channels.ClosedByInterruptException
> at
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
> at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> at
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
> at kafka.utils.Utils$.read(Unknown Source)
> at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
> at kafka.network.Receive$class.readCompletely(Unknown Source)
> at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
> at kafka.network.BlockingChannel.receive(Unknown Source)
> at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
> at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
> Source)
> at
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
> Source)
> at
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
> Source)
> at
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
> Source)
> at kafka.metrics.KafkaTimer.time(Unknown Source)
> at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown Source)
> at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
> at kafka.metrics.KafkaTimer.time(Unknown Source)
> at kafka.producer.SyncProducer.send(Unknown Source)
> at
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown
> Source)
> at
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
> Source)
> at
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
> Source)
> at
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown
> Source)
> at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
> at kafka.producer.Producer.send(Unknown Source)
> at kafka.javaapi.producer.Producer.send(Unknown Source)
> at
> org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
> at
> org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
> at
> org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
> at
> org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
> at
> org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
> before retrying send. Remaining retries = 3
> 16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
> 16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync producers
>
>
> On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
> > Hi,
> >
> > It's due to a very old version of the ASM library that is brought in by
> > hadoop/yarn. Please add an exclusion of the asm library to all hadoop
> > dependencies.
> >
> > <exclusion>
> >   <groupId>asm</groupId>
> >   <artifactId>asm</artifactId>
> > </exclusion>
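> >
> > For illustration, a minimal sketch (assuming Maven) of how that exclusion
> > fits inside one of the hadoop dependency entries; the same exclusion block
> > would be repeated for each hadoop artifact you depend on:
> >
> > ```xml
> > <dependency>
> >   <groupId>org.apache.hadoop</groupId>
> >   <artifactId>hadoop-common</artifactId>
> >   <version>2.7.1</version>
> >   <exclusions>
> >     <!-- keep the old asm jar off the classpath so Twill's ASM is used -->
> >     <exclusion>
> >       <groupId>asm</groupId>
> >       <artifactId>asm</artifactId>
> >     </exclusion>
> >   </exclusions>
> > </dependency>
> > ```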
> >
> > Terence
> >
> >
> > On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <st...@gmail.com>
> > wrote:
> >
> >> Further adding the following dependencies cause another exception.
> >>
> >> <dependency>
> >>   <groupId>com.google.guava</groupId>
> >>   <artifactId>guava</artifactId>
> >>   <version>13.0</version>
> >> </dependency>
> >> <dependency>
> >>   <groupId>org.apache.hadoop</groupId>
> >>   <artifactId>hadoop-hdfs</artifactId>
> >>   <version>2.7.1</version>
> >> </dependency>
> >>
> >> Exception in thread " STARTING"
> >> java.lang.IncompatibleClassChangeError: class
> >> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
> >> has interface org.objectweb.asm.ClassVisitor as super class
> >> at java.lang.ClassLoader.defineClass1(Native Method)
> >> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> >> at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> >> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> >> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> >> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> >> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> >> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> >> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >> at
> >>
> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
> >> at
> >>
> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
> >> at
> >>
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
> >> at
> >>
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
> >> at
> >>
> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
> >> at
> >>
> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
> >> at
> >>
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
> >> at
> >>
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
> >> at
> >>
> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
> >> at
> >>
> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
> >> at
> >>
> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
> >> at
> >>
> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
> >> at java.lang.Thread.run(Thread.java:745)
> >>
> >> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <st...@gmail.com>
> >> wrote:
> >> > Adding those dependencies fails with the following exception.
> >> >
> >> > Exception in thread "main" java.lang.AbstractMethodError:
> >> >
> >>
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> >> > at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> >> > at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> >> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> >> > at
> >>
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> >> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> >> > at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> >> > at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> >> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> >> > at
> >> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> >> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> >> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> >> > at
> >>
> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> >> > at
> >>
> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> >> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> > at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >> > at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> > at java.lang.reflect.Method.invoke(Method.java:497)
> >> > at
> com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> >> >
> >> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com>
> wrote:
> >> >> Hi,
> >> >>
> >> >> If you run it from the IDE, you can simply add a dependency on hadoop
> >> >> with version 2.7.1. E.g. if you are using Maven, you can add the
> >> >> following to your pom.xml dependencies section.
> >> >>
> >> >> <dependency>
> >> >>   <groupId>org.apache.hadoop</groupId>
> >> >>   <artifactId>hadoop-yarn-api</artifactId>
> >> >>   <version>2.7.1</version>
> >> >> </dependency>
> >> >> <dependency>
> >> >>   <groupId>org.apache.hadoop</groupId>
> >> >>   <artifactId>hadoop-yarn-common</artifactId>
> >> >>   <version>2.7.1</version>
> >> >> </dependency>
> >> >> <dependency>
> >> >>   <groupId>org.apache.hadoop</groupId>
> >> >>   <artifactId>hadoop-yarn-client</artifactId>
> >> >>   <version>2.7.1</version>
> >> >> </dependency>
> >> >> <dependency>
> >> >>   <groupId>org.apache.hadoop</groupId>
> >> >>   <artifactId>hadoop-common</artifactId>
> >> >>   <version>2.7.1</version>
> >> >> </dependency>
> >> >>
> >> >> Terence
> >> >>
> >> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <
> stoffe@gmail.com>
> >> >> wrote:
> >> >>
> >> >>> I run it from the IDE right now, but would like to create a command
> >> >>> line app eventually.
> >> >>>
> >> >>> I should clarify that the exception above is thrown on the YARN node,
> >> >>> not in the IDE.
> >> >>>
> >> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com>
> wrote:
> >> >>> > Hi Kristoffer,
> >> >>> >
> >> >>> > The example itself shouldn't need any modification. However, how do
> >> >>> > you run that class? Do you run it from the IDE or from the command
> >> >>> > line using the "java" command?
> >> >>> >
> >> >>> > Terence
> >> >>> >
> >> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
> >> stoffe@gmail.com>
> >> >>> wrote:
> >> >>> >> Hi Terence,
> >> >>> >>
> >> >>> >> I'm quite new to Twill and not sure how to do that exactly. Could
> >> >>> >> you show me how to modify the following example to do the same?
> >> >>> >>
> >> >>> >> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >> >>> >>
> >> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com>
> >> wrote:
> >> >>> >>> Hi Kristoffer,
> >> >>> >>>
> >> >>> >>> Seems like the exception comes from the YARN class
> >> >>> >>> "ConverterUtils". I believe you need to start the application with
> >> >>> >>> the version 2.7.1 Hadoop jars. How do you start the twill
> >> >>> >>> application? Usually on a cluster with hadoop installed, you can
> >> >>> >>> get all the hadoop jars in the classpath by running this:
> >> >>> >>>
> >> >>> >>> export CP=`hadoop classpath`
> >> >>> >>> java -cp .:$CP YourApp ...
> >> >>> >>>
> >> >>> >>> Assuming your app classes and Twill jars are in the current
> >> >>> >>> directory.
> >> >>> >>>
> >> >>> >>> Terence
> >> >>> >>>
> >> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
> >> stoffe@gmail.com>
> >> >>> wrote:
> >> >>> >>>> Here's the full stacktrace.
> >> >>> >>>>
> >> >>> >>>> Exception in thread "main"
> >> java.lang.reflect.InvocationTargetException
> >> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >>> >>>> at
> >> >>>
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >> >>> >>>> at
> >> >>>
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> >> >>> >>>> at
> >> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> >> >>> >>>> Caused by: java.lang.RuntimeException:
> >> >>> >>>> java.lang.reflect.InvocationTargetException
> >> >>> >>>> at
> >> com.google.common.base.Throwables.propagate(Throwables.java:160)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> >> >>> >>>> ... 5 more
> >> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
> >> >>> >>>> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> >> >>> Method)
> >> >>> >>>> at
> >> >>>
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >> >>> >>>> at
> >> >>>
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> >>> >>>> at
> java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> >> >>> >>>> ... 6 more
> >> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
> >> ContainerId:
> >> >>> >>>> container_e25_1453466340022_0004_01_000001
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> >> >>> >>>> ... 11 more
> >> >>> >>>> Caused by: java.lang.NumberFormatException: For input string:
> >> "e25"
> >> >>> >>>> at
> >> >>>
> >>
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
> >> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> >> >>> >>>> at
> >> >>>
> >>
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> >> >>> >>>> ... 14 more
> >> >>> >>>>
> >> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
> >> >>> stoffe@gmail.com> wrote:
> >> >>> >>>>> Hi
> >> >>> >>>>>
> >> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
> >> >>> >>>>> exception as soon as the application starts on the resource
> >> >>> >>>>> manager that tells me the container id cannot be parsed.
> >> >>> >>>>>
> >> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
> >> >>> >>>>> container_e04_1427159778706_0002_01_000001
> >> >>> >>>>>
> >> >>> >>>>> I don't have the exact stacktrace but I recall it failing in
> >> >>> >>>>> ConverterUtils.toContainerId because it assumes that the first
> >> >>> >>>>> token is an application attempt to be parsed as an integer. This
> >> >>> >>>>> class resides in hadoop-yarn-common 2.3.0.
> >> >>> >>>>>
> >> >>> >>>>> Is there any way to either tweak the container id or make
> twill
> >> use
> >> >>> >>>>> the 2.7.1 jar instead?
> >> >>> >>>>>
> >> >>> >>>>> Cheers,
> >> >>> >>>>> -Kristoffer
> >> >>> >>>>>
> >> >>> >>>>>
> >> >>> >>>>> [1]
> >> >>>
> >>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >> >>>
> >>
>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Yes, that almost worked. Now the application starts on Yarn and after
a while an exception is thrown and the application exits with code 10.


Log Type: stdout

Log Upload Time: Sat Jan 23 12:49:41 +0100 2016

Log Length: 21097

UnJar appMaster.jar to tmp/twill.launcher-1453549768670-0
Launch class (org.apache.twill.internal.appmaster.ApplicationMasterMain)
with classpath:
[file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/classes,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/resources,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-cli-1.2.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/scala-library-2.10.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-math3-3.1.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-core-1.0.9.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/xmlenc-0.52.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsch-0.1.42.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpclient-4.1.2.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-configuration-1.6.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/metrics-core-2.2.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-6.1.26.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-api-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-annotations-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guice-3.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-net-3.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jetty-util-6.1.26.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/kafka_2.10-0.8.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-api-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-api-1.7.10.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/paranamer-2.3.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/protobuf-java-2.5.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-kerberos-codec-2.0.0-M15.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/avro-1.7.4.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-compress-1.4.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-auth-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zookeeper-3.4.6.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-core-1.9.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-client-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-zookeeper-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-client-1.9.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/gson-2.2.4.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-common-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-hdfs-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-asn1-api-1.0.0-M20.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-core-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-collections-3.2.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-3.7.0.Final.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-common-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-mapper-asl-1.9.13.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/zkclient-0.3.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-jaxrs-1.9.13.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-xc-1.9.13.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jsr305-3.0.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/snappy-java-1.0.4.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/log4j-1.2.17.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-codec-1.4.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/asm-all-5.0.2.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/netty-all-4.0.23.Final.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/servlet-api-2.5.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/guava-13.0.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jopt-simple-3.2.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/curator-framework-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/hadoop-yarn-client-2.7.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-httpclient-3.1.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-discovery-api-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-lang-2.6.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/httpcore-4.1.2.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-yarn-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/api-util-1.0.0-M20.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/htrace-core-3.1.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-common-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-io-2.4.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jersey-server-1.9.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/apacheds-i18n-2.0.0-M15.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/commons-logging-1.1.3.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/twill-core-0.6.0-incubating.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/jackson-core-asl-1.9.13.jar,
file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/javax.inject-1.jar]
Launching main: public static void
org.apache.twill.internal.appmaster.ApplicationMasterMain.main(java.lang.String[])
throws java.lang.Exception []
12:49:29.586 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5
configuration not found, setting default realm to empty
12:49:30.083 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil -
DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
configuration for dfs.data.transfer.protection
12:49:30.552 [main] INFO  o.apache.twill.internal.ServiceMain -
Starting service ApplicationMasterService [NEW].
12:49:30.600 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
- Broker list is empty. No Kafka producer is created.
12:49:30.704 [TrackerService STARTING] INFO
o.a.t.i.appmaster.TrackerService - Tracker service started at
http://hdfs-ix03.se-ix.delta.prod:51793
12:49:30.922 [TwillZKPathService STARTING] INFO
o.a.t.i.ServiceMain$TwillZKPathService - Creating container ZK path:
zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
12:49:31.102 [kafka-publisher] INFO  o.a.t.i.k.c.SimpleKafkaPublisher
- Update Kafka producer broker list: hdfs-ix03.se-ix.delta.prod:58668
12:49:31.288 [ApplicationMasterService] INFO
o.a.t.internal.AbstractTwillService - Create live node
zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
12:49:31.308 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Start application master with
spec: {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
12:49:31.318 [main] INFO  o.apache.twill.internal.ServiceMain -
Service ApplicationMasterService [RUNNING] started.
12:49:31.344 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Request 1 container with
capability <memory:512, vCores:1> for runnable JarRunnable
12:49:33.368 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Got container
container_e29_1453498444043_0012_01_000002
12:49:33.369 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Starting runnable JarRunnable
with RunnableProcessLauncher{container=org.apache.twill.internal.yarn.Hadoop21YarnContainerInfo@5e82cebd}
12:49:33.417 [ApplicationMasterService] INFO
o.a.t.i.a.RunnableProcessLauncher - Launching in container
container_e29_1453498444043_0012_01_000002 at
hdfs-ix03.se-ix.delta.prod:45454, [$JAVA_HOME/bin/java
-Djava.io.tmpdir=tmp -Dyarn.container=$YARN_CONTAINER_ID
-Dtwill.runnable=$TWILL_APP_NAME.$TWILL_RUNNABLE_NAME -cp
launcher.jar:$HADOOP_CONF_DIR -Xmx359m
org.apache.twill.launcher.TwillLauncher container.jar
org.apache.twill.internal.container.TwillContainerMain true
1><LOG_DIR>/stdout 2><LOG_DIR>/stderr]
12:49:33.473 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Runnable JarRunnable fully
provisioned with 1 instances.
12:49:35.302 [zk-client-EventThread] INFO
o.a.t.i.TwillContainerLauncher - Container LiveNodeData updated:
{"data":{"containerId":"container_e29_1453498444043_0012_01_000002","host":"hdfs-ix03.se-ix.delta.prod"}}
12:49:37.484 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Container
container_e29_1453498444043_0012_01_000002 completed with
COMPLETE:Exception from container-launch.
Container id: container_e29_1453498444043_0012_01_000002
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 10
.
12:49:37.488 [ApplicationMasterService] WARN
o.a.t.i.appmaster.RunningContainers - Container
container_e29_1453498444043_0012_01_000002 exited abnormally with
state COMPLETE, exit code 10.
12:49:37.496 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - All containers completed.
Shutting down application master.
12:49:37.498 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Stop application master with
spec: {"name":"JarApp","runnables":{"JarRunnable":{"name":"JarRunnable","runnable":{"classname":"org.apache.twill.ext.BundledJarRunnable","name":"JarRunnable","arguments":{}},"resources":{"cores":1,"memorySize":512,"instances":1,"uplink":-1,"downlink":-1},"files":[{"name":"twill-app-1.0.0-SNAPSHOT.jar","uri":"hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/twill-app-1.0.0-SNAPSHOT.jar.e09cf92c-56f7-42a8-84ac-53f2665afa1d.jar","lastModified":1453549766870,"size":7090,"archive":false,"pattern":null}]}},"orders":[{"names":["JarRunnable"],"type":"STARTED"}],"placementPolicies":[],"handler":{"classname":"org.apache.twill.internal.LogOnlyEventHandler","configs":{}}}
12:49:37.500 [ApplicationMasterService] INFO
o.a.t.i.appmaster.RunningContainers - Stopping all instances of
JarRunnable
12:49:37.500 [ApplicationMasterService] INFO
o.a.t.i.appmaster.RunningContainers - Terminated all instances of
JarRunnable
12:49:37.512 [ApplicationMasterService] INFO
o.a.t.i.a.ApplicationMasterService - Application directory deleted:
hdfs://hdpcluster/user/stoffe/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
12:49:37.512 [ApplicationMasterService] INFO
o.a.t.internal.AbstractTwillService - Remove live node
zookeeper01.se-ix.delta.prod/JarApp/instances/be4bbf01-5e72-4147-b2eb-b84e19214b5b
12:49:37.516 [ApplicationMasterService] INFO
o.a.t.internal.AbstractTwillService - Service ApplicationMasterService
with runId be4bbf01-5e72-4147-b2eb-b84e19214b5b shutdown completed
12:49:37.516 [main] INFO  o.apache.twill.internal.ServiceMain -
Service ApplicationMasterService [TERMINATED] completed.
12:49:39.676 [kafka-publisher] WARN  o.a.t.i.k.c.SimpleKafkaPublisher
- Broker list is empty. No Kafka producer is created.
12:49:40.037 [TwillZKPathService STOPPING] INFO
o.a.t.i.ServiceMain$TwillZKPathService - Removing container ZK path:
zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b
12:49:40.248 [TrackerService STOPPING] INFO
o.a.t.i.appmaster.TrackerService - Tracker service stopped at
http://hdfs-ix03.se-ix.delta.prod:51793
Main class completed.
Launcher completed
Cleanup directory tmp/twill.launcher-1453549768670-0



SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/tmp/twill.launcher-1453549768670-0/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type
[ch.qos.logback.classic.util.ContextSelectorStaticBinder]
16/01/23 12:49:29 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
16/01/23 12:49:30 INFO utils.VerifiableProperties: Verifying properties
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property log.dir is
overridden to /hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
default.replication.factor is overridden to 1
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property port is
overridden to 58668
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
socket.request.max.bytes is overridden to 104857600
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
socket.send.buffer.bytes is overridden to 1048576
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
log.flush.interval.ms is overridden to 1000
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
zookeeper.connect is overridden to
zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property broker.id
is overridden to 1
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
log.retention.hours is overridden to 24
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
socket.receive.buffer.bytes is overridden to 1048576
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
zookeeper.connection.timeout.ms is overridden to 3000
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
num.partitions is overridden to 1
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
log.flush.interval.messages is overridden to 10000
16/01/23 12:49:30 INFO utils.VerifiableProperties: Property
log.segment.bytes is overridden to 536870912
16/01/23 12:49:30 INFO client.ConfiguredRMFailoverProxyProvider:
Failing over to rm2
16/01/23 12:49:30 INFO server.KafkaServer: [Kafka Server 1], Starting
16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1] Log
directory '/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs'
not found, creating it.
16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
Starting log cleaner every 600000 ms
16/01/23 12:49:30 INFO log.LogManager: [Log Manager on Broker 1]
Starting log flusher every 3000 ms with the following overrides Map()
16/01/23 12:49:30 INFO network.Acceptor: Awaiting socket connections
on 0.0.0.0:58668.
16/01/23 12:49:30 INFO network.SocketServer: [Socket Server on Broker
1], Started
16/01/23 12:49:30 INFO server.KafkaZooKeeper: connecting to ZK:
zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
16/01/23 12:49:30 INFO zkclient.ZkEventThread: Starting ZkClient event thread.
16/01/23 12:49:31 INFO zkclient.ZkClient: zookeeper state changed
(SyncConnected)
16/01/23 12:49:31 INFO utils.ZkUtils$: Registered broker 1 at path
/brokers/ids/1 with address hdfs-ix03.se-ix.delta.prod:58668.
16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1],
Connecting to ZK:
zookeeper01.se-ix.delta.prod/JarApp/be4bbf01-5e72-4147-b2eb-b84e19214b5b/kafka
16/01/23 12:49:31 INFO utils.VerifiableProperties: Verifying properties
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
metadata.broker.list is overridden to hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
request.required.acks is overridden to 1
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
partitioner.class is overridden to
org.apache.twill.internal.kafka.client.IntegerPartitioner
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
compression.codec is overridden to snappy
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
key.serializer.class is overridden to
org.apache.twill.internal.kafka.client.IntegerEncoder
16/01/23 12:49:31 INFO utils.VerifiableProperties: Property
serializer.class is overridden to
org.apache.twill.internal.kafka.client.ByteBufferEncoder
16/01/23 12:49:31 INFO utils.Mx4jLoader$: Will not load MX4J,
mx4j-tools.jar is not in the classpath
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Controller starting up
16/01/23 12:49:31 INFO server.ZookeeperLeaderElector: 1 successfully
elected as leader
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Broker 1 starting become controller state transition
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Controller 1 incremented epoch to 1
16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
correlation id 0 for 1 topic(s) Set(log)
16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
hdfs-ix03.se-ix.delta.prod:58668 for producing
16/01/23 12:49:31 INFO controller.RequestSendThread:
[Controller-1-to-broker-1-send-thread], Starting
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Currently active brokers in the cluster: Set(1)
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Currently shutting brokers in the cluster: Set()
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Current list of topics in the cluster: Set()
16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
machine on controller 1]: No state transitions triggered since no
partitions are assigned to brokers 1
16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
machine on controller 1]: Invoking state change to OnlineReplica for
replicas
16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
machine on controller 1]: Started replica state machine with initial
state -> Map()
16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
state machine on Controller 1]: Started partition state machine with
initial state -> Map()
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Broker 1 is ready to serve as the new controller with epoch 1
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Partitions being reassigned: Map()
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Partitions already reassigned: List()
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Resuming reassignment of partitions: Map()
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Partitions undergoing preferred replica election:
16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
"partitions":{ "0":[ 1 ] }, "version":1 }
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Partitions that completed preferred replica election:
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Resuming preferred replica election for partitions:
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Starting preferred replica leader election for partitions
16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
state machine on Controller 1]: Invoking state change to
OnlinePartition for partitions
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]:
Controller startup complete
16/01/23 12:49:31 INFO server.KafkaApis: [KafkaApi-1] Auto creation of
topic log with 1 partitions and replication factor 1 is successful!
16/01/23 12:49:31 INFO server.KafkaServer: [Kafka Server 1], Started
16/01/23 12:49:31 INFO
server.ZookeeperLeaderElector$LeaderChangeListener: New leader is 1
16/01/23 12:49:31 INFO controller.ControllerEpochListener:
[ControllerEpochListener on 1]: Initialized controller epoch to 1 and
zk version 0
16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
fetching metadata [{TopicMetadata for topic log ->
No partition metadata for topic log due to
kafka.common.LeaderNotAvailableException}] for topic [log]: class
kafka.common.LeaderNotAvailableException
16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
correlation id 1 for 1 topic(s) Set(log)
16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
hdfs-ix03.se-ix.delta.prod:58668 for producing
16/01/23 12:49:31 INFO
controller.PartitionStateMachine$TopicChangeListener:
[TopicChangeListener on Controller 1]: New topics: [Set(log)], deleted
topics: [Set()], new partition replica assignment [Map([log,0] ->
List(1))]
16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
"partitions":{ "0":[ 1 ] }, "version":1 }
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
topic creation callback for [log,0]
16/01/23 12:49:31 INFO controller.KafkaController: [Controller 1]: New
partition creation callback for [log,0]
16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
state machine on Controller 1]: Invoking state change to NewPartition
for partitions [log,0]
16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
fetching metadata [{TopicMetadata for topic log ->
No partition metadata for topic log due to
kafka.common.LeaderNotAvailableException}] for topic [log]: class
kafka.common.LeaderNotAvailableException
16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:31 ERROR async.DefaultEventHandler: Failed to collate
messages by topic, partition due to: Failed to fetch topic metadata
for topic: log
16/01/23 12:49:31 INFO async.DefaultEventHandler: Back off for 100 ms
before retrying send. Remaining retries = 3
16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
machine on controller 1]: Invoking state change to NewReplica for
replicas PartitionAndReplica(log,0,1)
16/01/23 12:49:31 INFO controller.PartitionStateMachine: [Partition
state machine on Controller 1]: Invoking state change to
OnlinePartition for partitions [log,0]
16/01/23 12:49:31 INFO controller.ReplicaStateMachine: [Replica state
machine on controller 1]: Invoking state change to OnlineReplica for
replicas PartitionAndReplica(log,0,1)
16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
Broker 1]: Handling LeaderAndIsr request
Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
-> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
16/01/23 12:49:31 INFO server.ReplicaFetcherManager:
[ReplicaFetcherManager on broker 1] Removing fetcher for partition
[log,0]
16/01/23 12:49:31 INFO log.Log: [Kafka Log on Broker 1], Completed
load of log log-0 with log end offset 0
16/01/23 12:49:31 INFO log.LogManager: [Log Manager on Broker 1]
Created log for partition [log,0] in
/hadoop/yarn/local/usercache/stoffe/appcache/application_1453498444043_0012/container_e29_1453498444043_0012_01_000001/kafka-logs.
16/01/23 12:49:31 WARN server.HighwaterMarkCheckpoint: No
highwatermark file is found. Returning 0 as the highwatermark for
partition [log,0]
16/01/23 12:49:31 INFO server.ReplicaManager: [Replica Manager on
Broker 1]: Handled leader and isr request
Name:LeaderAndIsrRequest;Version:0;Controller:1;ControllerEpoch:1;CorrelationId:6;ClientId:id_1-host_null-port_58668;PartitionState:(log,0)
-> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:1);Leaders:id:1,host:hdfs-ix03.se-ix.delta.prod,port:58668
16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
correlation id 2 for 1 topic(s) Set(log)
16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
hdfs-ix03.se-ix.delta.prod:58668 for producing
16/01/23 12:49:31 INFO admin.AdminUtils$: Topic creation {
"partitions":{ "0":[ 1 ] }, "version":1 }
16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:31 WARN producer.BrokerPartitionInfo: Error while
fetching metadata [{TopicMetadata for topic log ->
No partition metadata for topic log due to
kafka.common.LeaderNotAvailableException}] for topic [log]: class
kafka.common.LeaderNotAvailableException
16/01/23 12:49:31 INFO client.ClientUtils$: Fetching metadata from
broker id:0,host:hdfs-ix03.se-ix.delta.prod,port:58668 with
correlation id 3 for 1 topic(s) Set(log)
16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
hdfs-ix03.se-ix.delta.prod:58668 for producing
16/01/23 12:49:31 INFO producer.SyncProducer: Disconnecting from
hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:31 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:31 INFO producer.SyncProducer: Connected to
hdfs-ix03.se-ix.delta.prod:58668 for producing
16/01/23 12:49:33 INFO impl.AMRMClientImpl: Received new token for :
hdfs-ix03.se-ix.delta.prod:45454
16/01/23 12:49:33 INFO impl.ContainerManagementProtocolProxy: Opening
proxy : hdfs-ix03.se-ix.delta.prod:45454
16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:35 INFO network.Processor: Closing socket connection to
/10.3.24.22.
16/01/23 12:49:39 INFO server.KafkaServer: [Kafka Server 1], Shutting down
16/01/23 12:49:39 INFO server.KafkaZooKeeper: Closing zookeeper client...
16/01/23 12:49:39 INFO zkclient.ZkEventThread: Terminate ZkClient event thread.
16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
1], Shutting down
16/01/23 12:49:39 INFO network.SocketServer: [Socket Server on Broker
1], Shutdown completed
16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
Handler on Broker 1], shutting down
16/01/23 12:49:39 INFO server.KafkaRequestHandlerPool: [Kafka Request
Handler on Broker 1], shutted down completely
16/01/23 12:49:39 INFO utils.KafkaScheduler: Shutdown Kafka scheduler
16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
Broker 1]: Shut down
16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
[ReplicaFetcherManager on broker 1] shutting down
16/01/23 12:49:39 INFO server.ReplicaFetcherManager:
[ReplicaFetcherManager on broker 1] shutdown completed
16/01/23 12:49:39 INFO server.ReplicaManager: [Replica Manager on
Broker 1]: Shutted down completely
16/01/23 12:49:40 INFO controller.RequestSendThread:
[Controller-1-to-broker-1-send-thread], Shutting down
16/01/23 12:49:40 INFO controller.RequestSendThread:
[Controller-1-to-broker-1-send-thread], Stopped
16/01/23 12:49:40 INFO controller.RequestSendThread:
[Controller-1-to-broker-1-send-thread], Shutdown completed
16/01/23 12:49:40 INFO controller.KafkaController: [Controller 1]:
Controller shutdown complete
16/01/23 12:49:40 INFO server.KafkaServer: [Kafka Server 1], Shut down completed
16/01/23 12:49:40 INFO impl.ContainerManagementProtocolProxy: Opening
proxy : hdfs-ix03.se-ix.delta.prod:45454
16/01/23 12:49:40 INFO impl.AMRMClientImpl: Waiting for application to
be successfully unregistered.
16/01/23 12:49:40 INFO producer.SyncProducer: Disconnecting from
hdfs-ix03.se-ix.delta.prod:58668
16/01/23 12:49:40 WARN async.DefaultEventHandler: Failed to send
producer request with correlation id 35 to broker 1 with data for
partitions [log,0]
java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:957)
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:204)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
at kafka.utils.Utils$.read(Unknown Source)
at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
at kafka.network.Receive$class.readCompletely(Unknown Source)
at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
at kafka.network.BlockingChannel.receive(Unknown Source)
at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
Source)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
Source)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
Source)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(Unknown
Source)
at kafka.metrics.KafkaTimer.time(Unknown Source)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(Unknown Source)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(Unknown Source)
at kafka.metrics.KafkaTimer.time(Unknown Source)
at kafka.producer.SyncProducer.send(Unknown Source)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(Unknown
Source)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
Source)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(Unknown
Source)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(Unknown
Source)
at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
at kafka.producer.Producer.send(Unknown Source)
at kafka.javaapi.producer.Producer.send(Unknown Source)
at org.apache.twill.internal.kafka.client.SimpleKafkaPublisher$SimplePreparer.send(SimpleKafkaPublisher.java:122)
at org.apache.twill.internal.logging.KafkaAppender.doPublishLogs(KafkaAppender.java:268)
at org.apache.twill.internal.logging.KafkaAppender.publishLogs(KafkaAppender.java:228)
at org.apache.twill.internal.logging.KafkaAppender.access$700(KafkaAppender.java:66)
at org.apache.twill.internal.logging.KafkaAppender$2.run(KafkaAppender.java:280)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/01/23 12:49:40 INFO async.DefaultEventHandler: Back off for 100 ms
before retrying send. Remaining retries = 3
16/01/23 12:49:40 INFO producer.Producer: Shutting down producer
16/01/23 12:49:40 INFO producer.ProducerPool: Closing all sync producers


On Sat, Jan 23, 2016 at 1:22 AM, Terence Yim <ch...@gmail.com> wrote:
> Hi,
>
> It's due to a very old version of the ASM library that is brought in transitively by hadoop/yarn.
> Please add an exclusion of the asm library to all hadoop dependencies.
>
> <exclusion>
>   <groupId>asm</groupId>
>   <artifactId>asm</artifactId>
> </exclusion>
>
> Terence
>
>
> On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
>> Further adding the following dependencies causes another exception.
>>
>> <dependency>
>>   <groupId>com.google.guava</groupId>
>>   <artifactId>guava</artifactId>
>>   <version>13.0</version>
>> </dependency>
>> <dependency>
>>   <groupId>org.apache.hadoop</groupId>
>>   <artifactId>hadoop-hdfs</artifactId>
>>   <version>2.7.1</version>
>> </dependency>
>>
>> Exception in thread " STARTING"
>> java.lang.IncompatibleClassChangeError: class
>> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
>> has interface org.objectweb.asm.ClassVisitor as super class
>> at java.lang.ClassLoader.defineClass1(Native Method)
>> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> at
>> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
>> at
>> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
>> at
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
>> at
>> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
>> at
>> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
>> at
>> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
>> at
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
>> at
>> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
>> at
>> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
>> at
>> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
>> at
>> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
>> at
>> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
> > Adding those dependencies fails with the following exception.
>> >
>> > Exception in thread "main" java.lang.AbstractMethodError:
>> >
>> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
>> > at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
>> > at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
>> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
>> > at
>> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
>> > at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
>> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>> > at
>> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
>> > at
>> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
>> > at
>> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
>> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.lang.reflect.Method.invoke(Method.java:497)
>> > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>> >
>> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com> wrote:
>> >> Hi,
>> >>
>> If you run it from IDE, you can simply add a dependency on hadoop with
>> >> version 2.7.1. E.g. if you are using Maven, you can add the following to
>> >> your pom.xml dependencies section.
>> >>
>> >> <dependency>
>> >>   <groupId>org.apache.hadoop</groupId>
>> >>   <artifactId>hadoop-yarn-api</artifactId>
>> >>   <version>2.7.1</version>
>> >> </dependency>
>> >> <dependency>
>> >>   <groupId>org.apache.hadoop</groupId>
>> >>   <artifactId>hadoop-yarn-common</artifactId>
>> >>   <version>2.7.1</version>
>> >> </dependency>
>> >> <dependency>
>> >>   <groupId>org.apache.hadoop</groupId>
>> >>   <artifactId>hadoop-yarn-client</artifactId>
>> >>   <version>2.7.1</version>
>> >> </dependency>
>> >> <dependency>
>> >>   <groupId>org.apache.hadoop</groupId>
>> >>   <artifactId>hadoop-common</artifactId>
>> >>   <version>2.7.1</version>
>> >> </dependency>
>> >>
>> >> Terence
>> >>
>> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <st...@gmail.com>
>> >> wrote:
>> >>
>> >>> I run it from IDE right now, but would like to create a command line
>> >>> app eventually.
>> >>>
>> >>> I should clarify that the exception above is thrown on the YARN node,
>> >>> not in the IDE.
>> >>>
>> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
>> >>> > Hi Kristoffer,
>> >>> >
>> >>> > The example itself shouldn't need any modification. However, how do
>> >>> > you run that class? Do you run it from IDE or from command line using
>> >>> > "java" command?
>> >>> >
>> >>> > Terence
>> >>> >
>> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
>> stoffe@gmail.com>
>> >>> wrote:
>> >>> >> Hi Terence,
>> >>> >>
>> >>> >> I'm quite new to Twill and not sure how to do that exactly. Could
>> you
>> >>> >> show me how to modify the following example to do the same?
>> >>> >>
>> >>> >>
>> >>>
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> >>> >>
>> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com>
>> wrote:
>> >>> >>> Hi Kristoffer,
>> >>> >>>
>> >>> >>> Seems like the exception comes from the YARN class
>> "ConverterUtils". I
>> >>> >>> believe you need to start the application with the version 2.7.1 Hadoop
>> >>> >>> Jars. How do you start the twill application? Usually on a cluster
>> with
>> >>> >>> hadoop installed, you can get all the hadoop jars in the classpath
>> by
>> >>> >>> running this:
>> >>> >>>
>> >>> >>> export CP=`hadoop classpath`
>> >>> >>> java -cp .:$CP YourApp ...
>> >>> >>>
>> >>> >>> Assuming your app classes and Twill jars are in the current
>> directory.
>> >>> >>>
>> >>> >>> Terence
>> >>> >>>
>> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
>> stoffe@gmail.com>
>> >>> wrote:
>> >>> >>>> Here's the full stacktrace.
>> >>> >>>>
>> >>> >>>> Exception in thread "main"
>> java.lang.reflect.InvocationTargetException
>> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> >>>> at
>> >>>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >>> >>>> at
>> >>>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>> >>> >>>> at
>> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>> >>> >>>> Caused by: java.lang.RuntimeException:
>> >>> >>>> java.lang.reflect.InvocationTargetException
>> >>> >>>> at
>> com.google.common.base.Throwables.propagate(Throwables.java:160)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>> >>> >>>> ... 5 more
>> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> >>> Method)
>> >>> >>>> at
>> >>>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> >>> >>>> at
>> >>>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> >>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>> >>> >>>> ... 6 more
>> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
>> ContainerId:
>> >>> >>>> container_e25_1453466340022_0004_01_000001
>> >>> >>>> at
>> >>>
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>> >>> >>>> at
>> >>>
>> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>> >>> >>>> ... 11 more
>> >>> >>>> Caused by: java.lang.NumberFormatException: For input string:
>> "e25"
>> >>> >>>> at
>> >>>
>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
>> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
>> >>> >>>> at
>> >>>
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>> >>> >>>> at
>> >>>
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>> >>> >>>> ... 14 more
>> >>> >>>>
>> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
>> >>> stoffe@gmail.com> wrote:
>> >>> >>>>> Hi
>> >>> >>>>>
>> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
>> exception
>> >>> as
>> >>> >>>>> soon as the application starts on the resource manager that
>> tells me
>> >>> >>>>> the container id cannot be parsed.
>> >>> >>>>>
>> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
>> >>> >>>>> container_e04_1427159778706_0002_01_000001
>> >>> >>>>>
>> >>> >>>>> I don't have the exact stacktrace but I recall it failing in
>> >>> >>>>> ConverterUtils.toContainerId because it assumes that the
>> first
>> >>> >>>>> token is an application attempt to be parsed as an integer. This
>> >>> class
>> >>> >>>>> resides in hadoop-yarn-common 2.3.0.
>> >>> >>>>>
>> >>> >>>>> Is there any way to either tweak the container id or make twill
>> use
>> >>> >>>>> the 2.7.1 jar instead?
>> >>> >>>>>
>> >>> >>>>> Cheers,
>> >>> >>>>> -Kristoffer
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>> [1]
>> >>>
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> >>>
>>

Re: Yarn 2.7.1

Posted by Terence Yim <ch...@gmail.com>.
Hi,

It's due to a very old version of the ASM library that is brought in transitively by hadoop/yarn.
Please add an exclusion of the asm library to all hadoop dependencies.

<exclusion>
  <groupId>asm</groupId>
  <artifactId>asm</artifactId>
</exclusion>
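For completeness, Maven requires each <exclusion> to sit inside an <exclusions> wrapper within the dependency it applies to. A sketch of the full form, applied to one of the hadoop artifacts discussed earlier in this thread (the same block would be repeated in every hadoop dependency):

```xml
<!-- Sketch: hadoop-yarn-common with the transitive asm artifact excluded.
     Version and artifact names are the ones already used in this thread. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```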

Terence


On Fri, Jan 22, 2016 at 2:34 PM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> Further adding the following dependencies causes another exception.
>
> <dependency>
>   <groupId>com.google.guava</groupId>
>   <artifactId>guava</artifactId>
>   <version>13.0</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-hdfs</artifactId>
>   <version>2.7.1</version>
> </dependency>
>
> Exception in thread " STARTING"
> java.lang.IncompatibleClassChangeError: class
> org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
> has interface org.objectweb.asm.ClassVisitor as super class
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at
> org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
> at
> org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
> at
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
> at
> org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
> at
> org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
> at
> org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
> at
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
> at
> org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
> at
> org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
> at
> org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
> at
> org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
> at
> com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
> at java.lang.Thread.run(Thread.java:745)
>
> On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
> > Adding those dependencies fails with the following exception.
> >
> > Exception in thread "main" java.lang.AbstractMethodError:
> >
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> > at
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> > at
> org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> > at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> > at
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> > at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> > at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> > at
> org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> > at
> org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> > at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:497)
> > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> >
> > On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com> wrote:
> >> Hi,
> >>
> >> If you run it from IDE, you can simply add a dependency on hadoop with
> >> version 2.7.1. E.g. if you are using Maven, you can add the following to
> >> your pom.xml dependencies section.
> >>
> >> <dependency>
> >>   <groupId>org.apache.hadoop</groupId>
> >>   <artifactId>hadoop-yarn-api</artifactId>
> >>   <version>2.7.1</version>
> >> </dependency>
> >> <dependency>
> >>   <groupId>org.apache.hadoop</groupId>
> >>   <artifactId>hadoop-yarn-common</artifactId>
> >>   <version>2.7.1</version>
> >> </dependency>
> >> <dependency>
> >>   <groupId>org.apache.hadoop</groupId>
> >>   <artifactId>hadoop-yarn-client</artifactId>
> >>   <version>2.7.1</version>
> >> </dependency>
> >> <dependency>
> >>   <groupId>org.apache.hadoop</groupId>
> >>   <artifactId>hadoop-common</artifactId>
> >>   <version>2.7.1</version>
> >> </dependency>
> >>
> >> Terence
> >>
> >> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <st...@gmail.com>
> >> wrote:
> >>
> >>> I run it from IDE right now, but would like to create a command line
> >>> app eventually.
> >>>
> >>> I should clarify that the exception above is thrown on the YARN node,
> >>> not in the IDE.
> >>>
> >>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
> >>> > Hi Kristoffer,
> >>> >
> >>> > The example itself shouldn't need any modification. However, how do
> >>> > you run that class? Do you run it from IDE or from command line using
> >>> > "java" command?
> >>> >
> >>> > Terence
> >>> >
> >>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <
> stoffe@gmail.com>
> >>> wrote:
> >>> >> Hi Terence,
> >>> >>
> >>> >> I'm quite new to Twill and not sure how to do that exactly. Could
> you
> >>> >> show me how to modify the following example to do the same?
> >>> >>
> >>> >>
> >>>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >>> >>
> >>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com>
> wrote:
> >>> >>> Hi Kristoffer,
> >>> >>>
> >>> >>> Seems like the exception comes from the YARN class
> "ConverterUtils". I
> >>> >>> believe you need to start the application with the version 2.7.1 Hadoop
> >>> >>> Jars. How do you start the twill application? Usually on a cluster
> with
> >>> >>> hadoop installed, you can get all the hadoop jars in the classpath
> by
> >>> >>> running this:
> >>> >>>
> >>> >>> export CP=`hadoop classpath`
> >>> >>> java -cp .:$CP YourApp ...
> >>> >>>
> >>> >>> Assuming your app classes and Twill jars are in the current
> directory.
> >>> >>>
> >>> >>> Terence
> >>> >>>
> >>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <
> stoffe@gmail.com>
> >>> wrote:
> >>> >>>> Here's the full stacktrace.
> >>> >>>>
> >>> >>>> Exception in thread "main"
> java.lang.reflect.InvocationTargetException
> >>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> >>>> at
> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>> >>>> at
> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> >>> >>>> at
> org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> >>> >>>> Caused by: java.lang.RuntimeException:
> >>> >>>> java.lang.reflect.InvocationTargetException
> >>> >>>> at
> com.google.common.base.Throwables.propagate(Throwables.java:160)
> >>> >>>> at
> >>>
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> >>> >>>> at
> >>>
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> >>> >>>> ... 5 more
> >>> >>>> Caused by: java.lang.reflect.InvocationTargetException
> >>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> >>> Method)
> >>> >>>> at
> >>>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>> >>>> at
> >>>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >>> >>>> at
> >>>
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> >>> >>>> ... 6 more
> >>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid
> ContainerId:
> >>> >>>> container_e25_1453466340022_0004_01_000001
> >>> >>>> at
> >>>
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> >>> >>>> at
> >>>
> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> >>> >>>> at
> >>>
> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> >>> >>>> at
> >>>
> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> >>> >>>> ... 11 more
> >>> >>>> Caused by: java.lang.NumberFormatException: For input string:
> "e25"
> >>> >>>> at
> >>>
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >>> >>>> at java.lang.Long.parseLong(Long.java:589)
> >>> >>>> at java.lang.Long.parseLong(Long.java:631)
> >>> >>>> at
> >>>
> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> >>> >>>> at
> >>>
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> >>> >>>> ... 14 more
> >>> >>>>
> >>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
> >>> stoffe@gmail.com> wrote:
> >>> >>>>> Hi
> >>> >>>>>
> >>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an
> exception
> >>> as
> >>> >>>>> soon as the application starts on the resource manager that
> tells me
> >>> >>>>> the container id cannot be parsed.
> >>> >>>>>
> >>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
> >>> >>>>> container_e04_1427159778706_0002_01_000001
> >>> >>>>>
> >>> >>>>> I don't have the exact stacktrace but I recall it failing in
> >>> >>>>> ConverterUtils.toContainerId because it assumes that the
> first
> >>> >>>>> token is an application attempt to be parsed as an integer. This
> >>> class
> >>> >>>>> resides in hadoop-yarn-common 2.3.0.
> >>> >>>>>
> >>> >>>>> Is there any way to either tweak the container id or make twill
> use
> >>> >>>>> the 2.7.1 jar instead?
> >>> >>>>>
> >>> >>>>> Cheers,
> >>> >>>>> -Kristoffer
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> [1]
> >>>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >>>
>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Further adding the following dependencies causes another exception.

<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>13.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.7.1</version>
</dependency>

Exception in thread " STARTING"
java.lang.IncompatibleClassChangeError: class
org.apache.twill.internal.utils.Dependencies$DependencyClassVisitor
has interface org.objectweb.asm.ClassVisitor as super class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.twill.internal.utils.Dependencies.findClassDependencies(Dependencies.java:86)
at org.apache.twill.internal.ApplicationBundler.findDependencies(ApplicationBundler.java:198)
at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:155)
at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:126)
at org.apache.twill.yarn.YarnTwillPreparer.createAppMasterJar(YarnTwillPreparer.java:402)
at org.apache.twill.yarn.YarnTwillPreparer.access$200(YarnTwillPreparer.java:108)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:299)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:289)
at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:97)
at org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:76)
at org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:175)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:745)
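This IncompatibleClassChangeError points at two ASM versions on the classpath: in ASM 3.x, `org.objectweb.asm.ClassVisitor` is an interface, while Twill's `Dependencies$DependencyClassVisitor` extends the abstract class it became in ASM 4+. One direction to try, sketched here as an assumption rather than a verified fix (the 3.x jar often arrives transitively through the Hadoop artifacts; confirm the actual source with `mvn dependency:tree` first):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.1</version>
  <exclusions>
    <!-- assumed carrier of the old ASM 3.x jar; adjust to whatever
         dependency:tree actually shows for your build -->
    <exclusion>
      <groupId>asm</groupId>
      <artifactId>asm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```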

On Fri, Jan 22, 2016 at 11:32 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
> Adding those dependencies fails with the following exception.
>
> Exception in thread "main" java.lang.AbstractMethodError:
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
> at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
> at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> at org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
> at org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
> at deephacks.BundledJarExample.main(BundledJarExample.java:70)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>
> On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com> wrote:
>> Hi,
>>
>> If you run it from the IDE, you can simply add a dependency on Hadoop with
>> version 2.7.1. E.g. if you are using Maven, you can add the following to
>> your pom.xml dependencies section.
>>
>> <dependency>
>>   <groupId>org.apache.hadoop</groupId>
>>   <artifactId>hadoop-yarn-api</artifactId>
>>   <version>2.7.1</version>
>> </dependency>
>> <dependency>
>>   <groupId>org.apache.hadoop</groupId>
>>   <artifactId>hadoop-yarn-common</artifactId>
>>   <version>2.7.1</version>
>> </dependency>
>> <dependency>
>>   <groupId>org.apache.hadoop</groupId>
>>   <artifactId>hadoop-yarn-client</artifactId>
>>   <version>2.7.1</version>
>> </dependency>
>> <dependency>
>>   <groupId>org.apache.hadoop</groupId>
>>   <artifactId>hadoop-common</artifactId>
>>   <version>2.7.1</version>
>> </dependency>
>>
>> Terence
>>
>> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
>>
>>> I run it from the IDE right now, but would like to create a command-line
>>> app eventually.
>>>
>>> I should clarify that the exception above is thrown on the YARN node,
>>> not in the IDE.
>>>
>>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
>>> > Hi Kristoffer,
>>> >
>>> > The example itself shouldn't need any modification. However, how do
>>> > you run that class? Do you run it from the IDE or from the command line
>>> > using the "java" command?
>>> >
>>> > Terence
>>> >
>>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <st...@gmail.com>
>>> wrote:
>>> >> Hi Terence,
>>> >>
>>> >> I'm quite new to Twill and not sure how to do that exactly. Could you
>>> >> show me how to modify the following example to do the same?
>>> >>
>>> >>
>>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>> >>
>>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
>>> >>> Hi Kristoffer,
>>> >>>
>>> >>> Seems like the exception comes from the YARN class "ConverterUtils". I
>>> >>> believe you need to start the application with the version 2.7.1 Hadoop
>>> >>> jars. How do you start the Twill application? Usually on a cluster with
>>> >>> hadoop installed, you can get all the hadoop jars in the classpath by
>>> >>> running this:
>>> >>>
>>> >>> export CP=`hadoop classpath`
>>> >>> java -cp .:$CP YourApp ...
>>> >>>
>>> >>> Assuming your app classes and Twill jars are in the current directory.
>>> >>>
>>> >>> Terence
>>> >>>
>>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com>
>>> wrote:
>>> >>>> Here's the full stacktrace.
>>> >>>>
>>> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> >>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>>> >>>> Caused by: java.lang.RuntimeException:
>>> >>>> java.lang.reflect.InvocationTargetException
>>> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>>> >>>> at
>>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>>> >>>> at
>>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>>> >>>> ... 5 more
>>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> >>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> >>>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> >>>> at
>>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>>> >>>> ... 6 more
>>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
>>> >>>> container_e25_1453466340022_0004_01_000001
>>> >>>> at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>>> >>>> at
>>> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>>> >>>> at
>>> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>>> >>>> at
>>> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>>> >>>> ... 11 more
>>> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>>> >>>> at
>>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>> >>>> at java.lang.Long.parseLong(Long.java:589)
>>> >>>> at java.lang.Long.parseLong(Long.java:631)
>>> >>>> at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>> >>>> at
>>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>> >>>> ... 14 more
>>> >>>>
>>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
>>> stoffe@gmail.com> wrote:
>>> >>>>> Hi
>>> >>>>>
>>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception
>>> as
>>> >>>>> soon as the application starts on the resource manager that tells me
>>> >>>>> the container id cannot be parsed.
>>> >>>>>
>>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
>>> >>>>> container_e04_1427159778706_0002_01_000001
>>> >>>>>
>>> >>>>> I don't have the exact stacktrace but I recall it failing in
>>> >>>>> ConverterUtils.toContainerId because it assumes that the first
>>> >>>>> token is an application attempt to be parsed as an integer. This
>>> class
>>> >>>>> resides in hadoop-yarn-common 2.3.0.
>>> >>>>>
>>> >>>>> Is there any way to either tweak the container id or make twill use
>>> >>>>> the 2.7.1 jar instead?
>>> >>>>>
>>> >>>>> Cheers,
>>> >>>>> -Kristoffer
>>> >>>>>
>>> >>>>>
>>> >>>>> [1]
>>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Adding those dependencies fails with the following exception.

Exception in thread "main" java.lang.AbstractMethodError:
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:149)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:569)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:512)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.twill.yarn.YarnTwillRunnerService.createDefaultLocationFactory(YarnTwillRunnerService.java:615)
at org.apache.twill.yarn.YarnTwillRunnerService.<init>(YarnTwillRunnerService.java:149)
at deephacks.BundledJarExample.main(BundledJarExample.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
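An AbstractMethodError like this one is the usual symptom of mixed Hadoop versions on the classpath: the `ConfiguredFailoverProxyProvider` class being loaded predates the `FailoverProxyProvider.getProxy()` signature that the 2.7.1 `RetryInvocationHandler` calls. A common guard, shown here only as a sketch, is to declare the version once as a Maven property so every Hadoop artifact resolves to the same release:

```xml
<properties>
  <hadoop.version>2.7.1</hadoop.version>
</properties>

<!-- repeat for hadoop-common, hadoop-yarn-api, hadoop-yarn-common, etc. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
</dependency>
```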

On Fri, Jan 22, 2016 at 10:58 PM, Terence Yim <ch...@gmail.com> wrote:
> Hi,
>
> If you run it from the IDE, you can simply add a dependency on Hadoop with
> version 2.7.1. E.g. if you are using Maven, you can add the following to
> your pom.xml dependencies section.
>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-yarn-api</artifactId>
>   <version>2.7.1</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-yarn-common</artifactId>
>   <version>2.7.1</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-yarn-client</artifactId>
>   <version>2.7.1</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-common</artifactId>
>   <version>2.7.1</version>
> </dependency>
>
> Terence
>
> On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
>> I run it from the IDE right now, but would like to create a command-line
>> app eventually.
>>
>> I should clarify that the exception above is thrown on the YARN node,
>> not in the IDE.
>>
>> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
>> > Hi Kristoffer,
>> >
>> > The example itself shouldn't need any modification. However, how do
>> > you run that class? Do you run it from the IDE or from the command line
>> > using the "java" command?
>> >
>> > Terence
>> >
>> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
>> >> Hi Terence,
>> >>
>> >> I'm quite new to Twill and not sure how to do that exactly. Could you
>> >> show me how to modify the following example to do the same?
>> >>
>> >>
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>> >>
>> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
>> >>> Hi Kristoffer,
>> >>>
>> >>> Seems like the exception comes from the YARN class "ConverterUtils". I
>> >>> believe you need to start the application with the version 2.7.1 Hadoop
>> >>> jars. How do you start the Twill application? Usually on a cluster with
>> >>> hadoop installed, you can get all the hadoop jars in the classpath by
>> >>> running this:
>> >>>
>> >>> export CP=`hadoop classpath`
>> >>> java -cp .:$CP YourApp ...
>> >>>
>> >>> Assuming your app classes and Twill jars are in the current directory.
>> >>>
>> >>> Terence
>> >>>
>> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com>
>> wrote:
>> >>>> Here's the full stacktrace.
>> >>>>
>> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >>>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
>> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>> >>>> Caused by: java.lang.RuntimeException:
>> >>>> java.lang.reflect.InvocationTargetException
>> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>> >>>> at
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>> >>>> at
>> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>> >>>> ... 5 more
>> >>>> Caused by: java.lang.reflect.InvocationTargetException
>> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> >>>> at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> >>>> at
>> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>> >>>> ... 6 more
>> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
>> >>>> container_e25_1453466340022_0004_01_000001
>> >>>> at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>> >>>> at
>> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>> >>>> at
>> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>> >>>> at
>> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>> >>>> ... 11 more
>> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>> >>>> at
>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> >>>> at java.lang.Long.parseLong(Long.java:589)
>> >>>> at java.lang.Long.parseLong(Long.java:631)
>> >>>> at
>> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>> >>>> at
>> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>> >>>> ... 14 more
>> >>>>
>> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
>> stoffe@gmail.com> wrote:
>> >>>>> Hi
>> >>>>>
>> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception
>> as
>> >>>>> soon as the application starts on the resource manager that tells me
>> >>>>> the container id cannot be parsed.
>> >>>>>
>> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
>> >>>>> container_e04_1427159778706_0002_01_000001
>> >>>>>
>> >>>>> I don't have the exact stacktrace but I recall it failing in
>> >>>>> ConverterUtils.toContainerId because it assumes that the first
>> >>>>> token is an application attempt to be parsed as an integer. This
>> class
>> >>>>> resides in hadoop-yarn-common 2.3.0.
>> >>>>>
>> >>>>> Is there any way to either tweak the container id or make twill use
>> >>>>> the 2.7.1 jar instead?
>> >>>>>
>> >>>>> Cheers,
>> >>>>> -Kristoffer
>> >>>>>
>> >>>>>
>> >>>>> [1]
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>

Re: Yarn 2.7.1

Posted by Terence Yim <ch...@gmail.com>.
Hi,

If you run it from the IDE, you can simply add a dependency on Hadoop with
version 2.7.1. E.g. if you are using Maven, you can add the following to
your pom.xml dependencies section.

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-api</artifactId>
  <version>2.7.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-common</artifactId>
  <version>2.7.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-client</artifactId>
  <version>2.7.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.1</version>
</dependency>

Terence

On Fri, Jan 22, 2016 at 12:47 PM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> I run it from the IDE right now, but would like to create a command-line
> app eventually.
>
> I should clarify that the exception above is thrown on the YARN node,
> not in the IDE.
>
> On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
> > Hi Kristoffer,
> >
> > The example itself shouldn't need any modification. However, how do
> > you run that class? Do you run it from the IDE or from the command line
> > using the "java" command?
> >
> > Terence
> >
> > On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
> >> Hi Terence,
> >>
> >> I'm quite new to Twill and not sure how to do that exactly. Could you
> >> show me how to modify the following example to do the same?
> >>
> >>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
> >>
> >> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
> >>> Hi Kristoffer,
> >>>
> >>> Seems like the exception comes from the YARN class "ConverterUtils". I
> >>> believe you need to start the application with the version 2.7.1 Hadoop
> >>> jars. How do you start the Twill application? Usually on a cluster with
> >>> hadoop installed, you can get all the hadoop jars in the classpath by
> >>> running this:
> >>>
> >>> export CP=`hadoop classpath`
> >>> java -cp .:$CP YourApp ...
> >>>
> >>> Assuming your app classes and Twill jars are in the current directory.
> >>>
> >>> Terence
> >>>
> >>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
> >>>> Here's the full stacktrace.
> >>>>
> >>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>>> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>> at java.lang.reflect.Method.invoke(Method.java:497)
> >>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> >>>> Caused by: java.lang.RuntimeException:
> >>>> java.lang.reflect.InvocationTargetException
> >>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
> >>>> at
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> >>>> at
> org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> >>>> ... 5 more
> >>>> Caused by: java.lang.reflect.InvocationTargetException
> >>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>>> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >>>> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >>>> at
> org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> >>>> ... 6 more
> >>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
> >>>> container_e25_1453466340022_0004_01_000001
> >>>> at
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> >>>> at
> org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> >>>> at
> org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> >>>> at
> org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> >>>> ... 11 more
> >>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
> >>>> at
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >>>> at java.lang.Long.parseLong(Long.java:589)
> >>>> at java.lang.Long.parseLong(Long.java:631)
> >>>> at
> org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> >>>> at
> org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> >>>> ... 14 more
> >>>>
> >>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <
> stoffe@gmail.com> wrote:
> >>>>> Hi
> >>>>>
> >>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception
> as
> >>>>> soon as the application starts on the resource manager that tells me
> >>>>> the container id cannot be parsed.
> >>>>>
> >>>>> java.lang.IllegalArgumentException: Invalid containerId:
> >>>>> container_e04_1427159778706_0002_01_000001
> >>>>>
> >>>>> I don't have the exact stacktrace but I recall it failing in
> >>>>> ConverterUtils.toContainerId because it assumes that the first
> >>>>> token is an application attempt to be parsed as an integer. This
> class
> >>>>> resides in hadoop-yarn-common 2.3.0.
> >>>>>
> >>>>> Is there any way to either tweak the container id or make twill use
> >>>>> the 2.7.1 jar instead?
> >>>>>
> >>>>> Cheers,
> >>>>> -Kristoffer
> >>>>>
> >>>>>
> >>>>> [1]
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
I run it from the IDE right now, but would like to create a command-line
app eventually.

I should clarify that the exception above is thrown on the YARN node,
not in the IDE.

On Fri, Jan 22, 2016 at 9:32 PM, Terence Yim <ch...@gmail.com> wrote:
> Hi Kristoffer,
>
> The example itself shouldn't need any modification. However, how do
> you run that class? Do you run it from the IDE or from the command line
> using the "java" command?
>
> Terence
>
> On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
>> Hi Terence,
>>
>> I'm quite new to Twill and not sure how to do that exactly. Could you
>> show me how to modify the following example to do the same?
>>
>> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>>
>> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
>>> Hi Kristoffer,
>>>
>>> Seems like the exception comes from the YARN class "ConverterUtils". I
>>> believe you need to start the application with the version 2.7.1 Hadoop
>>> jars. How do you start the Twill application? Usually on a cluster with
>>> hadoop installed, you can get all the hadoop jars in the classpath by
>>> running this:
>>>
>>> export CP=`hadoop classpath`
>>> java -cp .:$CP YourApp ...
>>>
>>> Assuming your app classes and Twill jars are in the current directory.
>>>
>>> Terence
>>>
>>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com> wrote:
>>>> Here's the full stacktrace.
>>>>
>>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>>>> Caused by: java.lang.RuntimeException:
>>>> java.lang.reflect.InvocationTargetException
>>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>>>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>>>> ... 5 more
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>>>> ... 6 more
>>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
>>>> container_e25_1453466340022_0004_01_000001
>>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>>>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>>>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>>>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>>>> ... 11 more
>>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>>>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>>> at java.lang.Long.parseLong(Long.java:589)
>>>> at java.lang.Long.parseLong(Long.java:631)
>>>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>>> ... 14 more
>>>>
>>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
>>>>> Hi
>>>>>
>>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception as
>>>>> soon as the application starts on the resource manager that tells me
>>>>> the container id cannot be parsed.
>>>>>
>>>>> java.lang.IllegalArgumentException: Invalid containerId:
>>>>> container_e04_1427159778706_0002_01_000001
>>>>>
>>>>> I don't have the exact stacktrace but I recall it failing in
>>>>> ConverterUtils.toContainerId because it assumes that the first
>>>>> token is an application attempt to be parsed as an integer. This class
>>>>> resides in hadoop-yarn-common 2.3.0.
>>>>>
>>>>> Is there any way to either tweak the container id or make twill use
>>>>> the 2.7.1 jar instead?
>>>>>
>>>>> Cheers,
>>>>> -Kristoffer
>>>>>
>>>>>
>>>>> [1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java

Re: Yarn 2.7.1

Posted by Terence Yim <ch...@gmail.com>.
Hi Kristoffer,

The example itself shouldn't need any modification. However, how do
you run that class? Do you run it from the IDE or from the command line
using the "java" command?

Terence

On Fri, Jan 22, 2016 at 12:16 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
> Hi Terence,
>
> I'm quite new to Twill and not sure how to do that exactly. Could you
> show me how to modify the following example to do the same?
>
> https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
>
> On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
>> Hi Kristoffer,
>>
>> Seems like the exception comes from the YARN class "ConverterUtils". I
>> believe you need to start the application with the version 2.7.1 Hadoop
>> jars. How do you start the Twill application? Usually on a cluster with
>> hadoop installed, you can get all the hadoop jars in the classpath by
>> running this:
>>
>> export CP=`hadoop classpath`
>> java -cp .:$CP YourApp ...
>>
>> Assuming your app classes and Twill jars are in the current directory.
>>
>> Terence
>>
>> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com> wrote:
>>> Here's the full stacktrace.
>>>
>>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>>> Caused by: java.lang.RuntimeException:
>>> java.lang.reflect.InvocationTargetException
>>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>>> ... 5 more
>>> Caused by: java.lang.reflect.InvocationTargetException
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>>> ... 6 more
>>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
>>> container_e25_1453466340022_0004_01_000001
>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>>> ... 11 more
>>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>> at java.lang.Long.parseLong(Long.java:589)
>>> at java.lang.Long.parseLong(Long.java:631)
>>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>>> ... 14 more
>>>
>>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
>>>> Hi
>>>>
>>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception as
>>>> soon as the application starts on the resource manager that tells me
>>>> the container id cannot be parsed.
>>>>
>>>> java.lang.IllegalArgumentException: Invalid containerId:
>>>> container_e04_1427159778706_0002_01_000001
>>>>
>>>> I don't have the exact stacktrace, but I recall it failing in
>>>> ConverterUtils.toContainerId because it assumes that the first
>>>> token is part of an application attempt id to be parsed as an
>>>> integer. This class resides in hadoop-yarn-common 2.3.0.
>>>>
>>>> Is there any way to either tweak the container id or make twill use
>>>> the 2.7.1 jar instead?
>>>>
>>>> Cheers,
>>>> -Kristoffer
>>>>
>>>>
>>>> [1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java
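
The parse failure described above can be reproduced without a cluster. The sketch below mirrors the assumption the old parser makes (that the token after the "container_" prefix is numeric); the class name `ContainerIdParseDemo` and the split-based logic are illustrative, not Hadoop's actual implementation:

```java
// Sketch of why a pre-2.6 parser rejects epoch-prefixed container IDs.
// YARN 2.6+ node managers prepend an epoch token ("e25") to the ID, which
// older hadoop-yarn-common versions try to parse as a number.
public class ContainerIdParseDemo {
    public static void main(String[] args) {
        String newStyle = "container_e25_1453466340022_0004_01_000001";
        String[] tokens = newStyle.split("_");
        try {
            // An old parser expects tokens[1] to be a numeric part of the
            // application attempt id, so "e25" blows up here.
            Long.parseLong(tokens[1]);
        } catch (NumberFormatException e) {
            System.out.println("Parse failed on token: " + tokens[1]);
            // prints: Parse failed on token: e25
        }
    }
}
```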

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Hi Terence,

I'm quite new to Twill and not sure how to do that exactly. Could you
show me how to modify the following example to do the same?

https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java

On Fri, Jan 22, 2016 at 7:24 PM, Terence Yim <ch...@gmail.com> wrote:
> Hi Kristoffer,
>
> Seems like the exception comes from the YARN class "ConverterUtils". I
> believe you need to start the application with the 2.7.1 Hadoop
> jars. How do you start the Twill application? Usually on a cluster with
> Hadoop installed, you can get all the Hadoop jars onto the classpath by
> running this:
>
> export CP=`hadoop classpath`
> java -cp .:$CP YourApp ...
>
> Assuming your app classes and Twill jars are in the current directory.
>
> Terence
>
> On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com> wrote:
>> Here's the full stacktrace.
>>
>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:497)
>> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
>> Caused by: java.lang.RuntimeException:
>> java.lang.reflect.InvocationTargetException
>> at com.google.common.base.Throwables.propagate(Throwables.java:160)
>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
>> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
>> ... 5 more
>> Caused by: java.lang.reflect.InvocationTargetException
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
>> ... 6 more
>> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
>> container_e25_1453466340022_0004_01_000001
>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
>> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
>> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
>> ... 11 more
>> Caused by: java.lang.NumberFormatException: For input string: "e25"
>> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> at java.lang.Long.parseLong(Long.java:589)
>> at java.lang.Long.parseLong(Long.java:631)
>> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>> ... 14 more
>>
>> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
>>> Hi
>>>
>>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception as
>>> soon as the application starts on the resource manager that tells me
>>> the container id cannot be parsed.
>>>
>>> java.lang.IllegalArgumentException: Invalid containerId:
>>> container_e04_1427159778706_0002_01_000001
>>>
>>> I don't have the exact stacktrace, but I recall it failing in
>>> ConverterUtils.toContainerId because it assumes that the first
>>> token is part of an application attempt id to be parsed as an
>>> integer. This class resides in hadoop-yarn-common 2.3.0.
>>>
>>> Is there any way to either tweak the container id or make twill use
>>> the 2.7.1 jar instead?
>>>
>>> Cheers,
>>> -Kristoffer
>>>
>>>
>>> [1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java

Re: Yarn 2.7.1

Posted by Terence Yim <ch...@gmail.com>.
Hi Kristoffer,

Seems like the exception comes from the YARN class "ConverterUtils". I
believe you need to start the application with the 2.7.1 Hadoop
jars. How do you start the Twill application? Usually on a cluster with
Hadoop installed, you can get all the Hadoop jars onto the classpath by
running this:

export CP=`hadoop classpath`
java -cp .:$CP YourApp ...

Assuming your app classes and Twill jars are in the current directory.

Terence
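
Since this advice hinges on which hadoop-yarn-common jar actually wins on the classpath, one quick way to verify is to ask the class loader where ConverterUtils is loaded from. This is a hypothetical diagnostic sketch (the class name `WhichJar` is ours, not part of Twill or Hadoop); it uses only the standard `ClassLoader.getResource` call:

```java
// Diagnostic sketch: report which jar supplies ConverterUtils at runtime.
// If this prints a hadoop-yarn-common-2.3.0 jar URL, the old class is
// shadowing the 2.7.1 one; if it prints null, the class is not on the
// classpath at all.
public class WhichJar {
    public static void main(String[] args) {
        String resource = "org/apache/hadoop/yarn/util/ConverterUtils.class";
        java.net.URL location = WhichJar.class.getClassLoader().getResource(resource);
        System.out.println(location);
    }
}
```

Running this with the `hadoop classpath` output prepended (as in the launch command above) should show the 2.7.1 jar if the classpath is assembled correctly.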

On Fri, Jan 22, 2016 at 4:54 AM, Kristoffer Sjögren <st...@gmail.com> wrote:
> Here's the full stacktrace.
>
> Exception in thread "main" java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
> Caused by: java.lang.RuntimeException:
> java.lang.reflect.InvocationTargetException
> at com.google.common.base.Throwables.propagate(Throwables.java:160)
> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
> at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
> ... 5 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
> ... 6 more
> Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
> container_e25_1453466340022_0004_01_000001
> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
> at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
> at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
> at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
> ... 11 more
> Caused by: java.lang.NumberFormatException: For input string: "e25"
> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Long.parseLong(Long.java:589)
> at java.lang.Long.parseLong(Long.java:631)
> at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
> at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
> ... 14 more
>
> On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
>> Hi
>>
>> I'm trying the basic example [1] on yarn 2.7.1 but get an exception as
>> soon as the application starts on the resource manager that tells me
>> the container id cannot be parsed.
>>
>> java.lang.IllegalArgumentException: Invalid containerId:
>> container_e04_1427159778706_0002_01_000001
>>
>> I don't have the exact stacktrace, but I recall it failing in
>> ConverterUtils.toContainerId because it assumes that the first
>> token is part of an application attempt id to be parsed as an
>> integer. This class resides in hadoop-yarn-common 2.3.0.
>>
>> Is there any way to either tweak the container id or make twill use
>> the 2.7.1 jar instead?
>>
>> Cheers,
>> -Kristoffer
>>
>>
>> [1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java

Re: Yarn 2.7.1

Posted by Kristoffer Sjögren <st...@gmail.com>.
Here's the full stacktrace.

Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.twill.launcher.TwillLauncher.main(TwillLauncher.java:89)
Caused by: java.lang.RuntimeException:
java.lang.reflect.InvocationTargetException
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:61)
at org.apache.twill.internal.appmaster.ApplicationMasterMain.main(ApplicationMasterMain.java:77)
... 5 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.twill.internal.yarn.VersionDetectYarnAMClientFactory.create(VersionDetectYarnAMClientFactory.java:58)
... 6 more
Caused by: java.lang.IllegalArgumentException: Invalid ContainerId:
container_e25_1453466340022_0004_01_000001
at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
at org.apache.twill.internal.yarn.AbstractYarnAMClient.<init>(AbstractYarnAMClient.java:83)
at org.apache.twill.internal.yarn.Hadoop21YarnAMClient.<init>(Hadoop21YarnAMClient.java:65)
at org.apache.twill.internal.yarn.Hadoop22YarnAMClient.<init>(Hadoop22YarnAMClient.java:34)
... 11 more
Caused by: java.lang.NumberFormatException: For input string: "e25"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
... 14 more

On Thu, Jan 21, 2016 at 10:59 PM, Kristoffer Sjögren <st...@gmail.com> wrote:
> Hi
>
> I'm trying the basic example [1] on yarn 2.7.1 but get an exception as
> soon as the application starts on the resource manager that tells me
> the container id cannot be parsed.
>
> java.lang.IllegalArgumentException: Invalid containerId:
> container_e04_1427159778706_0002_01_000001
>
> I don't have the exact stacktrace, but I recall it failing in
> ConverterUtils.toContainerId because it assumes that the first
> token is part of an application attempt id to be parsed as an
> integer. This class resides in hadoop-yarn-common 2.3.0.
>
> Is there any way to either tweak the container id or make twill use
> the 2.7.1 jar instead?
>
> Cheers,
> -Kristoffer
>
>
> [1] https://github.com/apache/incubator-twill/blob/master/twill-examples/yarn/src/main/java/org/apache/twill/example/yarn/BundledJarExample.java