Posted to users@apex.apache.org by rohit garg <ro...@gmail.com> on 2017/05/12 12:00:01 UTC

Fwd: Hdfs + apex-core

---------- Forwarded message ----------
From: "rohit garg" <ro...@gmail.com>
Date: 12 May 2017 14:02
Subject: Hdfs + apex-core
To: <Us...@apex.apache.org>
Cc:

I have installed Apache Apex core, but when I submit an app to run on YARN it
tries to connect to 0.0.0.0:8032.

Re: Hdfs + apex-core

Posted by nikhilrp <ni...@gmail.com>.
Hello Chaitanya,

You are right, it is working in cluster mode. We were testing in local mode.

Thanks,
Nikhil

On 06-Jun-2017 12:48 AM, "Chaitanya Chebolu [via Apache Apex Users list]" <
ml+s78494n1680h55@n6.nabble.com> wrote:

> Rohit,
>
>   I think the issue is that you are launching the app in local mode. Could
> you please try to launch the app as follows:
> launch /home/apex/kafka2hdfs-1.0-SNAPSHOT.apa
>
> On Tue, Jun 6, 2017 at 12:37 AM, Guilherme Hott wrote:
>
>> Hi, I saw that your ERROR is a ClassCastException, and since you are
>> consuming multiple topics, is there a chance that the topics don't use the
>> same class?
>>
>> On Mon, Jun 5, 2017 at 10:14 AM, userguy wrote:
>>
>>> Did anyone get a chance to look at the code,
>>>
>>> or do we have a template to read from multiple topics in Kafka?
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Guilherme Hott*
>> *Software Engineer*
>> Skype: guilhermehott
>> @guilhermehott
>> https://www.linkedin.com/in/guilhermehott
>>
>>
>
>
> --
>
> *Chaitanya*
>
> Software Engineer
>
> E: [hidden email] | Twitter: @chaithu1403
>
> www.datatorrent.com  |  apex.apache.org
>
>
>
>





Re: Hdfs + apex-core

Posted by Chaitanya Chebolu <ch...@datatorrent.com>.
Rohit,

  I think the issue is that you are launching the app in local mode. Could
you please try to launch the app as follows:
launch /home/apex/kafka2hdfs-1.0-SNAPSHOT.apa
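For reference, the only difference is the -local flag of the Apex CLI launch
command (the .apa path is just the same one as above):

  launch -local /home/apex/kafka2hdfs-1.0-SNAPSHOT.apa   (local mode: runs inside the CLI process, no YARN)
  launch /home/apex/kafka2hdfs-1.0-SNAPSHOT.apa          (cluster mode: submits the app to the YARN ResourceManager)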

On Tue, Jun 6, 2017 at 12:37 AM, Guilherme Hott <gu...@gmail.com>
wrote:

> Hi, I saw that your ERROR is a ClassCastException, and since you are
> consuming multiple topics, is there a chance that the topics don't use the
> same class?
>
> On Mon, Jun 5, 2017 at 10:14 AM, userguy <ro...@gmail.com> wrote:
>
>> Did anyone get a chance to look at the code,
>>
>> or do we have a template to read from multiple topics in Kafka?
>>
>>
>>
>>
>
>
>
> --
> *Guilherme Hott*
> *Software Engineer*
> Skype: guilhermehott
> @guilhermehott
> https://www.linkedin.com/in/guilhermehott
>
>


-- 

*Chaitanya*

Software Engineer

E: chaitanya@datatorrent.com | Twitter: @chaithu1403

www.datatorrent.com  |  apex.apache.org

Re: Hdfs + apex-core

Posted by Guilherme Hott <gu...@gmail.com>.
Hi, I saw that your ERROR is a ClassCastException, and since you are
consuming multiple topics, is there a chance that the topics don't use the
same class?

On Mon, Jun 5, 2017 at 10:14 AM, userguy <ro...@gmail.com> wrote:

> Did anyone get a chance to look at the code,
>
> or do we have a template to read from multiple topics in Kafka?
>
>
>
>



-- 
*Guilherme Hott*
*Software Engineer*
Skype: guilhermehott
@guilhermehott
https://www.linkedin.com/in/guilhermehott

Re: Hdfs + apex-core

Posted by Vlad Rozov <v....@datatorrent.com>.
Do you have partition.assignment.strategy set anywhere in the Kafka 
operator properties?
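If you are not sure, one quick way to check, assuming the operator properties
live in XML files under src/main/resources and in the Hadoop conf directory:

  grep -rn "partition.assignment.strategy" src/main/resources "$HADOOP_CONF_DIR"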

Thank you,

Vlad

On 6/5/17 10:14, userguy wrote:
> Did anyone get a chance to look at the code,
>
> or do we have a template to read from multiple topics in Kafka?
>
>
>


Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Did anyone get a chance to look at the code,

or do we have a template to read from multiple topics in Kafka?




Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
I am attaching the code and the error we get when we try to consume from multiple topics:

kafka2hdfs.zip
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1669/kafka2hdfs.zip>  
kafkalog.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1669/kafkalog.txt>  





Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Hello Team,

Your classpath has two versions of the httpclient and httpcore libraries, and
one of these is causing the issue:
httpclient-4.2.5.jar
httpclient-4.3.5.jar
httpcore-4.2.5.jar
httpcore-4.3.2.jar

We removed httpclient-4.2.5.jar and httpcore-4.2.5.jar from the Hadoop
classpath; now the error is:

2017-05-23 19:23:30,820 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-23 19:24:01,824 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-23 19:24:02,826 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-23 19:24:03,827 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-23 19:24:04,828 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS) 




Re: Hdfs + apex-core

Posted by AJAY GUPTA <aj...@gmail.com>.
Hi Rohit,

Your classpath has two versions of the httpclient and httpcore libraries, and
one of these is causing the issue:
httpclient-4.2.5.jar
httpclient-4.3.5.jar
httpcore-4.2.5.jar
httpcore-4.3.2.jar
You need to have only one version of each of these libraries. Depending on
your project, you will have to either exclude the older versions or use
shading to solve this issue.
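To see which of your dependencies pull in the conflicting versions, a filtered
dependency tree is usually enough (the filter below assumes the usual
org.apache.httpcomponents group id for these jars):

  mvn dependency:tree -Dincludes=org.apache.httpcomponents

From that output you can add an exclusion for the older versions in your pom,
or shade/relocate the library if excluding is not possible.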

Thanks,
Ajay

On Thu, May 18, 2017 at 5:59 PM, userguy <ro...@gmail.com> wrote:

> Please find the hadoop classpath list
>
> classpath_hadoop.txt
> <http://apache-apex-users-list.78494.x6.nabble.com/file/
> n1637/classpath_hadoop.txt>
>
>
>
>
>
>
>

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Please find the Hadoop classpath list attached:

classpath_hadoop.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1637/classpath_hadoop.txt>  







Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Hello, please find the mvn dependency tree:
mvn_dependency_tree.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1636/mvn_dependency_tree.txt>

Also find the Application Master logs:

container_local.container_local
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1636/container_local.container_local>
-- this error occurs when we run with locality set to local
container_node_local.container_node_local
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1636/container_node_local.container_node_local>
-- this error occurs when we run with locality set to Node Local






Re: Hdfs + apex-core

Posted by AJAY GUPTA <aj...@gmail.com>.
Hi Rohit,

The conflict could also be with the version present in the Hadoop classpath.
If possible, send us the list of libraries in the Hadoop classpath and
the list of libraries used by the application at runtime ("hdfs dfs -ls
datatorrent/apps/<appid>").
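For example (the output file names are only suggestions, and <appid> is the
YARN application id):

  hadoop classpath | tr ':' '\n' | sort > classpath_hadoop.txt
  hdfs dfs -ls datatorrent/apps/<appid> > app_runtime_libs.txt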

Ajay

On Thu, May 18, 2017 at 2:38 PM, AJAY GUPTA <aj...@gmail.com> wrote:

> Hi Rohit,
>
> It seems there is a dependency conflict for one of the libraries, httpclient.
> Can you send us the output of mvn dependency:tree for your application?
>
> Also, it would be great if we could also receive the Application Master
> logs. You can find them at <hadoop-log-dir>/userlogs/<application-id>.
> You could zip and attach the logs.
>
>
> Ajay
>
> On Thu, May 18, 2017 at 11:33 AM, userguy <ro...@gmail.com> wrote:
>
>> How does locality affect the error while submitting the job in YARN?
>>
>> for cluster com.datatorrent.contrib.kafka.defaultcluster, topic
>> jiovodems,
>> kafka partition 28
>> 2017-05-18 11:25:46,680 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 29
>> 2017-05-18 11:25:46,682 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 30
>> 2017-05-18 11:25:46,684 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 31
>> 2017-05-18 11:25:46,686 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 32
>> 2017-05-18 11:25:46,688 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 33
>> 2017-05-18 11:25:46,690 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 34
>> 2017-05-18 11:25:46,691 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 35
>> 2017-05-18 11:25:46,693 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 36
>> 2017-05-18 11:25:46,695 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 37
>> 2017-05-18 11:25:46,696 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 38
>> 2017-05-18 11:25:46,698 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 39
>> 2017-05-18 11:25:46,700 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 40
>> 2017-05-18 11:25:46,702 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 41
>> 2017-05-18 11:25:46,704 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 42
>> 2017-05-18 11:25:46,706 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 43
>> 2017-05-18 11:25:46,713 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 44
>> 2017-05-18 11:25:46,715 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 45
>> 2017-05-18 11:25:46,716 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 46
>> 2017-05-18 11:25:46,718 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 47
>> 2017-05-18 11:25:46,720 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 48
>> 2017-05-18 11:25:46,722 INFO  [main] kafka.AbstractKafkaInputOperator
>> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
>> Create operator partition for cluster
>> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
>> partition 49
>> 2017-05-18 11:25:46,813 INFO  [main] util.AsyncFSStorageAgent
>> (AsyncFSStorageAgent.java:save(91)) - using
>> /var/data/yarn/local/usercache/apex/appcache/application_
>> 1495028624385_0006/container_1495028624385_0006_02_000001/
>> tmp/chkp8177464469451000346
>> as the basepath for checkpointing.
>> 2017-05-18 11:25:47,459 WARN  [DataStreamer for file
>> /user/apex/datatorrent/apps/application_1495028624385_0006/
>> checkpoints/8/_tmp]
>> hdfs.DFSClient (DFSOutputStream.java:closeResponder(953)) - Caught
>> exception
>> java.lang.InterruptedException
>>         at java.lang.Object.wait(Native Method)
>>         at java.lang.Thread.join(Thread.java:1245)
>>         at java.lang.Thread.join(Thread.java:1319)
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeRes
>> ponder(DFSOutputStream.java:951)
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock
>> (DFSOutputStream.java:689)
>>         at
>> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(
>> DFSOutputStream.java:878)
>> 2017-05-18 11:25:49,615 INFO  [main] stram.FSRecoveryHandler
>> (FSRecoveryHandler.java:rotateLog(103)) - Creating
>> hdfs://10.139.39.54:9000/user/apex/datatorrent/apps/applicat
>> ion_1495028624385_0006/recovery/log
>> 2017-05-18 11:25:49,669 INFO  [main] stram.StreamingAppMasterService
>> (StreamingAppMasterService.java:serviceInit(564)) - Starting application
>> with 55 operators in 54 containers
>> 2017-05-18 11:25:49,686 INFO  [main] impl.NMClientAsyncImpl
>> (NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread
>> pool
>> size is 500
>> 2017-05-18 11:25:49,722 INFO  [main] loaders.ChainedPluginLocator
>> (ChainedPluginLocator.java:discoverPlugins(54)) - Loader
>> org.apache.apex.engine.plugin.loaders.ServiceLoaderBasedPluginLocator
>> detected 0 plugins
>> 2017-05-18 11:25:49,758 WARN  [main] conf.Configuration
>> (Configuration.java:loadProperty(2681)) -
>> org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
>> org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override
>> final
>> parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> 2017-05-18 11:25:49,760 WARN  [main] conf.Configuration
>> (Configuration.java:loadProperty(2681)) -
>> org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
>> org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override
>> final
>> parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
>> 2017-05-18 11:25:49,761 INFO  [main] loaders.ChainedPluginLocator
>> (ChainedPluginLocator.java:discoverPlugins(54)) - Loader
>> org.apache.apex.engine.plugin.loaders.PropertyBasedPluginLocator
>> detected 0
>> plugins
>> 2017-05-18 11:25:49,769 INFO  [main] client.RMProxy
>> (RMProxy.java:createRMProxy(123)) - Connecting to ResourceManager at
>> /0.0.0.0:8030
>> 2017-05-18 11:25:49,814 INFO  [main] stram.StreamingContainerParent
>> (StreamingContainerParent.java:startRpcServer(95)) - Config:
>> Configuration:
>> core-default.xml, core-site.xml, yarn-default.xml, yarn-site.xml,
>> mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml
>> 2017-05-18 11:25:49,815 INFO  [main] stram.StreamingContainerParent
>> (StreamingContainerParent.java:startRpcServer(96)) - Listener thread
>> count
>> 30
>> 2017-05-18 11:25:49,819 INFO  [main] ipc.CallQueueManager
>> (CallQueueManager.java:<init>(57)) - Using callQueue: class
>> java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
>> 2017-05-18 11:25:49,824 INFO  [Socket Reader #1 for port 33999] ipc.Server
>> (Server.java:run(720)) - Starting Socket Reader #1 for port 33999
>> 2017-05-18 11:25:49,837 INFO  [IPC Server listener on 33999] ipc.Server
>> (Server.java:run(799)) - IPC Server listener on 33999: starting
>> 2017-05-18 11:25:49,837 INFO  [IPC Server Responder] ipc.Server
>> (Server.java:run(952)) - IPC Server Responder: starting
>> 2017-05-18 11:25:49,841 INFO  [main] stram.StreamingContainerParent
>> (StreamingContainerParent.java:startRpcServer(124)) - Container callback
>> server listening at ELKCDNHOST9/10.139.38.180:33999
>> 2017-05-18 11:25:49,880 INFO  [main] mortbay.log (Slf4jLog.java:info(67))
>> -
>> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2017-05-18 11:25:49,958 INFO  [main] server.AuthenticationFilter
>> (AuthenticationFilter.java:constructSecretProvider(294)) - Unable to
>> initialize FileSignerSecretProvider, falling back to use random secrets.
>> 2017-05-18 11:25:49,966 INFO  [main] http.HttpRequestLog
>> (HttpRequestLog.java:getRequestLog(80)) - Http request log for
>> http.requests.stram is not defined
>> 2017-05-18 11:25:49,979 INFO  [main] http.HttpServer2
>> (HttpServer2.java:addGlobalFilter(766)) - Added global filter 'safety'
>> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
>> (HttpServer2.java:addFilter(744)) - Added filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>> to
>> context stram
>> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
>> (HttpServer2.java:addFilter(751)) - Added filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>> to
>> context logs
>> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
>> (HttpServer2.java:addFilter(751)) - Added filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>> to
>> context static
>> 2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
>> (HttpServer2.java:initializeWebServer(443)) - adding path spec: /stram/*
>> 2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
>> (HttpServer2.java:initializeWebServer(443)) - adding path spec: /ws/*
>> 2017-05-18 11:25:49,997 INFO  [main] http.HttpServer2
>> (HttpServer2.java:openListeners(954)) - Jetty bound to port 54855
>> 2017-05-18 11:25:50,372 INFO  [main] webapp.WebApps
>> (WebApps.java:start(275)) - Web app /stram started at 54855
>> 2017-05-18 11:25:50,710 INFO  [main] webapp.WebApps
>> (WebApps.java:start(289)) - Registered webapp guice modules
>> 2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
>> (StreamingAppMasterService.java:serviceStart(635)) - Started web service
>> at
>> port: 54855
>> 2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
>> (StreamingAppMasterService.java:serviceStart(641)) - Setting tracking URL
>> to: ELKCDNHOST9:54855
>> 2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
>> (StreamingAppMasterService.java:execute(685)) - Starting
>> ApplicationMaster
>> 2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
>> (StreamingAppMasterService.java:execute(687)) - number of tokens: 1
>>
>>
>> 2017-05-18 11:28:32,814 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:33,815 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:34,817 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:35,818 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:36,819 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:37,821 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:38,822 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:39,823 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:28:40,825 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:11,829 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:12,830 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:13,832 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:14,833 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:15,834 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:16,835 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:17,837 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:18,839 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:19,840 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:20,841 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:51,845 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:52,847 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:53,848 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:54,849 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:55,851 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:56,852 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:57,853 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:58,855 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:29:59,856 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:00,858 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:31,861 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:32,863 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:33,864 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:34,866 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:35,867 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:36,869 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>> 2017-05-18 11:30:37,870 INFO  [main] ipc.Client
>> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
>> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
>> MILLISECONDS)
>>
>>
>>
>>
>>
>
>

Re: Hdfs + apex-core

Posted by AJAY GUPTA <aj...@gmail.com>.
Hi Rohit,

It seems there is a dependency conflict for one of the libraries, httpclient.
Can you send us the output of mvn dependency:tree for your application?

Also, it would be great if we could also receive the Application Master
logs. You can find them at <hadoop-log-dir>/userlogs/<application-id>. You
could zip and attach the logs.
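For example (file names below are only placeholders, and "yarn logs" works
only if log aggregation is enabled on the cluster):

  mvn dependency:tree > mvn_dependency_tree.txt
  cd <hadoop-log-dir>/userlogs && zip -r am_logs.zip <application-id>
  yarn logs -applicationId <application-id> > am_logs.txt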


Ajay

On Thu, May 18, 2017 at 11:33 AM, userguy <ro...@gmail.com> wrote:

> How does locality affect the error while submitting the job in YARN?
>
> for cluster com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems,
> kafka partition 28
> 2017-05-18 11:25:46,680 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 29
> 2017-05-18 11:25:46,682 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 30
> 2017-05-18 11:25:46,684 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 31
> 2017-05-18 11:25:46,686 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 32
> 2017-05-18 11:25:46,688 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 33
> 2017-05-18 11:25:46,690 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 34
> 2017-05-18 11:25:46,691 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 35
> 2017-05-18 11:25:46,693 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 36
> 2017-05-18 11:25:46,695 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 37
> 2017-05-18 11:25:46,696 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 38
> 2017-05-18 11:25:46,698 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 39
> 2017-05-18 11:25:46,700 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 40
> 2017-05-18 11:25:46,702 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 41
> 2017-05-18 11:25:46,704 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 42
> 2017-05-18 11:25:46,706 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 43
> 2017-05-18 11:25:46,713 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 44
> 2017-05-18 11:25:46,715 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 45
> 2017-05-18 11:25:46,716 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 46
> 2017-05-18 11:25:46,718 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 47
> 2017-05-18 11:25:46,720 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 48
> 2017-05-18 11:25:46,722 INFO  [main] kafka.AbstractKafkaInputOperator
> (AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
> Create operator partition for cluster
> com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
> partition 49
> 2017-05-18 11:25:46,813 INFO  [main] util.AsyncFSStorageAgent
> (AsyncFSStorageAgent.java:save(91)) - using
> /var/data/yarn/local/usercache/apex/appcache/application_1495028624385_
> 0006/container_1495028624385_0006_02_000001/tmp/chkp8177464469451000346
> as the basepath for checkpointing.
> 2017-05-18 11:25:47,459 WARN  [DataStreamer for file
> /user/apex/datatorrent/apps/application_1495028624385_
> 0006/checkpoints/8/_tmp]
> hdfs.DFSClient (DFSOutputStream.java:closeResponder(953)) - Caught
> exception
> java.lang.InterruptedException
>         at java.lang.Object.wait(Native Method)
>         at java.lang.Thread.join(Thread.java:1245)
>         at java.lang.Thread.join(Thread.java:1319)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(
> DFSOutputStream.java:951)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> endBlock(DFSOutputStream.java:689)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:878)
> 2017-05-18 11:25:49,615 INFO  [main] stram.FSRecoveryHandler
> (FSRecoveryHandler.java:rotateLog(103)) - Creating
> hdfs://10.139.39.54:9000/user/apex/datatorrent/apps/
> application_1495028624385_0006/recovery/log
> 2017-05-18 11:25:49,669 INFO  [main] stram.StreamingAppMasterService
> (StreamingAppMasterService.java:serviceInit(564)) - Starting application
> with 55 operators in 54 containers
> 2017-05-18 11:25:49,686 INFO  [main] impl.NMClientAsyncImpl
> (NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread pool
> size is 500
> 2017-05-18 11:25:49,722 INFO  [main] loaders.ChainedPluginLocator
> (ChainedPluginLocator.java:discoverPlugins(54)) - Loader
> org.apache.apex.engine.plugin.loaders.ServiceLoaderBasedPluginLocator
> detected 0 plugins
> 2017-05-18 11:25:49,758 WARN  [main] conf.Configuration
> (Configuration.java:loadProperty(2681)) -
> org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
> org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override
> final
> parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2017-05-18 11:25:49,760 WARN  [main] conf.Configuration
> (Configuration.java:loadProperty(2681)) -
> org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
> org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override
> final
> parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2017-05-18 11:25:49,761 INFO  [main] loaders.ChainedPluginLocator
> (ChainedPluginLocator.java:discoverPlugins(54)) - Loader
> org.apache.apex.engine.plugin.loaders.PropertyBasedPluginLocator detected
> 0
> plugins
> 2017-05-18 11:25:49,769 INFO  [main] client.RMProxy
> (RMProxy.java:createRMProxy(123)) - Connecting to ResourceManager at
> /0.0.0.0:8030
> 2017-05-18 11:25:49,814 INFO  [main] stram.StreamingContainerParent
> (StreamingContainerParent.java:startRpcServer(95)) - Config:
> Configuration:
> core-default.xml, core-site.xml, yarn-default.xml, yarn-site.xml,
> mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml
> 2017-05-18 11:25:49,815 INFO  [main] stram.StreamingContainerParent
> (StreamingContainerParent.java:startRpcServer(96)) - Listener thread count
> 30
> 2017-05-18 11:25:49,819 INFO  [main] ipc.CallQueueManager
> (CallQueueManager.java:<init>(57)) - Using callQueue: class
> java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
> 2017-05-18 11:25:49,824 INFO  [Socket Reader #1 for port 33999] ipc.Server
> (Server.java:run(720)) - Starting Socket Reader #1 for port 33999
> 2017-05-18 11:25:49,837 INFO  [IPC Server listener on 33999] ipc.Server
> (Server.java:run(799)) - IPC Server listener on 33999: starting
> 2017-05-18 11:25:49,837 INFO  [IPC Server Responder] ipc.Server
> (Server.java:run(952)) - IPC Server Responder: starting
> 2017-05-18 11:25:49,841 INFO  [main] stram.StreamingContainerParent
> (StreamingContainerParent.java:startRpcServer(124)) - Container callback
> server listening at ELKCDNHOST9/10.139.38.180:33999
> 2017-05-18 11:25:49,880 INFO  [main] mortbay.log (Slf4jLog.java:info(67)) -
> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2017-05-18 11:25:49,958 INFO  [main] server.AuthenticationFilter
> (AuthenticationFilter.java:constructSecretProvider(294)) - Unable to
> initialize FileSignerSecretProvider, falling back to use random secrets.
> 2017-05-18 11:25:49,966 INFO  [main] http.HttpRequestLog
> (HttpRequestLog.java:getRequestLog(80)) - Http request log for
> http.requests.stram is not defined
> 2017-05-18 11:25:49,979 INFO  [main] http.HttpServer2
> (HttpServer2.java:addGlobalFilter(766)) - Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
> (HttpServer2.java:addFilter(744)) - Added filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context stram
> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
> (HttpServer2.java:addFilter(751)) - Added filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
> 2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
> (HttpServer2.java:addFilter(751)) - Added filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
> 2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
> (HttpServer2.java:initializeWebServer(443)) - adding path spec: /stram/*
> 2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
> (HttpServer2.java:initializeWebServer(443)) - adding path spec: /ws/*
> 2017-05-18 11:25:49,997 INFO  [main] http.HttpServer2
> (HttpServer2.java:openListeners(954)) - Jetty bound to port 54855
> 2017-05-18 11:25:50,372 INFO  [main] webapp.WebApps
> (WebApps.java:start(275)) - Web app /stram started at 54855
> 2017-05-18 11:25:50,710 INFO  [main] webapp.WebApps
> (WebApps.java:start(289)) - Registered webapp guice modules
> 2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
> (StreamingAppMasterService.java:serviceStart(635)) - Started web service
> at
> port: 54855
> 2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
> (StreamingAppMasterService.java:serviceStart(641)) - Setting tracking URL
> to: ELKCDNHOST9:54855
> 2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
> (StreamingAppMasterService.java:execute(685)) - Starting ApplicationMaster
> 2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
> (StreamingAppMasterService.java:execute(687)) - number of tokens: 1
>
>
> 2017-05-18 11:28:32,814 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:33,815 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:34,817 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:35,818 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:36,819 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:37,821 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:38,822 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:39,823 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:28:40,825 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:11,829 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:12,830 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:13,832 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:14,833 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:15,834 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:16,835 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:17,837 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:18,839 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:19,840 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:20,841 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:51,845 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:52,847 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:53,848 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:54,849 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:55,851 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:56,852 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:57,853 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:58,855 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:29:59,856 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:00,858 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:31,861 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:32,863 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:33,864 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:34,866 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:35,867 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:36,869 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 2017-05-18 11:30:37,870 INFO  [main] ipc.Client
> (Client.java:handleConnectionFailure(868)) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
>
>
>
>
> --
> View this message in context: http://apache-apex-users-list.
> 78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1632.html
> Sent from the Apache Apex Users list mailing list archive at Nabble.com.
>

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
How does locality affect the error while submitting the job in YARN?

for cluster com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems,
kafka partition 28
2017-05-18 11:25:46,680 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 29
2017-05-18 11:25:46,682 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 30
2017-05-18 11:25:46,684 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 31
2017-05-18 11:25:46,686 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 32
2017-05-18 11:25:46,688 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 33
2017-05-18 11:25:46,690 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 34
2017-05-18 11:25:46,691 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 35
2017-05-18 11:25:46,693 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 36
2017-05-18 11:25:46,695 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 37
2017-05-18 11:25:46,696 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 38
2017-05-18 11:25:46,698 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 39
2017-05-18 11:25:46,700 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 40
2017-05-18 11:25:46,702 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 41
2017-05-18 11:25:46,704 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 42
2017-05-18 11:25:46,706 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 43
2017-05-18 11:25:46,713 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 44
2017-05-18 11:25:46,715 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 45
2017-05-18 11:25:46,716 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 46
2017-05-18 11:25:46,718 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 47
2017-05-18 11:25:46,720 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 48
2017-05-18 11:25:46,722 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 49
2017-05-18 11:25:46,813 INFO  [main] util.AsyncFSStorageAgent
(AsyncFSStorageAgent.java:save(91)) - using
/var/data/yarn/local/usercache/apex/appcache/application_1495028624385_0006/container_1495028624385_0006_02_000001/tmp/chkp8177464469451000346
as the basepath for checkpointing.
2017-05-18 11:25:47,459 WARN  [DataStreamer for file
/user/apex/datatorrent/apps/application_1495028624385_0006/checkpoints/8/_tmp]
hdfs.DFSClient (DFSOutputStream.java:closeResponder(953)) - Caught exception
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1245)
        at java.lang.Thread.join(Thread.java:1319)
        at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:951)
        at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:689)
        at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:878)
2017-05-18 11:25:49,615 INFO  [main] stram.FSRecoveryHandler
(FSRecoveryHandler.java:rotateLog(103)) - Creating
hdfs://10.139.39.54:9000/user/apex/datatorrent/apps/application_1495028624385_0006/recovery/log
2017-05-18 11:25:49,669 INFO  [main] stram.StreamingAppMasterService
(StreamingAppMasterService.java:serviceInit(564)) - Starting application
with 55 operators in 54 containers
2017-05-18 11:25:49,686 INFO  [main] impl.NMClientAsyncImpl
(NMClientAsyncImpl.java:serviceInit(107)) - Upper bound of the thread pool
size is 500
2017-05-18 11:25:49,722 INFO  [main] loaders.ChainedPluginLocator
(ChainedPluginLocator.java:discoverPlugins(54)) - Loader
org.apache.apex.engine.plugin.loaders.ServiceLoaderBasedPluginLocator
detected 0 plugins
2017-05-18 11:25:49,758 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2681)) -
org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override final
parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2017-05-18 11:25:49,760 WARN  [main] conf.Configuration
(Configuration.java:loadProperty(2681)) -
org.apache.hadoop.hdfs.client.HdfsDataInputStream@3d798e76:
org.apache.hadoop.hdfs.DFSInputStream@763b0996:an attempt to override final
parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2017-05-18 11:25:49,761 INFO  [main] loaders.ChainedPluginLocator
(ChainedPluginLocator.java:discoverPlugins(54)) - Loader
org.apache.apex.engine.plugin.loaders.PropertyBasedPluginLocator detected 0
plugins
2017-05-18 11:25:49,769 INFO  [main] client.RMProxy
(RMProxy.java:createRMProxy(123)) - Connecting to ResourceManager at
/0.0.0.0:8030
2017-05-18 11:25:49,814 INFO  [main] stram.StreamingContainerParent
(StreamingContainerParent.java:startRpcServer(95)) - Config: Configuration:
core-default.xml, core-site.xml, yarn-default.xml, yarn-site.xml,
mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml
2017-05-18 11:25:49,815 INFO  [main] stram.StreamingContainerParent
(StreamingContainerParent.java:startRpcServer(96)) - Listener thread count
30
2017-05-18 11:25:49,819 INFO  [main] ipc.CallQueueManager
(CallQueueManager.java:<init>(57)) - Using callQueue: class
java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2017-05-18 11:25:49,824 INFO  [Socket Reader #1 for port 33999] ipc.Server
(Server.java:run(720)) - Starting Socket Reader #1 for port 33999
2017-05-18 11:25:49,837 INFO  [IPC Server listener on 33999] ipc.Server
(Server.java:run(799)) - IPC Server listener on 33999: starting
2017-05-18 11:25:49,837 INFO  [IPC Server Responder] ipc.Server
(Server.java:run(952)) - IPC Server Responder: starting
2017-05-18 11:25:49,841 INFO  [main] stram.StreamingContainerParent
(StreamingContainerParent.java:startRpcServer(124)) - Container callback
server listening at ELKCDNHOST9/10.139.38.180:33999
2017-05-18 11:25:49,880 INFO  [main] mortbay.log (Slf4jLog.java:info(67)) -
Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2017-05-18 11:25:49,958 INFO  [main] server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(294)) - Unable to
initialize FileSignerSecretProvider, falling back to use random secrets.
2017-05-18 11:25:49,966 INFO  [main] http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.stram is not defined
2017-05-18 11:25:49,979 INFO  [main] http.HttpServer2
(HttpServer2.java:addGlobalFilter(766)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context stram
2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
(HttpServer2.java:addFilter(751)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2017-05-18 11:25:49,981 INFO  [main] http.HttpServer2
(HttpServer2.java:addFilter(751)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
(HttpServer2.java:initializeWebServer(443)) - adding path spec: /stram/*
2017-05-18 11:25:49,989 INFO  [main] http.HttpServer2
(HttpServer2.java:initializeWebServer(443)) - adding path spec: /ws/*
2017-05-18 11:25:49,997 INFO  [main] http.HttpServer2
(HttpServer2.java:openListeners(954)) - Jetty bound to port 54855
2017-05-18 11:25:50,372 INFO  [main] webapp.WebApps
(WebApps.java:start(275)) - Web app /stram started at 54855
2017-05-18 11:25:50,710 INFO  [main] webapp.WebApps
(WebApps.java:start(289)) - Registered webapp guice modules
2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
(StreamingAppMasterService.java:serviceStart(635)) - Started web service at
port: 54855
2017-05-18 11:25:50,711 INFO  [main] stram.StreamingAppMasterService
(StreamingAppMasterService.java:serviceStart(641)) - Setting tracking URL
to: ELKCDNHOST9:54855
2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
(StreamingAppMasterService.java:execute(685)) - Starting ApplicationMaster
2017-05-18 11:25:50,721 INFO  [main] stram.StreamingAppMasterService
(StreamingAppMasterService.java:execute(687)) - number of tokens: 1


2017-05-18 11:28:32,814 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:33,815 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:34,817 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:35,818 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:36,819 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:37,821 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:38,822 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:39,823 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:28:40,825 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:11,829 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:12,830 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:13,832 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:14,833 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:15,834 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:16,835 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:17,837 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:18,839 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:19,840 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:20,841 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:51,845 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:52,847 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:53,848 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:54,849 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:55,851 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:56,852 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:57,853 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:58,855 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:29:59,856 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:00,858 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 9 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:31,861 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:32,863 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:33,864 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:34,866 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:35,867 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:36,869 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
2017-05-18 11:30:37,870 INFO  [main] ipc.Client
(Client.java:handleConnectionFailure(868)) - Retrying connect to server:
0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)




--
View this message in context: http://apache-apex-users-list.78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1632.html
Sent from the Apache Apex Users list mailing list archive at Nabble.com.

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
This error is in the Hadoop logs:

for cluster com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems,
kafka partition 48
2017-05-18 10:43:49,670 INFO  [main] kafka.AbstractKafkaInputOperator
(AbstractKafkaInputOperator.java:definePartitions(548)) - [ONE_TO_ONE]:
Create operator partition for cluster
com.datatorrent.contrib.kafka.defaultcluster, topic jiovodems, kafka
partition 49
2017-05-18 10:43:49,711 INFO  [main] util.AsyncFSStorageAgent
(AsyncFSStorageAgent.java:save(91)) - using
/var/data/yarn/local/usercache/apex/appcache/application_1495028624385_0004/container_1495028624385_0004_01_000001/tmp/chkp7782336962410843784
as the basepath for checkpointing.
2017-05-18 10:43:51,290 ERROR [main] stram.StreamingAppMaster
(StreamingAppMaster.java:main(106)) - Exiting Application Master
java.lang.NoSuchFieldError: INSTANCE
        at
org.apache.http.impl.conn.HttpClientConnectionOperator.<init>(HttpClientConnectionOperator.java:74)
        at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:151)
        at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:138)
        at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:114)
        at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:105)
        at
com.datatorrent.stram.util.WebServicesClient.<clinit>(WebServicesClient.java:95)
        at
com.datatorrent.stram.StreamingContainerManager.getAppMasterContainerInfo(StreamingContainerManager.java:488)
        at
com.datatorrent.stram.StreamingContainerManager.init(StreamingContainerManager.java:455)
        at
com.datatorrent.stram.StreamingContainerManager.<init>(StreamingContainerManager.java:427)
        at
com.datatorrent.stram.StreamingContainerManager.getInstance(StreamingContainerManager.java:3135)
        at
com.datatorrent.stram.StreamingAppMasterService.serviceInit(StreamingAppMasterService.java:557)
        at
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at
com.datatorrent.stram.StreamingAppMaster.main(StreamingAppMaster.java:102)
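
A NoSuchFieldError: INSTANCE coming out of HttpClientConnectionOperator usually
means two different versions of Apache HttpClient are visible to the
Application Master (for example, an older httpclient on the Hadoop/YARN
classpath shadowing the one bundled with the .apa package). Here is a minimal
sketch to check which jar the suspect classes are actually loaded from; the
class name is hypothetical, and it assumes it is run with the same classpath
the AM container gets:

import java.net.URL;

public class HttpClientJarCheck {
    public static void main(String[] args) throws Exception {
        // Print the jar each suspect class is loaded from; two different
        // locations would explain the NoSuchFieldError above.
        String[] names = {
            "org.apache.http.conn.HttpClientConnectionManager",
            "org.apache.http.impl.conn.PoolingHttpClientConnectionManager"
        };
        for (String name : names) {
            Class<?> c = Class.forName(name);
            URL location = c.getProtectionDomain().getCodeSource() != null
                ? c.getProtectionDomain().getCodeSource().getLocation()
                : null;
            System.out.println(name + " -> " + location);
        }
    }
}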





--
View this message in context: http://apache-apex-users-list.78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1631.html
Sent from the Apache Apex Users list mailing list archive at Nabble.com.

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
application_with_vvvv.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1630/application_with_vvvv.txt>      
apex_with_vvvv.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1630/apex_with_vvvv.txt>  


Please find the attached output.




--
View this message in context: http://apache-apex-users-list.78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1630.html
Sent from the Apache Apex Users list mailing list archive at Nabble.com.

Re: Hdfs + apex-core

Posted by AJAY GUPTA <aj...@gmail.com>.
Hi Rohit,

Seems you forgot to attach the job output.


Ajay

On Tue, May 16, 2017 at 2:18 PM, userguy <ro...@gmail.com> wrote:

> Please find the pom.xml and the apex -vvvv output; also find the job output,
> which we ran from the server.
>
> pom.xml
> <http://apache-apex-users-list.78494.x6.nabble.com/file/n1618/pom.xml>
> apex-vvvv.txt
> <http://apache-apex-users-list.78494.x6.nabble.com/file/
> n1618/apex-vvvv.txt>
>
>
>
>
>
>
> --
> View this message in context: http://apache-apex-users-list.
> 78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1618.html
> Sent from the Apache Apex Users list mailing list archive at Nabble.com.
>

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Please find the pom.xml and the apex -vvvv output; also find the job output,
which we ran from the server.

pom.xml
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1618/pom.xml>  
apex-vvvv.txt
<http://apache-apex-users-list.78494.x6.nabble.com/file/n1618/apex-vvvv.txt>  






--
View this message in context: http://apache-apex-users-list.78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1618.html
Sent from the Apache Apex Users list mailing list archive at Nabble.com.

Re: Hdfs + apex-core

Posted by AJAY GUPTA <aj...@gmail.com>.
Hi Rohit,

Can you send us the pom.xml of your application as well?


Ajay


On Mon, May 15, 2017 at 6:21 PM, Vikram Patil <vi...@datatorrent.com>
wrote:

> Hi Rohit,
>
> Is 10.0.0.0 <http://10.0.0.0:8030/> the correct IP address? Normally, IP
> addresses ending in 0 are network addresses.
>
> Thanks & Regards,
> Vikram
>
>
> On Mon, May 15, 2017 at 4:20 PM, userguy <ro...@gmail.com> wrote:
>
>> Hello ,
>>
>> Are you launching an application using apex-cli?
>> Ans - Yes
>>
>> 2) If you are, is it the same machine as before, which you used to launch
>> the WordCount example from? Is it an example from Apache Malhar or a Hadoop
>> example?
>>
>> Ans - The WordCount example is from the Hadoop test jar (I mean the Hadoop
>> MapReduce job for WordCount works properly on the same cluster).
>>
>> 3) Is your yarn-site.xml the same on all the nodes?
>> yes
>>
>> 4) If possible can you provide yarn-site.xml from all nodes in the email
>> thread?
>> Sharing
>>
>> -----------YARN SITE----------
>>
>> -->
>> <configuration>
>>     <property>
>>         <name>yarn.nodemanager.aux-services</name>
>>         <value>mapreduce_shuffle</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class<
>> /name>
>>         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>     </property>
>>     <property>
>>        <name>yarn.resourcemanager.address</name>
>>           <value>10.0.0.0:8032</value>
>>      </property>
>>       <property>
>>      <name>yarn.resourcemanager.resource-tracker.address</name>
>>     <value>10.0.0.0:8031</value>
>>     </property>
>>       <property>
>>       <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>10.0.0.0:8030</value>
>>       </property>
>>     <property>
>>         <name>yarn.nodemanager.local-dirs</name>
>>         <value>file:///var/data/yarn/local</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.nodemanager.log-dirs</name>
>>         <value>file:///var/data/yarn/logs</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.log.aggregation-enable</name>
>>         <value>true</value>
>>     </property>
>>
>>     <property>
>>         <description>Where to aggregate logs</description>
>>         <name>yarn.nodemanager.remote-app-log-dir</name>
>>         <value>hdfs://10.0.0.0:9000/var/log/hadoop_yarn/apps</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.nodemanager.resource.memory-mb</name>
>>         <value>40960</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.scheduler.minimum-allocation-mb</name>
>>         <value>1024</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.scheduler.maximum-allocation-mb</name>
>>         <value>40960</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.log-aggregation-enable</name>
>>         <value>True</value>
>>     </property>
>>
>>     <property>
>>         <name>yarn.log-aggregation.retain-seconds</name>
>>         <value>604800</value>
>>     </property>
>>
>>   <property>
>>         <name>yarn.application.classpath</name>
>>         <value>
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/etc/hadoop/*,
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/
>> *,
>>
>> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/lib/*,
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/*,
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/li
>> b/*,
>>
>> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/*,
>>
>> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/lib/*,
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/*,
>>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/li
>> b/*
>>         </value>
>>     </property>
>>
>> </configuration>
>>
>> It is exactly the same for all the servers.
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-apex-users-list.
>> 78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1614.html
>> Sent from the Apache Apex Users list mailing list archive at Nabble.com.
>>
>
>

Re: Hdfs + apex-core

Posted by Vikram Patil <vi...@datatorrent.com>.
Hi Rohit,

Is 10.0.0.0 <http://10.0.0.0:8030/> the correct IP address? Normally, IP
addresses ending in 0 are network addresses.

Thanks & Regards,
Vikram


On Mon, May 15, 2017 at 4:20 PM, userguy <ro...@gmail.com> wrote:

> Hello ,
>
> Are you launching an application using apex-cli?
> Ans - Yes
>
> 2) If you are, is it the same machine as before, which you used to launch
> the WordCount example from? Is it an example from Apache Malhar or a Hadoop
> example?
>
> Ans - The WordCount example is from the Hadoop test jar (I mean the Hadoop
> MapReduce job for WordCount works properly on the same cluster).
>
> 3) Is your yarn-site.xml the same on all the nodes?
> yes
>
> 4) If possible can you provide yarn-site.xml from all nodes in the email
> thread?
> Sharing
>
> -----------YARN SITE----------
>
> -->
> <configuration>
>     <property>
>         <name>yarn.nodemanager.aux-services</name>
>         <value>mapreduce_shuffle</value>
>     </property>
>
>     <property>
>         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>     </property>
>     <property>
>        <name>yarn.resourcemanager.address</name>
>           <value>10.0.0.0:8032</value>
>      </property>
>       <property>
>      <name>yarn.resourcemanager.resource-tracker.address</name>
>     <value>10.0.0.0:8031</value>
>     </property>
>       <property>
>       <name>yarn.resourcemanager.scheduler.address</name>
>        <value>10.0.0.0:8030</value>
>       </property>
>     <property>
>         <name>yarn.nodemanager.local-dirs</name>
>         <value>file:///var/data/yarn/local</value>
>     </property>
>
>     <property>
>         <name>yarn.nodemanager.log-dirs</name>
>         <value>file:///var/data/yarn/logs</value>
>     </property>
>
>     <property>
>         <name>yarn.log.aggregation-enable</name>
>         <value>true</value>
>     </property>
>
>     <property>
>         <description>Where to aggregate logs</description>
>         <name>yarn.nodemanager.remote-app-log-dir</name>
>         <value>hdfs://10.0.0.0:9000/var/log/hadoop_yarn/apps</value>
>     </property>
>
>     <property>
>         <name>yarn.nodemanager.resource.memory-mb</name>
>         <value>40960</value>
>     </property>
>
>     <property>
>         <name>yarn.scheduler.minimum-allocation-mb</name>
>         <value>1024</value>
>     </property>
>
>     <property>
>         <name>yarn.scheduler.maximum-allocation-mb</name>
>         <value>40960</value>
>     </property>
>
>     <property>
>         <name>yarn.log-aggregation-enable</name>
>         <value>True</value>
>     </property>
>
>     <property>
>         <name>yarn.log-aggregation.retain-seconds</name>
>         <value>604800</value>
>     </property>
>
>   <property>
>         <name>yarn.application.classpath</name>
>         <value>
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/etc/hadoop/*,
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/*,
>
> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/lib/*,
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/*,
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/
> lib/*,
>
> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/*,
>
> /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/lib/*,
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/*,
>             /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/
> lib/*
>         </value>
>     </property>
>
> </configuration>
>
> It is exactly the same for all the servers.
>
>
>
>
> --
> View this message in context: http://apache-apex-users-list.
> 78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1614.html
> Sent from the Apache Apex Users list mailing list archive at Nabble.com.
>

Re: Hdfs + apex-core

Posted by userguy <ro...@gmail.com>.
Hello , 

Are you launching an application using apex-cli?
Ans - Yes 

2) If you are, is it the same machine as before, which you used to launch
the WordCount example from? Is it an example from Apache Malhar or a Hadoop
example?

Ans - The WordCount example is from the Hadoop test jar (I mean the Hadoop
MapReduce job for WordCount works properly on the same cluster).

3) Is your yarn-site.xml the same on all the nodes?
yes 

4) If possible can you provide yarn-site.xml from all nodes in the email
thread?
Sharing

-----------YARN SITE----------

-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
       <name>yarn.resourcemanager.address</name>
          <value>10.0.0.0:8032</value>
     </property>
      <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>10.0.0.0:8031</value>
    </property>
      <property>
      <name>yarn.resourcemanager.scheduler.address</name>
       <value>10.0.0.0:8030</value>
      </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///var/data/yarn/local</value>
    </property>

    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>file:///var/data/yarn/logs</value>
    </property>

    <property>
        <name>yarn.log.aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <description>Where to aggregate logs</description>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>hdfs://10.0.0.0:9000/var/log/hadoop_yarn/apps</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>40960</value>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>40960</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>True</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>

  <property>
        <name>yarn.application.classpath</name>
        <value>
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/etc/hadoop/*,
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/*,
           
/var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/common/lib/*,
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/*,
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/hdfs/lib/*,
           
/var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/*,
           
/var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/mapreduce/lib/*,
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/*,
            /var/data/hadoop/hadoop-2.6.0-cdh5.11.0/share/hadoop/yarn/lib/*
        </value>
    </property>

</configuration>

It is exactly the same for all the servers.
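
Since the AM log shows it falling back to 0.0.0.0:8030, it may be worth
checking which yarn-site.xml each node's JVM actually picks up and what
scheduler address it resolves. Below is a minimal sketch; the class name is
hypothetical, and it assumes the Hadoop jars and the config directory are on
the classpath (for example via `hadoop classpath`):

import java.net.URL;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnConfCheck {
    public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();
        // Which yarn-site.xml is actually on this JVM's classpath?
        URL site = conf.getResource("yarn-site.xml");
        System.out.println("yarn-site.xml loaded from: " + site);
        // The scheduler address this JVM would use; 0.0.0.0:8030 means the
        // property was not found and YARN fell back to its built-in default.
        System.out.println("scheduler address: " + conf.get(
            YarnConfiguration.RM_SCHEDULER_ADDRESS,
            YarnConfiguration.DEFAULT_RM_SCHEDULER_ADDRESS));
    }
}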




--
View this message in context: http://apache-apex-users-list.78494.x6.nabble.com/Fwd-Hdfs-apex-core-tp1608p1614.html
Sent from the Apache Apex Users list mailing list archive at Nabble.com.

Re: Hdfs + apex-core

Posted by Vikram Patil <vi...@datatorrent.com>.
Hi Rohit,

1) Are you launching an application using apex-cli?
2) If you are, is it the same machine as before, which you used to launch the
WordCount example from? Is it an example from Apache Malhar or a Hadoop
example?
3) Is your yarn-site.xml the same on all the nodes?
4) If possible can you provide yarn-site.xml from all nodes in the email
thread?

Thanks & Regards,
Vikram

On Sat, May 13, 2017 at 11:05 AM, rohit garg <ro...@gmail.com> wrote:

> Hello. Yes, the Hadoop components are running fine; I tested this by running
> a WordCount jar on the Hadoop cluster and it ran perfectly fine.
>
> These are the errors:
>
> server1
> 6722 ResourceManager
> 6438 SecondaryNameNode
> 45846 Jps
> 6159 NameNode
> -------------
> server 2
> 22356 Jps
> 8533 DataNode
> 8685 NodeManager
> --------------------
> server 3
> 42116 NodeManager
> 41964 DataNode
> 19343 Jps
> server4
> -------------------
> server 4
> 42116 NodeManager
> 41964 DataNode
> 19343 Jps
> ------------------
> server 5
> 28181 NodeManager
> 28028 DataNode
> 5453 Jps
> --------------------
> server 6
> 46722 DataNode
> 12196 Jps
> 46875 NodeManager
>
> we are using apex-core 3.5.0
> cloudera hadoop
> Hadoop 2.6.0-cdh5.11.0
> Subversion http://github.com/cloudera/hadoop -r
> 91a488f2c5abb3de0e6ee74080dbc439c7576fb4
> Compiled by jenkins on 2017-04-06T03:07Z
> Compiled with protoc 2.5.0
> From source with checksum 1d879599e1ae47be77ed9f8b55ce9dbc
> This command was run using /var/data/hadoop/hadoop-2.6.0-
> cdh5.11.0/share/hadoop/common/hadoop-common-2.6.0-cdh5.11.0.jar
>
> java version "1.8.0_92"
> Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
>
> ----------------------yarn-site.xml--------------------
>
> <configuration>
>     <property>
>         <name>yarn.nodemanager.aux-services</name>
>         <value>mapreduce_shuffle</value>
>     </property>
>     <property>
>         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>     </property>
>     <property>
>         <name>yarn.resourcemanager.hostname</name>
>         <value>ipv4</value>
>     </property>
>     <property>
>        <name>yarn.resourcemanager.address</name>
>           <value>ipv4:8032</value>
>      </property>
>       <property>
>       <name>yarn.resourcemanager.scheduler.address</name>
>        <value>ipv4:8030</value>
>       </property>
>     <property>
> ......
> </configuration>
>
>
> /var/data/yarn/local/usercache/apex/appcache/application_
> 1494584453454_0006/container_1494584453454_0006_02_000001/tmp/chkp3971982956962675237
> as the basepath for checkpointing.
> 2017-05-12 17:48:58,725 ERROR [main] stram.StreamingAppMaster
> (StreamingAppMaster.java:main(106)) - Exiting Application Master
> java.lang.NoClassDefFoundError: org/apache/http/conn/HttpClien
> tConnectionManager
>         at com.datatorrent.stram.StreamingContainerManager.getAppMaster
> ContainerInfo(StreamingContainerManager.java:481)
>         at com.datatorrent.stram.StreamingContainerManager.init(Streami
> ngContainerManager.java:448)
>         at com.datatorrent.stram.StreamingContainerManager.<init>(Strea
> mingContainerManager.java:420)
>         at com.datatorrent.stram.StreamingContainerManager.getInstance(
> StreamingContainerManager.java:3065)
>         at com.datatorrent.stram.StreamingAppMasterService.serviceInit(
> StreamingAppMasterService.java:552)
>         at org.apache.hadoop.service.AbstractService.init(AbstractServi
> ce.java:163)
>         at com.datatorrent.stram.StreamingAppMaster.main(StreamingAppMa
> ster.java:102)
> Caused by: java.lang.ClassNotFoundException:
> org.apache.http.conn.HttpClientConnectionManager
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         ... 7 more
>
> Also, sometimes the error is that it tries to connect to the ResourceManager
> at 0.0.0.0:8030 instead of ip.address:8030, so it never reaches the
> ResourceManager.
>
>
> On Fri, May 12, 2017 at 5:37 PM, Mohit Jotwani <mo...@datatorrent.com>
> wrote:
>
>> Dear Rohit,
>>
>> Can you check whether you have correctly started Yarn on your machine? It
>> looks like it is not able to connect to the Resource Manager.
>>
>> Regards,
>> Mohit
>>
>> On Fri, May 12, 2017 at 5:30 PM, rohit garg <ro...@gmail.com>
>> wrote:
>>
>>>
>>> ---------- Forwarded message ----------
>>> From: "rohit garg" <ro...@gmail.com>
>>> Date: 12 May 2017 14:02
>>> Subject: Hdfs + apex-core
>>> To: <Us...@apex.apache.org>
>>> Cc:
>>>
>>> I have installed apache apex core but when I submit a app to run on yarn
>>> it tries to connect to 0.0.0.0:8032
>>>
>>>
>>>
>>
>>
>> --
>>
>> Regards,
>>
>> ___________________________________________________
>>
>> *Mohit Jotwani*
>>
>> Product Manager
>>
>> E: mohit@datatorrent.com | M: +91 97699 62740
>>
>> www.datatorrent.com  |  apex.apache.org
>>
>>
>>
>
>
> --
>
>
>    ---------------RohitGarg.
>
>
>
>
>

Re: Hdfs + apex-core

Posted by rohit garg <ro...@gmail.com>.
Hello. Yes, the Hadoop components are running fine; I tested this by running a
WordCount jar on the Hadoop cluster and it ran perfectly fine.

These are the errors:

server1
6722 ResourceManager
6438 SecondaryNameNode
45846 Jps
6159 NameNode
-------------
server 2
22356 Jps
8533 DataNode
8685 NodeManager
--------------------
server 3
42116 NodeManager
41964 DataNode
19343 Jps
server4
-------------------
server 4
42116 NodeManager
41964 DataNode
19343 Jps
------------------
server 5
28181 NodeManager
28028 DataNode
5453 Jps
--------------------
server 6
46722 DataNode
12196 Jps
46875 NodeManager

we are using apex-core 3.5.0
cloudera hadoop
Hadoop 2.6.0-cdh5.11.0
Subversion http://github.com/cloudera/hadoop -r
91a488f2c5abb3de0e6ee74080dbc439c7576fb4
Compiled by jenkins on 2017-04-06T03:07Z
Compiled with protoc 2.5.0
From source with checksum 1d879599e1ae47be77ed9f8b55ce9dbc
This command was run using /var/data/hadoop/hadoop-2.6.0-
cdh5.11.0/share/hadoop/common/hadoop-common-2.6.0-cdh5.11.0.jar

java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)

----------------------yarn-site.xml--------------------

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>ipv4</value>
    </property>
    <property>
       <name>yarn.resourcemanager.address</name>
          <value>ipv4:8032</value>
     </property>
      <property>
      <name>yarn.resourcemanager.scheduler.address</name>
       <value>ipv4:8030</value>
      </property>
    <property>
......
</configuration>


/var/data/yarn/local/usercache/apex/appcache/application_1494584453454_
0006/container_1494584453454_0006_02_000001/tmp/chkp3971982956962675237 as
the basepath for checkpointing.
2017-05-12 17:48:58,725 ERROR [main] stram.StreamingAppMaster
(StreamingAppMaster.java:main(106)) - Exiting Application Master
java.lang.NoClassDefFoundError: org/apache/http/conn/
HttpClientConnectionManager
        at com.datatorrent.stram.StreamingContainerManager.
getAppMasterContainerInfo(StreamingContainerManager.java:481)
        at com.datatorrent.stram.StreamingContainerManager.init(
StreamingContainerManager.java:448)
        at com.datatorrent.stram.StreamingContainerManager.<init>(
StreamingContainerManager.java:420)
        at com.datatorrent.stram.StreamingContainerManager.getInstance(
StreamingContainerManager.java:3065)
        at com.datatorrent.stram.StreamingAppMasterService.serviceInit(
StreamingAppMasterService.java:552)
        at org.apache.hadoop.service.AbstractService.init(
AbstractService.java:163)
        at com.datatorrent.stram.StreamingAppMaster.main(
StreamingAppMaster.java:102)
Caused by: java.lang.ClassNotFoundException: org.apache.http.conn.
HttpClientConnectionManager
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 7 more

Also, sometimes the error is that it tries to connect to the ResourceManager
at 0.0.0.0:8030 instead of ip.address:8030, so it never reaches the
ResourceManager.
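
For what it's worth, 0.0.0.0:8032 and 0.0.0.0:8030 are exactly YARN's built-in
defaults for the ResourceManager and scheduler addresses, so seeing them in the
logs suggests the yarn-site.xml values are not visible to that particular
process. A tiny sketch that just prints those defaults (the class name is
hypothetical; it assumes the YARN client jars are on the classpath):

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ShowYarnDefaults {
    public static void main(String[] args) {
        // Built-in fallbacks used when yarn-site.xml is not picked up.
        System.out.println(YarnConfiguration.DEFAULT_RM_ADDRESS);           // 0.0.0.0:8032
        System.out.println(YarnConfiguration.DEFAULT_RM_SCHEDULER_ADDRESS); // 0.0.0.0:8030
    }
}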


On Fri, May 12, 2017 at 5:37 PM, Mohit Jotwani <mo...@datatorrent.com>
wrote:

> Dear Rohit,
>
> Can you check whether you have correctly started Yarn on your machine? It
> looks like it is not able to connect to the Resource Manager.
>
> Regards,
> Mohit
>
> On Fri, May 12, 2017 at 5:30 PM, rohit garg <ro...@gmail.com>
> wrote:
>
>>
>> ---------- Forwarded message ----------
>> From: "rohit garg" <ro...@gmail.com>
>> Date: 12 May 2017 14:02
>> Subject: Hdfs + apex-core
>> To: <Us...@apex.apache.org>
>> Cc:
>>
>> I have installed apache apex core but when I submit a app to run on yarn
>> it tries to connect to 0.0.0.0:8032
>>
>>
>>
>
>
> --
>
> Regards,
>
> ___________________________________________________
>
> *Mohit Jotwani*
>
> Product Manager
>
> E: mohit@datatorrent.com | M: +91 97699 62740
>
> www.datatorrent.com  |  apex.apache.org
>
>
>


-- 


   ---------------RohitGarg.

Re: Hdfs + apex-core

Posted by Mohit Jotwani <mo...@datatorrent.com>.
Dear Rohit,

Can you check whether you have correctly started Yarn on your machine? It
looks like it is not able to connect to the Resource Manager.

Regards,
Mohit

On Fri, May 12, 2017 at 5:30 PM, rohit garg <ro...@gmail.com> wrote:

>
> ---------- Forwarded message ----------
> From: "rohit garg" <ro...@gmail.com>
> Date: 12 May 2017 14:02
> Subject: Hdfs + apex-core
> To: <Us...@apex.apache.org>
> Cc:
>
> I have installed apache apex core but when I submit a app to run on yarn
> it tries to connect to 0.0.0.0:8032
>
>
>


-- 

Regards,

___________________________________________________

*Mohit Jotwani*

Product Manager

E: mohit@datatorrent.com | M: +91 97699 62740

www.datatorrent.com  |  apex.apache.org