Posted to dev@falcon.apache.org by Ed Kohlwey <ek...@gmail.com> on 2014/08/07 18:00:56 UTC

Issue adding a cluster to falcon

I'm attempting to deploy a cluster entity through an Ambari-managed Falcon.
This is a clean install from Ambari 1.6.1.

When submitting the cluster definition, I get the following message:

$ sudo -u admin falcon entity -submit -file cluster.xml -type cluster

Error: Invalid Execute server or port: <hostname redacted>:8050

Cannot initialize Cluster. Please check your configuration for
mapreduce.framework.name and the correspond server addresses.


mapreduce.framework.name is set to yarn (per the Ambari deployment defaults).
My configuration line looks like this:

<interface type="execute" endpoint="<hostname redacted>:8050"
version="2.4.0" />

Enabling debug logging for the hadoop packages in log4j.xml shows that the
connection is being considered but never established.
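For anyone reproducing this, a minimal sketch of the log4j.xml change,
assuming Falcon ships a standard log4j 1.x configuration (the FILE
appender-ref is a guess; reuse whichever appender the file already defines):

    <category name="org.apache.hadoop">
        <priority value="debug" />
        <appender-ref ref="FILE" />
    </category>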

2014-08-07 15:23:03,513 DEBUG - [1141105573@qtp-216944274-0:admin:POST//entities/submit/cluster a1b37e55-34fb-48f3-830d-e8e736f79c75] ~ Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider (Cluster:90)
        at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
        at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
        at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)

2014-08-07 15:23:03,971 INFO  - [1141105573@qtp-216944274-0:admin:POST//entities/submit/cluster a1b37e55-34fb-48f3-830d-e8e736f79c75] ~ Failed to use org.apache.hadoop.mapred.YarnClientProtocolProvider due to error: null (Cluster:113)


Any ideas about what could be wrong are appreciated. I am able to launch
other YARN jobs on this cluster successfully via Hive.
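One way to reproduce this outside Falcon (assuming the Hadoop client tools
are installed on the Falcon host) is to exercise the same JobClient
initialization path directly:

    # the service account name is a guess; use whatever user Falcon runs as
    $ sudo -u falcon mapred job -list

If that command fails with the same "Cannot initialize Cluster" message, the
problem is in the Hadoop client configuration visible to Falcon rather than
in the cluster entity XML.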

Re: Issue adding a cluster to falcon

Posted by Ed Kohlwey <ek...@gmail.com>.
Looking in yarn-site.xml, this does not seem to be true. The port, as
configured by Ambari, is 8050.
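For anyone checking the same thing, a quick way to confirm (assuming the
standard HDP configuration path) is:

    $ grep -A 1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml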

Re: Issue adding a cluster to falcon

Posted by Ed Kohlwey <ek...@gmail.com>.
So I checked out the Falcon server source and saw that the changes between
these revisions are pretty significant. I will see if I can hunt down the
stack trace in the server release code.

Here's the startup banner for the server installed by Ambari 1.6.1:

########################################################################################
                               Falcon Server (STARTUP)

        vc.source.url:  scm:git:
https://git-wip-us.apache.org/repos/asf/incubator-falcon.git/falcon-webapp
        build.epoch:    1403823951964
        project.version:        0.5.0.2.1.3.0-563
        build.user:     jenkins
        vc.revision:    e2e41d74ee91fd04879c4d2fd057e369fceb32e2
        domain: all
        build.version:  0.5.0.2.1.3.0-563-re2e41d74ee91fd04879c4d2fd057e369fceb32e2
########################################################################################


The following shows up in the Falcon logs:

org.apache.falcon.entity.parser.ValidationException: Invalid Execute server or port: <host redacted>:8050
        at org.apache.falcon.entity.parser.ClusterEntityParser.validateExecuteInterface(ClusterEntityParser.java:131)
        at org.apache.falcon.entity.parser.ClusterEntityParser.validate(ClusterEntityParser.java:73)
        at org.apache.falcon.entity.parser.ClusterEntityParser.validate(ClusterEntityParser.java:47)
        at org.apache.falcon.resource.AbstractEntityManager.validate(AbstractEntityManager.java:364)
        at org.apache.falcon.resource.AbstractEntityManager.submitInternal(AbstractEntityManager.java:331)
        at org.apache.falcon.resource.AbstractEntityManager.submit(AbstractEntityManager.java:153)
        at org.apache.falcon.resource.ConfigSyncService.submit(ConfigSyncService.java:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.falcon.resource.channel.IPCChannel.invoke(IPCChannel.java:48)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$1.doExecute(SchedulableEntityManagerProxy.java:118)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$EntityProxy.execute(SchedulableEntityManagerProxy.java:410)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.submit_aroundBody0(SchedulableEntityManagerProxy.java:120)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$AjcClosure1.run(SchedulableEntityManagerProxy.java:1)
        at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
        at org.apache.falcon.aspect.AbstractFalconAspect.logAround(AbstractFalconAspect.java:50)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.submit(SchedulableEntityManagerProxy.java:107)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        at org.apache.falcon.security.BasicAuthFilter$2.doFilter(BasicAuthFilter.java:183)
        at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:392)
        at org.apache.falcon.security.BasicAuthFilter.doFilter(BasicAuthFilter.java:221)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
        at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
        at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
        at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
        at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
        at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
        at org.apache.falcon.hadoop.HadoopClientFactory$2.run(HadoopClientFactory.java:193)
        at org.apache.falcon.hadoop.HadoopClientFactory$2.run(HadoopClientFactory.java:191)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.falcon.hadoop.HadoopClientFactory.validateJobClient(HadoopClientFactory.java:191)
        at org.apache.falcon.entity.parser.ClusterEntityParser.validateExecuteInterface(ClusterEntityParser.java:129)
        ... 58 more
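Reading the trace, the validation boils down to constructing a JobClient
against the execute endpoint. A minimal standalone sketch of that code path
(inferred from the frames above, not copied from the Falcon source; the
class name is made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class ExecuteEndpointCheck {
        public static void main(String[] args) throws Exception {
            // args[0] is the execute endpoint, e.g. "<host redacted>:8050"
            Configuration conf = new Configuration();
            conf.set("mapreduce.framework.name", "yarn");
            conf.set("yarn.resourcemanager.address", args[0]);
            // JobClient's constructor builds an org.apache.hadoop.mapreduce.Cluster,
            // which asks each ClientProtocolProvider to create a client; if every
            // provider declines, it throws the IOException seen in the trace.
            JobClient client = new JobClient(new JobConf(conf));
            System.out.println("JobClient initialized against " + args[0]);
            client.close();
        }
    }

Running something like this with Falcon's classpath should show whether the
failure comes from the Hadoop client jars/config on the Falcon host or from
the entity definition itself.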



Re: Issue adding a cluster to falcon

Posted by Tyler D <td...@gmail.com>.
On Thu, Aug 7, 2014 at 5:30 PM, Ed Kohlwey <ek...@gmail.com> wrote:

> I don't have a copy of the sandbox handy. If you look at the top of the
> logs falcon prints a commit id in a banner of stars when it starts. Can you
> check the version in the sandbox? It may provide a hint.


From the HDP Sandbox:

[root@sandbox ~]# falcon admin -version
Falcon server build version:
{"properties":[{"key":"Version","value":"0.5.0.2.1.1.0-385-rcf16c31b9a36ac8fdfd5dbb1651dc9bb6747a2ff"},{"key":"Mode","value":"embedded"}]}


From /var/log/falcon/falcon.application.log:

########################################################################################
                               Falcon Server (STARTUP)

        vc.source.url:  scm:git:
https://git-wip-us.apache.org/repos/asf/incubator-falcon.git/falcon-webapp
        build.epoch:    1397693230464
        project.version:        0.5.0.2.1.1.0-385
        build.user:     jenkins
        vc.revision:    cf16c31b9a36ac8fdfd5dbb1651dc9bb6747a2ff
        domain: all
        build.version:
0.5.0.2.1.1.0-385-rcf16c31b9a36ac8fdfd5dbb1651dc9bb6747a2ff
########################################################################################

Re: Issue adding a cluster to falcon

Posted by Seetharam Venkatesh <ve...@innerzeal.com>.
I think you have the wrong port.

8050 is the port in the HDP Sandbox, but Ambari uses 8052 as the default.
Please check your install and configure the cluster entity accordingly. The
value is in yarn-site.xml, under the property yarn.resourcemanager.address.

Thanks!





-- 
Regards,
Venkatesh

“Perfection (in design) is achieved not when there is nothing more to add,
but rather when there is nothing more to take away.”
- Antoine de Saint-Exupéry

Re: Issue adding a cluster to falcon

Posted by Ed Kohlwey <ek...@gmail.com>.
I don't have a copy of the sandbox handy. If you look at the top of the
logs falcon prints a commit id in a banner of stars when it starts. Can you
check the version in the sandbox? It may provide a hint.

Sent from my mobile device. Please excuse any typos or shorthand.

Re: Issue adding a cluster to falcon

Posted by Tyler D <td...@gmail.com>.

I am having the exact same issue.  In my case, Falcon is installed through
the HDP 2.1 distribution, which uses Ambari 1.6.1.

At first I thought the problem was a version or port misconfiguration in the
cluster entity spec I was trying to use.

I tried giving Falcon various cluster xml configurations.  All attempts
failed.

I included the xml I tried in this mail.  Also posted to
http://pastebin.com/Ce07vyGv

Hopefully this helps people isolate the problem/eliminate non-problems.

All of these type="execute" interface lines failed to register with the
cluster (which makes me think this xml probably isn't the problem):

<?xml version="1.0"?>
<cluster colo="USWestOregon" description="oregonHadoopCluster"
         name="primaryCluster" xmlns="uri:falcon:cluster:0.1">
    <interfaces>
        <interface type="readonly" endpoint="hftp://vm-centos6-hdp21-a1.hdp.hadoop:50070" version="2.4.0" />
        <interface type="write" endpoint="hdfs://vm-centos6-hdp21-a1.hdp.hadoop:8020" version="2.4.0" />
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="2.2.0" />-->
        <!-- ^^FAILS: gives error like...
             $ falcon entity -type cluster -submit -file /home/ambari-qa/falconChurnDemo/oregonCluster.xml
             Error:Invalid Execute server or port: vm-centos6-hdp21-a1.hdp.hadoop:8050
             Cannot initialize Cluster. Please check your configuration for
             mapreduce.framework.name and the correspond server addresses. (FalconWebException:66)
        -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8088" version="2.2.0" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="2.4.0" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="0.20.2" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8030" version="2.4.0" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8025" version="2.4.0" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8141" version="2.4.0" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="rm:8050" version="0.20.2" />--><!-- ALSO FAILS -->
        <!--<interface type="execute" endpoint="localhost:8050" version="0.20.2" />--><!-- ALSO FAILS -->
        <interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8021" version="2.4.0" /><!-- ALSO FAILS -->
        <interface type="workflow" endpoint="http://vm-centos6-hdp21-a1.hdp.hadoop:11000/oozie/" version="4.0.0" />
        <interface type="messaging" endpoint="tcp://vm-centos6-hdp21-a1.hdp.hadoop:61616?daemon=true" version="5.1.6" />
    </interfaces>
    <locations>
        <location name="staging" path="/apps/falcon/primaryCluster/staging" />
        <location name="temp" path="/tmp" />
        <location name="working" path="/apps/falcon/primaryCluster/working" />
    </locations>
</cluster>

And of course here's the error:
Error:Invalid Execute server or port: vm-centos6-hdp21-a1.hdp.hadoop:8050
Cannot initialize Cluster. Please check your configuration for
mapreduce.framework.name and the correspond server addresses.
(FalconWebException:66)
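If it helps anyone narrow this down, one way to see which ports the
ResourceManager is actually listening on (run on the RM host; the -p flag
needs root) is:

    $ sudo netstat -tlnp | grep java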



Something I thought was weird: when I execute the bin/falcon-status.sh
and/or bin/service-status.sh shell scripts, they always come back saying
that the Falcon server is not running.  That conflicts with reality, though:
the Falcon process is running, the web interface at
http://vm-centos6-hdp21-a1.hdp.hadoop:15000/ is up, and Ambari reports the
service as being up.  Here is me running those commands:

[ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/falcon-status
Hadoop is installed, adding hadoop classpath to falcon classpath
falcon is not running.
[ambari-qa@vm-centos6-hdp21-a1 falcon]$ echo $?
255
[ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh
Invalid option for app: . Valid choices are falcon and prism
[ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh falcon
Hadoop is installed, adding hadoop classpath to falcon classpath
falcon is not running.
[ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh prism
Hadoop is installed, adding hadoop classpath to falcon classpath
mkdir: cannot create directory `/usr/lib/falcon/server/webapp/prism':
Permission denied
/usr/lib/falcon/bin/falcon-config.sh: line 99: cd:
/usr/lib/falcon/server/webapp/prism: No such file or directory
java.io.FileNotFoundException: /usr/lib/falcon/server/webapp/prism.war (No
such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:146)
        at java.io.FileInputStream.<init>(FileInputStream.java:101)
        at sun.tools.jar.Main.run(Main.java:259)
        at sun.tools.jar.Main.main(Main.java:1177)
/usr/lib/falcon/bin/falcon-config.sh: line 101: cd: OLDPWD not set
prism is not running.
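For what it's worth, status scripts like these typically decide "running"
from a pid file rather than from the live process, so a cross-check like the
following may show where they disagree (the pid-file path is a guess; check
falcon-env.sh for the real location):

    $ ps -ef | grep '[f]alcon'                            # is the JVM up?
    $ cat /var/run/falcon/falcon.pid                      # what pid is recorded?
    $ kill -0 $(cat /var/run/falcon/falcon.pid) && echo alive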



I also want to mention that I tried Hortonworks's latest Sandbox VM (
http://hortonassets.s3.amazonaws.com/2.1/vmware/Hortonworks_Sandbox_2.1.ova
); there, the bin/falcon-status.sh and/or bin/service-status.sh scripts come
back as expected, saying that the Falcon server is up and running, and print
its URL.


Thanks Ed for sending out this mail... I guess for me, what's next could be
to investigate the Falcon version differences between the HDP 2.1 cluster I
installed (with the problems) and the Falcon version in the Sandbox VM,
since it appears ... better.