Posted to user@flink.apache.org by ashish pok <as...@yahoo.com> on 2018/03/21 15:11:11 UTC

Error running on Hadoop 2.7

Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session up, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility.
Has anyone seen this and got pointers?
Thanks, Ashish

Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
            at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
            at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
            at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
            at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
            at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
            at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
            at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
   

Re: Error running on Hadoop 2.7

Posted by Stephan Ewen <se...@apache.org>.
Thanks, in that case it sounds like it is more related to Hadoop classpath
mixups, rather than class loading.
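One quick way to spot such a mixup is to look for Hadoop artifacts that appear on the effective classpath in more than one version. A minimal sketch; the sample `CP` value is fabricated, and on a real client you would use `CP="$(hadoop classpath)"`:

```shell
# Sketch: flag Hadoop artifacts present on a classpath in several versions.
# CP below is a made-up example for illustration only.
CP="/opt/a/hadoop-common-2.7.3.jar:/opt/b/hadoop-common-2.9.0.jar:/opt/c/guava-18.0.jar"

# Artifact names that occur with more than one version (empty = no mixup):
echo "$CP" | tr ':' '\n' \
  | grep -o 'hadoop-[a-z-]*-[0-9][0-9.]*\.jar' \
  | sed 's/-[0-9][0-9.]*\.jar$//' \
  | sort | uniq -d
# prints: hadoop-common
```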

On Mon, Mar 26, 2018 at 3:03 PM, ashish pok <as...@yahoo.com> wrote:

> Stephan, we are on 1.4.2.
>
> Thanks,
>
> -- Ashish
>
> On Mon, Mar 26, 2018 at 7:38 AM, Stephan Ewen
> <se...@apache.org> wrote:
> If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have
> Hadoop in your application jar. That can mess up things with child-first
> classloading. 1.4.2 should handle Hadoop properly in any case.
>
> On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <as...@yahoo.com>
> wrote:
>
> Hi Ken,
>
> Yes - we are on 1.4. Thanks for that link - it certainly now explains how
> things are working :)
>
> We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop
> classpath” command basically points to HDP2.6 locations (HDP = Hortonworks Data
> Platform). Best guess I have for this right now is HDP2.6 back ported some
> 2.9 changes into their distro. This is on my list to get to the bottom of
> (hopefully no hiccups till prod) - we double checked our Salt Orchestration
> packages which were used to build the cluster but couldn’t find a reference
> to hadoop 2.9. For now, we are moving on with our testing to prepare for
> deployment with hadoop free version which is using hadoop classpath as
> described in FLINK-7477.
>
> Thanks, Ashish
>
> On Mar 23, 2018, at 12:31 AM, Ken Krugler <kk...@transpac.com>
> wrote:
>
> Hi Ashish,
>
> Are you using Flink 1.4? If so, what does the “hadoop classpath” command
> return from the command line where you’re trying to start the job?
>
> Asking because I’d run into issues with
> https://issues.apache.org/jira/browse/FLINK-7477
> <https://issues.apache.org/jira/browse/FLINK-7477>, where I had an old
> version of Hadoop being referenced by the “hadoop" command.
>
> — Ken
>
>
> On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <as...@yahoo.com> wrote:
>
> Hi All,
>
> Looks like we are out of the woods for now (so we think) - we went with
> Hadoop free version and relied on client libraries on edge node.
>
> However, I am still not very confident as I started digging into that
> stack as well and realized what Till pointed out (the trace leads to a class
> that is part of 2.9). I did dig around env variables and nothing was set.
> This is a brand new cluster installed a week back and our team is
> literally the first hands on deck. I will fish around and see if
> Hortonworks back-ported something for HDP (dots are still not completely
> connected but nonetheless, we have a test session and app running in our
> brand new Prod)
>
> Thanks, Ashish
>
> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <tr...@apache.org> wrote:
>
> Hi Ashish,
>
> the class `RequestHedgingRMFailoverProxyProvider` was only introduced
> with Hadoop 2.9.0. My suspicion is thus that you start the client with some
> Hadoop 2.9.0 dependencies on the class path. Could you please check the
> logs of the client what's on its class path? Maybe you could also share the
> logs with us. Please also check whether HADOOP_CLASSPATH is set to
> something suspicious.
>
> Thanks a lot!
>
> Cheers,
> Till
>
> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <as...@yahoo.com> wrote:
>
> Hi Piotrek,
>
> At this point we are simply trying to start a YARN session.
>
> BTW, we are on Hortonworks HDP 2.6 which is on 2.7 Hadoop if anyone has
> experienced similar issues.
>
> We actually pulled 2.6 binaries for the heck of it and ran into the same
> issues.
>
> I guess we are left with getting non-hadoop binaries and set
> HADOOP_CLASSPATH then?
>
> -- Ashish
>
> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
> <pi...@data-artisans.com> wrote:
> Hi,
>
> > Does a simple word count example work on the cluster after the
> upgrade?
>
> If not, maybe your job is pulling some dependency that’s causing this
> version conflict?
>
> Piotrek
>
> On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
>
> Hi Piotrek,
>
> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>
> Thanks,
>
> -- Ashish
>
> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
> <pi...@data-artisans.com> wrote:
> Hi,
>
> Have you replaced all of your old Flink binaries with freshly downloaded
> <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you
> sure that something hasn't gotten mixed in during the process?
>
> Does a simple word count example work on the cluster after the upgrade?
>
> Piotrek
>
> On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
>
> Hi All,
>
> We ran into a roadblock in our new Hadoop environment, migrating from 2.6
> to 2.7. It was supposed to be an easy lift to get a YARN session up, but it
> doesn't seem that way :) We are definitely using 2.7 binaries, but it looks
> like there is a call here to a private method, which screams runtime
> incompatibility.
>
> Has anyone seen this and got pointers?
>
> Thanks, Ashish
>
> Exception in thread "main" java.lang.IllegalAccessError: tried to access
> method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
> from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>             at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
>             at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
>             at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
>             at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
>             at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
>             at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at javax.security.auth.Subject.doAs(Subject.java:422)
>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>             at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
>
>
>
>
>
>
> --------------------------------------------
> http://about.me/kkrugler
> +1 530-210-6378
>
>
>
>

Re: Error running on Hadoop 2.7

Posted by ashish pok <as...@yahoo.com>.
Stephan, we are on 1.4.2.
Thanks,

-- Ashish 
 
On Mon, Mar 26, 2018 at 7:38 AM, Stephan Ewen <se...@apache.org> wrote:

If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have Hadoop in your application jar. That can mess up things with child-first classloading. 1.4.2 should handle Hadoop properly in any case.
On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <as...@yahoo.com> wrote:

Hi Ken,
Yes - we are on 1.4. Thanks for that link - it certainly now explains how things are working :) 
We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop classpath” command basically points to HDP2.6 locations (HDP = Hortonworks Data Platform). Best guess I have for this right now is that HDP2.6 back-ported some 2.9 changes into their distro. This is on my list to get to the bottom of (hopefully no hiccups till prod) - we double checked our Salt Orchestration packages which were used to build the cluster but couldn’t find a reference to hadoop 2.9. For now, we are moving on with our testing to prepare for deployment with the hadoop-free version, which uses the hadoop classpath approach described in FLINK-7477.
Thanks, Ashish

On Mar 23, 2018, at 12:31 AM, Ken Krugler <kk...@transpac.com> wrote:
Hi Ashish,
Are you using Flink 1.4? If so, what does the “hadoop classpath” command return from the command line where you’re trying to start the job?
Asking because I’d run into issues with https://issues.apache.org/jira/browse/FLINK-7477, where I had an old version of Hadoop being referenced by the “hadoop" command.
— Ken


On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <as...@yahoo.com> wrote:
Hi All,
Looks like we are out of the woods for now (so we think) - we went with Hadoop free version and relied on client libraries on edge node. 
However, I am still not very confident as I started digging into that stack as well and realized what Till pointed out (the trace leads to a class that is part of 2.9). I did dig around env variables and nothing was set. This is a brand new cluster installed a week back and our team is literally the first hands on deck. I will fish around and see if Hortonworks back-ported something for HDP (dots are still not completely connected but nonetheless, we have a test session and app running in our brand new Prod)
Thanks, Ashish

On Mar 22, 2018, at 4:47 AM, Till Rohrmann <tr...@apache.org> wrote:
Hi Ashish,
the class `RequestHedgingRMFailoverProxyProvider` was only introduced with Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop 2.9.0 dependencies on the class path. Could you please check the logs of the client what's on its class path? Maybe you could also share the logs with us. Please also check whether HADOOP_CLASSPATH is set to something suspicious.
Thanks a lot!
Cheers,
Till
On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <as...@yahoo.com> wrote:

Hi Piotrek,
At this point we are simply trying to start a YARN session. 
BTW, we are on Hortonworks HDP 2.6 which is on 2.7 Hadoop if anyone has experienced similar issues. 
We actually pulled 2.6 binaries for the heck of it and ran into the same issues.
I guess we are left with getting non-hadoop binaries and set HADOOP_CLASSPATH then?

-- Ashish 
 
On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski <pi...@data-artisans.com> wrote:

Hi,
> Does a simple word count example work on the cluster after the upgrade?
If not, maybe your job is pulling some dependency that’s causing this version conflict?
Piotrek


On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,

-- Ashish 
 
On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:

Hi,
Have you replaced all of your old Flink binaries with freshly downloaded Hadoop 2.7 versions? Are you sure that something hasn't gotten mixed in during the process?
Does a simple word count example work on the cluster after the upgrade?
Piotrek


On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session up, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility.
Has anyone seen this and got pointers?
Thanks, Ashish

Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
            at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
            at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
            at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
            at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
            at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
            at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
            at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
            at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
            at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)



  


  






--------------------------------------------
http://about.me/kkrugler
+1 530-210-6378




  

Re: Error running on Hadoop 2.7

Posted by Stephan Ewen <se...@apache.org>.
If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have
Hadoop in your application jar. That can mess up things with child-first
classloading. 1.4.2 should handle Hadoop properly in any case.

On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <as...@yahoo.com>
wrote:

> Hi Ken,
>
> Yes - we are on 1.4. Thanks for that link - it certainly now explains how
> things are working :)
>
> We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop
> classpath” command basically points to HDP2.6 locations (HDP = Hortonworks Data
> Platform). Best guess I have for this right now is HDP2.6 back ported some
> 2.9 changes into their distro. This is on my list to get to the bottom of
> (hopefully no hiccups till prod) - we double checked our Salt Orchestration
> packages which were used to build the cluster but couldn’t find a reference
> to hadoop 2.9. For now, we are moving on with our testing to prepare for
> deployment with hadoop free version which is using hadoop classpath as
> described in FLINK-7477.
>
> Thanks, Ashish
>
> On Mar 23, 2018, at 12:31 AM, Ken Krugler <kk...@transpac.com>
> wrote:
>
> Hi Ashish,
>
> Are you using Flink 1.4? If so, what does the “hadoop classpath” command
> return from the command line where you’re trying to start the job?
>
> Asking because I’d run into issues with
> https://issues.apache.org/jira/browse/FLINK-7477, where I had an old
> version of Hadoop being referenced by the “hadoop" command.
>
> — Ken
>
>
> On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <as...@yahoo.com> wrote:
>
> Hi All,
>
> Looks like we are out of the woods for now (so we think) - we went with
> Hadoop free version and relied on client libraries on edge node.
>
> However, I am still not very confident as I started digging into that
> stack as well and realized what Till pointed out (the trace leads to a class
> that is part of 2.9). I did dig around env variables and nothing was set.
> This is a brand new cluster installed a week back and our team is
> literally the first hands on deck. I will fish around and see if
> Hortonworks back-ported something for HDP (dots are still not completely
> connected but nonetheless, we have a test session and app running in our
> brand new Prod)
>
> Thanks, Ashish
>
> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <tr...@apache.org> wrote:
>
> Hi Ashish,
>
> the class `RequestHedgingRMFailoverProxyProvider` was only introduced
> with Hadoop 2.9.0. My suspicion is thus that you start the client with some
> Hadoop 2.9.0 dependencies on the class path. Could you please check the
> logs of the client what's on its class path? Maybe you could also share the
> logs with us. Please also check whether HADOOP_CLASSPATH is set to
> something suspicious.
>
> Thanks a lot!
>
> Cheers,
> Till
>
> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <as...@yahoo.com> wrote:
>
>> Hi Piotrek,
>>
>> At this point we are simply trying to start a YARN session.
>>
>> BTW, we are on Hortonworks HDP 2.6 which is on 2.7 Hadoop if anyone has
>> experienced similar issues.
>>
>> We actually pulled 2.6 binaries for the heck of it and ran into the same
>> issues.
>>
>> I guess we are left with getting non-hadoop binaries and set
>> HADOOP_CLASSPATH then?
>>
>> -- Ashish
>>
>> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
>> <pi...@data-artisans.com> wrote:
>> Hi,
>>
>> > Does a simple word count example work on the cluster after the
>> upgrade?
>>
>> If not, maybe your job is pulling some dependency that’s causing this
>> version conflict?
>>
>> Piotrek
>>
>> On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
>>
>> Hi Piotrek,
>>
>> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>>
>> Thanks,
>>
>> -- Ashish
>>
>> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
>> <pi...@data-artisans.com> wrote:
>> Hi,
>>
>> Have you replaced all of your old Flink binaries with freshly downloaded
>> <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you
>> sure that something hasn't gotten mixed in during the process?
>>
>> Does a simple word count example work on the cluster after the
>> upgrade?
>>
>> Piotrek
>>
>> On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
>>
>> Hi All,
>>
>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6
>> to 2.7. It was supposed to be an easy lift to get a YARN session up, but it
>> doesn't seem that way :) We are definitely using 2.7 binaries, but it looks
>> like there is a call here to a private method, which screams runtime
>> incompatibility.
>>
>> Has anyone seen this and got pointers?
>>
>> Thanks, Ashish
>>
>> Exception in thread "main" java.lang.IllegalAccessError: tried to access
>> method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
>> from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>>             at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
>>             at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
>>             at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
>>             at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
>>             at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
>>             at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
>>             at java.security.AccessController.doPrivileged(Native Method)
>>             at javax.security.auth.Subject.doAs(Subject.java:422)
>>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>>             at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
>>
>>
>>
>>
>
>
> --------------------------------------------
> http://about.me/kkrugler
> +1 530-210-6378
>
>
>

Re: Error running on Hadoop 2.7

Posted by Ashish Pokharel <as...@yahoo.com>.
Hi Ken,

Yes - we are on 1.4. Thanks for that link - it certainly now explains how things are working :) 

We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop classpath” command basically points to HDP2.6 locations (HDP = Hortonworks Data Platform). Best guess I have for this right now is that HDP2.6 back-ported some 2.9 changes into their distro. This is on my list to get to the bottom of (hopefully no hiccups till prod) - we double checked our Salt Orchestration packages which were used to build the cluster but couldn’t find a reference to hadoop 2.9. For now, we are moving on with our testing to prepare for deployment with the hadoop-free version, which uses the hadoop classpath approach described in FLINK-7477.
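The hadoop-free deployment described above boils down to exporting the cluster's own classpath before starting the session, per FLINK-7477. A sketch under assumptions: the install path is illustrative, and the yarn-session flags shown are the Flink 1.4-era options.

```shell
# Sketch: start a YARN session from a Hadoop-free Flink distribution,
# borrowing the cluster's own Hadoop jars (the FLINK-7477 approach).
# Paths and resource flags below are assumptions, not a prescription.
export HADOOP_CLASSPATH="$(hadoop classpath)"
cd /opt/flink-1.4.2            # assumed install location
./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048
```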

Thanks, Ashish

> On Mar 23, 2018, at 12:31 AM, Ken Krugler <kk...@transpac.com> wrote:
> 
> Hi Ashish,
> 
> Are you using Flink 1.4? If so, what does the “hadoop classpath” command return from the command line where you’re trying to start the job?
> 
> Asking because I’d run into issues with https://issues.apache.org/jira/browse/FLINK-7477 <https://issues.apache.org/jira/browse/FLINK-7477>, where I had an old version of Hadoop being referenced by the “hadoop" command.
> 
> — Ken
> 
> 
>> On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>> 
>> Hi All,
>> 
>> Looks like we are out of the woods for now (so we think) - we went with Hadoop free version and relied on client libraries on edge node. 
>> 
>> However, I am still not very confident as I started digging into that stack as well and realized what Till pointed out (the trace leads to a class that is part of 2.9). I did dig around env variables and nothing was set. This is a brand new cluster installed a week back and our team is literally the first hands on deck. I will fish around and see if Hortonworks back-ported something for HDP (dots are still not completely connected but nonetheless, we have a test session and app running in our brand new Prod)
>> 
>> Thanks, Ashish
>> 
>>> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <trohrmann@apache.org <ma...@apache.org>> wrote:
>>> 
>>> Hi Ashish,
>>> 
>>> the class `RequestHedgingRMFailoverProxyProvider` was only introduced with Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop 2.9.0 dependencies on the class path. Could you please check the logs of the client what's on its class path? Maybe you could also share the logs with us. Please also check whether HADOOP_CLASSPATH is set to something suspicious.
>>> 
>>> Thanks a lot!
>>> 
>>> Cheers,
>>> Till
>>> 
>>> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>> Hi Piotrek,
>>> 
>>> At this point we are simply trying to start a YARN session. 
>>> 
>>> BTW, we are on Hortonworks HDP 2.6 which is on 2.7 Hadoop if anyone has experienced similar issues. 
>>> 
>>> We actually pulled 2.6 binaries for the heck of it and ran into the same issues.
>>> 
>>> I guess we are left with getting non-hadoop binaries and set HADOOP_CLASSPATH then?
>>> 
>>> -- Ashish
>>> 
>>> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
>>> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>>> Hi,
>>> 
>>> > Does a simple word count example work on the cluster after the upgrade?
>>> 
>>> If not, maybe your job is pulling some dependency that’s causing this version conflict?
>>> 
>>> Piotrek
>>> 
>>>> On 21 Mar 2018, at 16:52, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>>> 
>>>> Hi Piotrek,
>>>> 
>>>> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>>>> 
>>>> Thanks,
>>>> 
>>>> -- Ashish
>>>> 
>>>> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
>>>> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>>>> Hi,
>>>> 
>>>> Have you replaced all of your old Flink binaries with freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure that something hasn't gotten mixed in during the process?
>>>> 
>>>> Does a simple word count example work on the cluster after the upgrade?
>>>> 
>>>> Piotrek
>>>> 
>>>>> On 21 Mar 2018, at 16:11, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>>>> 
>>>>> Hi All,
>>>>> 
>>>>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session up, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility.
>>>>> 
>>>>> Has anyone seen this and got pointers?
>>>>> 
>>>>> Thanks, Ashish
>>>>> Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>>>>>             at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
>>>>>             at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
>>>>>             at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
>>>>>             at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
>>>>>             at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
>>>>>             at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
>>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
>>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
>>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
>>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
>>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
>>>>>             at java.security.AccessController.doPrivileged(Native Method)
>>>>>             at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>>>>>             at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
>>>>> 
>>>> 
>>> 
>>> 
>> 
> 
> --------------------------------------------
> http://about.me/kkrugler <http://about.me/kkrugler>
> +1 530-210-6378
> 


Re: Error running on Hadoop 2.7

Posted by Ken Krugler <kk...@transpac.com>.
Hi Ashish,

Are you using Flink 1.4? If so, what does the “hadoop classpath” command return from the command line where you’re trying to start the job?

Asking because I’d run into issues with https://issues.apache.org/jira/browse/FLINK-7477 <https://issues.apache.org/jira/browse/FLINK-7477>, where I had an old version of Hadoop being referenced by the “hadoop" command.

— Ken
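A sketch of the check Ken is asking about, assuming a standard `hadoop` wrapper on the PATH (output will vary per distribution):

```shell
# Sketch: show which Hadoop the "hadoop" wrapper resolves to, and which
# hadoop-common jars it would place on the classpath.
hadoop version | head -n 1
hadoop classpath | tr ':' '\n' | sort -u | grep 'hadoop-common' || true
```

If the reported version or the jar paths point at an older (or newer) Hadoop than the one you installed Flink for, that is the FLINK-7477 situation.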


> On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <as...@yahoo.com> wrote:
> 
> Hi All,
> 
> Looks like we are out of the woods for now (so we think) - we went with the Hadoop-free version and relied on the client libraries on the edge node. 
> 
> However, I am still not very confident, as I started digging into that stack as well and realized what Till pointed out (the trace leads to a class that is part of 2.9). I did dig around the env variables and nothing was set. This is a brand new cluster, installed a week back, and our team is literally the first hands on deck. I will fish around and see if Hortonworks back-ported something for HDP (the dots are still not completely connected, but nonetheless, we have a test session and app running in our brand new Prod).
> 
> Thanks, Ashish
> 
>> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <trohrmann@apache.org <ma...@apache.org>> wrote:
>> 
>> Hi Ashish,
>> 
>> the class `RequestHedgingRMFailoverProxyProvider` was only introduced with Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop 2.9.0 dependencies on the class path. Could you please check in the client's logs what's on its class path? Maybe you could also share the logs with us. Please also check whether HADOOP_CLASSPATH is set to something suspicious.
>> 
>> Thanks a lot!
>> 
>> Cheers,
>> Till
>> 
>> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>> Hi Piotrek,
>> 
>> At this point we are simply trying to start a YARN session. 
>> 
>> BTW, we are on Hortonworks HDP 2.6, which runs Hadoop 2.7, if anyone has experienced similar issues. 
>> 
>> We actually pulled the 2.6 binaries for the heck of it and ran into the same issues. 
>> 
>> I guess we are left with getting the Hadoop-free binaries and setting HADOOP_CLASSPATH then?
>> 
>> -- Ashish
>> 
>> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
>> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>> Hi,
>> 
>> > Does a simple word count example work on the cluster after the upgrade?
>> 
>> If not, maybe your job is pulling some dependency that’s causing this version conflict?
>> 
>> Piotrek
>> 
>>> On 21 Mar 2018, at 16:52, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>> 
>>> Hi Piotrek,
>>> 
>>> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>>> 
>>> Thanks,
>>> 
>>> -- Ashish
>>> 
>>> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
>>> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>>> Hi,
>>> 
>>> Have you replaced all of your old Flink binaries with freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?
>>> 
>>> Does a simple word count example work on the cluster after the upgrade?
>>> 
>>> Piotrek
>>> 
>>>> On 21 Mar 2018, at 16:11, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
>>>> 
>>>> Has anyone seen this and have any pointers?
>>>> 
>>>> Thanks, Ashish
>>>> Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>>>>             at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
>>>>             at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
>>>>             at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
>>>>             at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
>>>>             at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
>>>>             at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
>>>>             at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
>>>>             at java.security.AccessController.doPrivileged(Native Method)
>>>>             at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>>>>             at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>             at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
>>>> 
>>> 
>> 
>> 
> 

--------------------------------------------
http://about.me/kkrugler
+1 530-210-6378


Re: Error running on Hadoop 2.7

Posted by Ashish Pokharel <as...@yahoo.com>.
Hi All,

Looks like we are out of the woods for now (so we think) - we went with the Hadoop-free version and relied on the client libraries on the edge node. 

However, I am still not very confident, as I started digging into that stack as well and realized what Till pointed out (the trace leads to a class that is part of 2.9). I did dig around the env variables and nothing was set. This is a brand new cluster, installed a week back, and our team is literally the first hands on deck. I will fish around and see if Hortonworks back-ported something for HDP (the dots are still not completely connected, but nonetheless, we have a test session and app running in our brand new Prod).
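For the record, the edge-node setup is roughly the following (paths and yarn-session flags are illustrative examples, not our exact values):

```shell
# Hadoop-free Flink 1.4: let the cluster's own client libraries provide Hadoop.
export HADOOP_CONF_DIR=/etc/hadoop/conf        # example config path
export HADOOP_CLASSPATH=$(hadoop classpath)    # pick up the vendor's 2.7 jars
./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048   # example session sizing
```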

Thanks, Ashish

> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <tr...@apache.org> wrote:
> 
> Hi Ashish,
> 
> the class `RequestHedgingRMFailoverProxyProvider` was only introduced with Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop 2.9.0 dependencies on the class path. Could you please check in the client's logs what's on its class path? Maybe you could also share the logs with us. Please also check whether HADOOP_CLASSPATH is set to something suspicious.
> 
> Thanks a lot!
> 
> Cheers,
> Till
> 
> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
> Hi Piotrek,
> 
> At this point we are simply trying to start a YARN session. 
> 
> BTW, we are on Hortonworks HDP 2.6, which runs Hadoop 2.7, if anyone has experienced similar issues. 
> 
> We actually pulled the 2.6 binaries for the heck of it and ran into the same issues. 
> 
> I guess we are left with getting the Hadoop-free binaries and setting HADOOP_CLASSPATH then?
> 
> -- Ashish
> 
> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
> Hi,
> 
> > Does a simple word count example work on the cluster after the upgrade?
> 
> If not, maybe your job is pulling some dependency that’s causing this version conflict?
> 
> Piotrek
> 
>> On 21 Mar 2018, at 16:52, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>> 
>> Hi Piotrek,
>> 
>> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>> 
>> Thanks,
>> 
>> -- Ashish
>> 
>> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
>> <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>> Hi,
>> 
>> Have you replaced all of your old Flink binaries with freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?
>> 
>> Does a simple word count example work on the cluster after the upgrade?
>> 
>> Piotrek
>> 
>>> On 21 Mar 2018, at 16:11, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>>> 
>>> Hi All,
>>> 
>>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
>>> 
>>> Has anyone seen this and have any pointers?
>>> 
>>> Thanks, Ashish
>>> 
>> 
> 
> 


Re: Error running on Hadoop 2.7

Posted by Till Rohrmann <tr...@apache.org>.
Hi Ashish,

the class `RequestHedgingRMFailoverProxyProvider` was only introduced with
Hadoop 2.9.0. My suspicion is thus that you start the client with some
Hadoop 2.9.0 dependencies on the class path. Could you please check in the
client's logs what's on its class path? Maybe you could also share the
logs with us. Please also check whether HADOOP_CLASSPATH is set to
something suspicious.
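To see whether such a 2.9 jar sneaked in, you could scan the client's class path, for example like this (the class path string below is a made-up example; in a real check you would feed in the output of `hadoop classpath`):

```shell
# Sketch: scan a colon-separated class path for YARN client jars that could
# smuggle in 2.9 classes. CP below is a made-up example; on the real client
# machine you would use CP=$(hadoop classpath) instead.
CP="/usr/hdp/2.6.x/hadoop/hadoop-common-2.7.3.jar:/opt/extra/hadoop-yarn-client-2.9.0.jar"
echo "$CP" | tr ':' '\n' | grep 'hadoop-yarn-client'
# -> /opt/extra/hadoop-yarn-client-2.9.0.jar
```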

Thanks a lot!

Cheers,
Till

On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <as...@yahoo.com> wrote:

> Hi Piotrek,
>
> At this point we are simply trying to start a YARN session.
>
> BTW, we are on Hortonworks HDP 2.6, which runs Hadoop 2.7, if anyone
> has experienced similar issues.
>
> We actually pulled the 2.6 binaries for the heck of it and ran into the
> same issues.
>
> I guess we are left with getting the Hadoop-free binaries and setting
> HADOOP_CLASSPATH then?
>
> -- Ashish
>
> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
> <pi...@data-artisans.com> wrote:
> Hi,
>
> > Does a simple word count example work on the cluster after the
> upgrade?
>
> If not, maybe your job is pulling some dependency that’s causing this
> version conflict?
>
> Piotrek
>
> On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
>
> Hi Piotrek,
>
> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>
> Thanks,
>
> -- Ashish
>
> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
> <pi...@data-artisans.com> wrote:
> Hi,
>
> Have you replaced all of your old Flink binaries with freshly downloaded
> <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you
> sure nothing got mixed up in the process?
>
> Does a simple word count example work on the cluster after the upgrade?
>
> Piotrek
>
> On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
>
> Hi All,
>
> We ran into a roadblock in our new Hadoop environment, migrating from 2.6
> to 2.7. It was supposed to be an easy lift to get a YARN session, but it
> doesn't seem that way :) We are definitely using 2.7 binaries, but it looks
> like there is a call here to a private method, which screams runtime
> incompatibility.
>
> Has anyone seen this and have any pointers?
>
> Thanks, Ashish
>
>
>
>
>

Re: Error running on Hadoop 2.7

Posted by Kien Truong <du...@gmail.com>.
Hi Ashish,


Yeah, we also had this problem before.

It can be solved by recompiling Flink with the HDP version of Hadoop 
according to the instructions here:

https://ci.apache.org/projects/flink/flink-docs-release-1.4/start/building.html#vendor-specific-versions
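The build boils down to something like this (the HDP hadoop.version below is only an example; substitute your cluster's exact version string):

```shell
# Build Flink against the vendor's Hadoop; the vendor-repos profile enables
# the Hortonworks/Cloudera/MapR Maven repositories, per the linked docs.
mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.7.3.2.6.2.0-205
```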


Regards,

Kien

On 3/22/2018 12:25 AM, ashish pok wrote:
> Hi Piotrek,
>
> At this point we are simply trying to start a YARN session.
>
> BTW, we are on Hortonworks HDP 2.6, which runs Hadoop 2.7, if anyone
> has experienced similar issues.
>
> We actually pulled the 2.6 binaries for the heck of it and ran into the
> same issues.
>
> I guess we are left with getting the Hadoop-free binaries and setting
> HADOOP_CLASSPATH then?
>
> -- Ashish
>
>     On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
>     <pi...@data-artisans.com> wrote:
>     Hi,
>
>     > Does a simple word count example work on the cluster after
>     the upgrade?
>
>     If not, maybe your job is pulling some dependency that’s causing
>     this version conflict?
>
>     Piotrek
>
>>     On 21 Mar 2018, at 16:52, ashish pok <ashishpok@yahoo.com
>>     <ma...@yahoo.com>> wrote:
>>
>>     Hi Piotrek,
>>
>>     Yes, this is a brand new Prod environment. 2.6 was in our lab.
>>
>>     Thanks,
>>
>>     -- Ashish
>>
>>         On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
>>         <piotr@data-artisans.com <ma...@data-artisans.com>> wrote:
>>         Hi,
>>
>>         Have you replaced all of your old Flink binaries with freshly
>>         downloaded <https://flink.apache.org/downloads.html> Hadoop
>>         2.7 versions? Are you sure nothing got mixed up in the
>>         process?
>>
>>         Does a simple word count example work on the cluster
>>         after the upgrade?
>>
>>         Piotrek
>>
>>>         On 21 Mar 2018, at 16:11, ashish pok <ashishpok@yahoo.com
>>>         <ma...@yahoo.com>> wrote:
>>>
>>>         Hi All,
>>>
>>>         We ran into a roadblock in our new Hadoop environment,
>>>         migrating from 2.6 to 2.7. It was supposed to be an easy
>>>         lift to get a YARN session, but it doesn't seem that way :)
>>>         We are definitely using 2.7 binaries, but it looks like there
>>>         is a call here to a private method, which screams runtime
>>>         incompatibility.
>>>
>>>         Has anyone seen this and have any pointers?
>>>
>>>         Thanks, Ashish
>>>
>>>
>>
>

Re: Error running on Hadoop 2.7

Posted by ashish pok <as...@yahoo.com>.
Hi Piotrek,
At this point we are simply trying to start a YARN session. 
BTW, we are on Hortonworks HDP 2.6, which runs Hadoop 2.7, if anyone has experienced similar issues. 
We actually pulled the 2.6 binaries for the heck of it and ran into the same issues. 
I guess we are left with getting the Hadoop-free binaries and setting HADOOP_CLASSPATH then?

-- Ashish 
 
  On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski <pi...@data-artisans.com> wrote:
Hi,
> Does a simple word count example work on the cluster after the upgrade?
If not, maybe your job is pulling some dependency that’s causing this version conflict?
Piotrek


On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,

-- Ashish 
 
  On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:
Hi,
Have you replaced all of your old Flink binaries with freshly downloaded Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?
Does a simple word count example work on the cluster after the upgrade?
Piotrek


On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
Has anyone seen this and have any pointers?
Thanks, Ashish




Re: Error running on Hadoop 2.7

Posted by Piotr Nowojski <pi...@data-artisans.com>.
Hi,

> Does a simple word count example work on the cluster after the upgrade?

If not, maybe your job is pulling some dependency that’s causing this version conflict?

Piotrek

> On 21 Mar 2018, at 16:52, ashish pok <as...@yahoo.com> wrote:
> 
> Hi Piotrek,
> 
> Yes, this is a brand new Prod environment. 2.6 was in our lab.
> 
> Thanks,
> 
> -- Ashish
> 
> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
> <pi...@data-artisans.com> wrote:
> Hi,
> 
> Have you replaced all of your old Flink binaries with freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?
> 
> Does a simple word count example work on the cluster after the upgrade?
> 
> Piotrek
> 
>> On 21 Mar 2018, at 16:11, ashish pok <ashishpok@yahoo.com <ma...@yahoo.com>> wrote:
>> 
>> Hi All,
>> 
>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
>> 
>> Has anyone seen this and have any pointers?
>> 
>> Thanks, Ashish
>> 
> 


Re: Error running on Hadoop 2.7

Posted by ashish pok <as...@yahoo.com>.
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,

-- Ashish 
 
  On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:
Hi,
Have you replaced all of your old Flink binaries with freshly downloaded Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?
Does a simple word count example work on the cluster after the upgrade?
Piotrek


On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
Has anyone seen this and have any pointers?
Thanks, Ashish




  

Re: Error running on Hadoop 2.7

Posted by Piotr Nowojski <pi...@data-artisans.com>.
Hi,

Have you replaced all of your old Flink binaries with freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure nothing got mixed up in the process?

Does a simple word count example work on the cluster after the upgrade?

Piotrek

> On 21 Mar 2018, at 16:11, ashish pok <as...@yahoo.com> wrote:
> 
> Hi All,
> 
> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't seem that way :) We are definitely using 2.7 binaries, but it looks like there is a call here to a private method, which screams runtime incompatibility. 
> 
> Has anyone seen this and have any pointers?
> 
> Thanks, Ashish
>