Posted to dev@flink.apache.org by "Andy M." <aj...@gmail.com> on 2017/10/02 13:52:02 UTC

Unable to write snapshots to S3 on EMR

Hello,

I am about to deploy my first Flink project to production, but I am
running into a big hurdle: I am unable to launch my project so that it can
write to an S3 bucket.  The project runs on an EMR cluster, where I
have installed Flink 1.3.2.  I am using YARN to launch the application, and
it runs fine unless I enable checkpointing (with an S3
target).  I want to use RocksDB as my checkpointing backend.  I have
asked in a few places, and I am still unable to find a solution to this
problem.  Here are my steps for creating the cluster and launching the
application; perhaps I am missing a step.  I'd be happy to provide any
additional information if needed.

AWS Portal:

    1) EMR -> Create Cluster
    2) Advanced Options
    3) Release = emr-5.8.0
    4) Only select Hadoop 2.7.3
    5) Next -> Next -> Next -> Create Cluster (I do fill out names/keys/etc.)

Once the cluster is up I ssh into the Master and do the following:

    1  wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
    2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
    3  cd flink-1.3.2
    4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
    5  Change conf/flink-conf.yaml
    6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar

In my conf/flink-conf.yaml I add the following fields:

    state.backend: rocksdb
    state.backend.fs.checkpointdir: s3://bucket/location
    state.checkpoints.dir: s3://bucket/location
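
For reference, I have not touched core-site.xml, since on EMR the S3
filesystem is supposed to be preconfigured.  If manual setup were needed, my
understanding of the Flink AWS deployment guide is that the S3AFileSystem
wiring would look roughly like the sketch below (placeholder values, not a
verified configuration):

```xml
<!-- Hypothetical core-site.xml fragment, following the Flink AWS guide -->
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- local buffer directory used by S3A for uploads; placeholder path -->
  <property>
    <name>fs.s3a.buffer.dir</name>
    <value>/tmp</value>
  </property>
</configuration>
```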

My program's checkpointing setup:

    env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
    env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
    env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
    env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
    env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
    env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))

Re: Unable to write snapshots to S3 on EMR

Posted by Aljoscha Krettek <al...@apache.org>.
Hi,

I can't spot anything obviously wrong with your config. I've been quite busy lately with traveling and release preparations. I will get back to this and try it myself, though, once I have time again.

Best,
Aljoscha

> On 12. Oct 2017, at 17:07, Andy M. <aj...@gmail.com> wrote:
> 
> Hi Aljoscha,
> 
> That didn't seem to do the trick either.
> 
> I see emr-5.9.0 is released with Flink 1.3.2, so I tried that and got the
> same problem.  All I did was upload my Scala .jar to the master, update my
> flink-conf.yaml, set my env variables, and run it with the following
> command: HADOOP_CONF_DIR=/etc/hadoop/conf flink run -m yarn-cluster -yn 5
> -ys 2 ~/flink-consumer.jar
> 
> I am beginning to think something else may be wrong with my configuration.
> Does the following look correct?
> 
> In flink-conf.yaml:
> state.backend: rocksdb
> state.backend.fs.checkpointdir: s3://org/flink-project/state
> state.checkpoints.dir: s3://org/flink-project/state
> 
> In the code:
> env.enableCheckpointing(getCheckpointRate,CheckpointingMode.EXACTLY_ONCE)
> env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> env.getCheckpointConfig.setMinPauseBetweenCheckpoints(1000)
> env.getCheckpointConfig.setCheckpointTimeout(5000)
> env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> env.setStateBackend(new RocksDBStateBackend("s3://org/flink-project/state", true))
> 
> Thank you
> 
> 
> 
> On Wed, Oct 11, 2017 at 5:19 AM, Aljoscha Krettek <al...@apache.org>
> wrote:
> 
>> Hi Andy,
>> 
>> I remember that I was testing a job with almost exactly the same setup as
>> part of the Flink 1.3.2 release testing. The command I used to start my job
>> is roughly this:
>> 
>> HADOOP_CONF_DIR=/etc/hadoop/conf bin/flink run -c my.main.Class -m
>> yarn-cluster -yn 5 -ys 2 ...
>> 
>> i.e. I export the proper hadoop config dir and I run a per-job YARN
>> cluster. I think I also exported the result of "hadoop classpath" as
>> HADOOP_CLASSPATH.
>> 
>> Best,
>> Aljoscha
>> 
>>> On 10. Oct 2017, at 16:43, Andy M. <aj...@gmail.com> wrote:
>>> 
>>> Hello,
>>> 
>>> Bowen: Unless I am missing something, it says no setup is needed on EMR.
>>> Each topic says: "You don’t have to configure this manually if you are
>>> running Flink on EMR."  S3 access from the CLI works fine on my clusters.
>>> 
>>> Chen: Thank you for this; I will look into it if I am unable to get this
>>> running on YARN successfully.
>>> 
>>> Stephan: Removing that library causes the flink (flink-1.3.2/bin/flink)
>>> bash script to fail; the underlying Java command needs it.  I also tried
>>> explicitly setting the classpath for the java call to point at the Hadoop
>>> library jars.  This is the original java command that I was trying to run:
>>> 
>>> java
>>> -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
>>> -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
>>> -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
>>> -classpath
>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
>>> org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
>>> /home/hadoop/flink-consumer.jar
>>> 
>>> 
>>> This is what I changed it to (removing the shaded-hadoop2-uber jar and
>>> adding in the Hadoop folder jars):
>>> 
>>> java
>>> -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-
>> ip-172-31-19-27.log
>>> -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/
>> log4j-cli.properties
>>> -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/
>> conf/logback.xml
>>> -classpath
>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
>> home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/
>> flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-
>> 1.3.2/lib/flink-dist_2.11-1.3.2.jar:/usr/lib/hadoop/lib/
>> activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:
>> /usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/
>> lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/
>> apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
>> lang-2.6.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/
>> usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/
>> lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/
>> lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/java-
>> xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.
>> jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/
>> lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/
>> jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/
>> usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
>> lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.
>> 3-1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
>> lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/curator-
>> client-2.7.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.
>> jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/
>> hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/curator-
>> framework-2.7.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
>> jar:/usr/lib/hadoop/lib/slf4j-api-1.7.10.jar:/usr/lib/
>> hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/
>> curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/jersey-
>> server-1.9.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.
>> 10.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
>> jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/
>> jets3t-0.9.0.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
>> 1.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/
>> hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jettison-1.
>> 1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/
>> hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
>> hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jetty-6.1.26-
>> emr.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-
>> collections-3.2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.
>> 1.0-incubating.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-
>> emr.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/
>> lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/
>> httpclient-4.5.3.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/
>> usr/lib/hadoop/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop/
>> lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/
>> httpcore-4.4.4.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/
>> usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/
>> hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/
>> jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-httpclient-
>> 3.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/
>> lib/hadoop/lib/junit-4.11.jar:/etc/hadoop/conf
>>> org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
>>> /home/hadoop/flink-consumer.jar
>>> 
>>> The latter throws the following error:
>>> 
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/VersionInfo
>>>       at org.apache.flink.runtime.util.EnvironmentInformation.logEnvironmentInfo(EnvironmentInformation.java:283)
>>>       at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1124)
>>> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.VersionInfo
>>>       at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>>       at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>>>       at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>>       ... 2 more
>>> 
>>> Simply removing the .jar from the folder causes the same error.
>>> 
>>> Thank you
>>> 
>>> 
>>> 
>>> 
>>> On Mon, Oct 9, 2017 at 5:46 AM, Stephan Ewen <se...@apache.org> wrote:
>>> 
>>>> Hi!
>>>> 
>>>> It looks like multiple Hadoop versions are in the classpath: Flink's
>>>> Hadoop jar and the EMR Hadoop jars.
>>>> I would simply drop Flink's own Hadoop dependency and only use the EMR
>>>> Hadoop jars.
>>>> 
>>>> Delete the 'flink-shaded-hadoop2-uber' jar from Flink, and make sure the
>>>> setup is such that the Hadoop lib environment variable is set. Then it
>>>> should not have conflicts any more.
>>>> 
>>>> 
>>>> 
>>>> On Sun, Oct 8, 2017 at 12:08 AM, Chen Qin <qi...@gmail.com> wrote:
>>>> 
>>>>> Attached is my side project, verified working, that deploys the
>>>>> jobmanager and taskmanager as stateless services (non-YARN/Mesos);
>>>>> configuration here:
>>>>> 
>>>>> https://github.com/chenqin/flink-jar/tree/master/config/hadoop
>>>>> 
>>>>> more detail here:
>>>>> https://github.com/chenqin/flink-jar/blob/master/src/main/java/FlinkBootstrap.java#L49
>>>>> 
>>>>> On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com>
>>>> wrote:
>>>>> 
>>>>>> Hi Andy,
>>>>>> 
>>>>>> I believe it's because you didn't set your s3 impl correctly. Try to
>>>>>> set up your core-site.xml by following
>>>>>> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3afilesystem-recommended
>>>>>> 
>>>>>> Bowen
>>>>>> 
>>>>>> On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
>>>>>> 
>>>>>>> Hi Till,
>>>>>>> 
>>>>>>> Seems like everything is in line there: hadoop-common.jar ->
>>>>>>> hadoop-common-2.7.3-amzn-3.jar
>>>>>>> 
>>>>>>> And when I decompiled that jar I see public void
>>>>>>> addResource(Configuration conf) in org/apache/hadoop/conf/Configuration.java
>>>>>>> 
>>>>>>> I agree that an incorrect version of the jar is probably being run. Is
>>>>>>> there a way to limit the classpath for the TaskManager when starting
>>>>>>> the job?
>>>>>>> 
>>>>>>> Thank you
>>>>>>> 
>>>>>>> On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> Hi Andy,
>>>>>>>> 
>>>>>>>> could you check which Hadoop version this jar
>>>>>>>> /usr/lib/hadoop/hadoop-common.jar is? Maybe also check whether the
>>>>>>>> contained Hadoop Configuration class has the method
>>>>>>>> Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V.
>>>>>>>> Maybe this jar is the culprit because it comes from a different
>>>>>>>> Hadoop version.
>>>>>>>> 
>>>>>>>> Cheers,
>>>>>>>> Till
>>>>>>>> 
>>>>>>>> On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
>>>>>>>> 
>>>>>>>>> Hi Till,
>>>>>>>>> 
>>>>>>>>> I believe this is what you are looking for; the classpath is much
>>>>>>>>> bigger for the task manager.  I can also post the whole log file if
>>>>>>>>> needed:
>>>>>>>>> 
>>>>>>>>> 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
>>>>>>>> YarnTaskManagerRunner
>>>>>>>>>                -  Classpath:
>>>>>>>>> flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
>>>>>>>>> python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
>>>>>>>>> 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
>>>>>>>>> log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
>>>>>>>>> etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
>>>>>>>>> amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
>>>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>>>> nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
>>>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
>>>>>>>>> streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
>>>>>>>>> usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
>>>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
>>>>>>>>> hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
>>>>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
>>>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
>>>>>>>>> jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
>>>>>>>>> hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>>>> datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>>>> archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
>>>>>>>>> hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
>>>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
>>>>>>>>> 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
>>>>>>>>> jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
>>>>>>>>> lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
>>>>>>>>> httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
>>>>>>>>> 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
>>>>>>>>> jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
>>>>>>>>> lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
>>>>>>>>> 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
>>>>>>>>> usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
>>>>>>>>> gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
>>>>>>>>> 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
>>>>>>>>> lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
>>>>>>>>> apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
>>>>>>>>> httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
>>>>>>>>> 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
>>>>>>>>> lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
>>>>>>>>> commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
>>>>>>>>> usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
>>>>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
>>>>>>>>> jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
>>>>>>>>> commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
>>>>>>>>> jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
>>>>>>>>> usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
>>>>>>>>> jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
>>>>>>>>> Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
>>>>>>>>> lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
>>>>>>>>> api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
>>>>>>>>> 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
>>>>>>>>> /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
>>>>>>>>> jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
>>>>>>>>> cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
>>>>>>>>> lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
>>>>>>>>> apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
>>>>>>>>> commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
>>>>>>>>> 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
>>>>>>>>> /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
>>>>>>>>> lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
>>>>>>>>> 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
>>>>>>>>> usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
>>>>>>>>> lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
>>>>>>>>> jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
>>>>>>>>> hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
>>>>>>>>> curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
>>>>>>>>> jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
>>>>>>>>> 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
>>>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
>>>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
>>>>>>>>> hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
>>>>>>>>> amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
>>>>>>>>> incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
>>>>>>>>> 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
>>>>>>>>> jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
>>>>>>>>> lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
>>>>>>>>> lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
>>>>>>>>> commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
>>>>>>>>> jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
>>>>>>>>> usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
>>>>>>>>> hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
>>>>>>>>> netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
>>>>>>>>> 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
>>>>>>>>> hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
>>>>>>>>> lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
>>>>>>>>> hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
>>>>>>>>> hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
>>>>>>>>> lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
>>>>>>>>> api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
>>>>>>>>> /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
>>>>>>>>> hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
>>>>>>>>> lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
>>>>>>>>> logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
>>>>>>>> hadoop-mapreduce/hadoop-
>>>>>>>>> mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
>>>>>>>>> mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
>>>>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
>>>>>>>> 8.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
>>>>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
>>>>>>>>> net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
>>>>>>>>> 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
>>>>>>>>> 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
>>>>>>>>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
>>>>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
>>>>>>>>> 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
>>>>>>>>> client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
>>>>>>>>> common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
>>>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
>>>>>>>>> jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
>>>>>>>>> archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
>>>>>>>>> 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
>>>>>>>>> kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
>>>>>>>>> /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
>>>>>>>>> /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-
>>>>>> mapreduce/commons-
>>>>>>>>> configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
>>>>>>>>> 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
>>>>>>>>> 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
>>>>>>>>> /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
>>>>>>>>> lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
>>>>>>>> lib/hadoop-mapreduce/lib/
>>>>>>>>> snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
>>>>>>>>> jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
>>>>>>>>> paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
>>>>>>>>> guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
>>>>>>>>> jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>>>> lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
>>>>>>>>> mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
>>>>>>>>> guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
>>>>>>>>> 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
>>>>>>>>> usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
>>>>>>>>> hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
>>>>>>>>> mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
>>>>>>>>> hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-
>>>>>> yarn-server-
>>>>>>>>> sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
>>>>>>>>> server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
>>>>>>>>> hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
>>>>>>>>> hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>>>> yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
>>>>>>>>> hadoop-yarn/hadoop-yarn-applications-distributedshell-
>>>>>>>>> 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
>>>>>>>>> server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-
>>>>> yarn-server-
>>>>>>>>> sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
>>>>>>>>> hadoop-yarn-server-applicationhistoryservice.jar:
>>>>>>>>> /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
>>>>>>>>> lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
>>>>>>>>> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
>>>>>>>>> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
>>>>>>>>> applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>>>> [classpath continues with the remaining Hadoop YARN jars under
>>>>>>>>> /usr/lib/hadoop-yarn and /usr/lib/hadoop-yarn/lib, the hadoop-lzo and
>>>>>>>>> EMRFS jars under /usr/share/aws/emr, and the full AWS Java SDK
>>>>>>>>> 1.11.160 jar set under /usr/share/aws/aws-java-sdk -- abridged]
>>>>>>>>> 
>>>>>>>>> Thank you
>>>>>>>>> 
>>>>>>>>> On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <trohrmann@apache.org> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> Hi Andy,
>>>>>>>>>> 
>>>>>>>>>> the CliFrontend is not executed via Yarn; thus, it is not affected by
>>>>>>>>>> dependencies which are added due to the underlying Yarn cluster.
>>>>>>>>>> Therefore, it would be helpful to look at the TaskManager logs. Either
>>>>>>>>>> you have enabled log aggregation on your Yarn cluster, then you can
>>>>>>>>>> obtain the logs via `yarn logs -applicationId <application ID>`, or you
>>>>>>>>>> have to retrieve them from the machines where they were running (either
>>>>>>>>>> by going directly there or via the Yarn web interface).
>>>>>>>>>> 
>>>>>>>>>> Cheers,
>>>>>>>>>> Till
>>>>>>>>>> 
>>>>>>>>>> On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Hi Till,
>>>>>>>>>>> 
>>>>>>>>>>> That is actually the classpath used by the flink bash script (which
>>>>>>>>>>> launches the jar using the java command). I changed the exec to an
>>>>>>>>>>> echo and grabbed that for the CLI arguments.
>>>>>>>>>>> 
>>>>>>>>>>> I believe this is the classpath from the log file (although it might
>>>>>>>>>>> not be the TaskManager log; is that any different from what would be
>>>>>>>>>>> in my flink-1.3.2/log folder?):
>>>>>>>>>>> 
>>>>>>>>>>> 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend                -  Classpath:
>>>>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
>>>>>>>>>>> 
>>>>>>>>>>> If that doesn't seem right, could you point me in the right direction
>>>>>>>>>>> as to where the TaskManager logs would be? I would be happy to grab
>>>>>>>>>>> the information you're looking for.
>>>>>>>>>>> 
>>>>>>>>>>> Thank you
>>>>>>>>>>> 
>>>>>>>>>>> On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <trohrmann@apache.org> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi Andy,
>>>>>>>>>>>> 
>>>>>>>>>>>> this looks to me indeed like a dependency problem. I assume that EMR
>>>>>>>>>>>> or something else is pulling in an incompatible version of Hadoop.
>>>>>>>>>>>> 
>>>>>>>>>>>> The classpath you've posted, is this the one logged in the log files
>>>>>>>>>>>> (TaskManager log) or did you compile it yourself? In the latter case,
>>>>>>>>>>>> it would also be helpful to get access to the TaskManager logs.
>>>>>>>>>>>> 
>>>>>>>>>>>> Cheers,
>>>>>>>>>>>> Till
>>>>>>>>>>>> 
>>>>>>>>>>>> On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <ajm2444@gmail.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Fabian,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 1) I have looked at the linked docs, and from what I can tell no
>>>>>>>>>>>>> setup should really be needed to get Flink working (other than
>>>>>>>>>>>>> downloading the correct binaries, which I believe I did).
>>>>>>>>>>>>> 2) I have downloaded the Flink 1.3.2 binaries
>>>>>>>>>>>>> (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
>>>>>>>>>>>>> <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
>>>>>>>>>>>>> This is for Hadoop 2.7.x, which matches EMR 5.8.0.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I appreciate any help or guidance you can provide in fixing my
>>>>>>>>>>>>> problems; please let me know if there is anything else I can provide.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thank you
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <fhueske@gmail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Andy,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I'm not an AWS expert, so I'll just check on some common issues.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I guess you already had a look at the Flink docs for AWS/EMR, but
>>>>>>>>>>>>>> I'll post the link just to be sure [1].
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1),
>>>>>>>>>>>>>> did you build Flink yourself or did you download the binaries?
>>>>>>>>>>>>>> Does the Hadoop version of the Flink build match the Hadoop version
>>>>>>>>>>>>>> of EMR 5.8.0, i.e., Hadoop 2.7.x?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Best, Fabian
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 2017-10-02 21:51 GMT+02:00 Andy M. <ajm2444@gmail.com>:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi Fabian,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Sorry, I just realized I forgot to include that part. The error
>>>>>>>>>>>>>>> returned is:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
>>>>>>>>>>>>>>>   at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
>>>>>>>>>>>>>>>   at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
>>>>>>>>>>>>>>>   at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
>>>>>>>>>>>>>>>   at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
>>>>>>>>>>>>>>>   at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
>>>>>>>>>>>>>>>   at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
>>>>>>>>>>>>>>>   at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
>>>>>>>>>>>>>>>   at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> I believe it has something to do with the classpath, but I am
>>>>>>>>>>>>>>> unsure why or how to fix it. The classpath being used during the
>>>>>>>>>>>>>>> execution is:
>>>>>>>>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
>>>>>>>>>>>>>>> addResource function does seem to be there.
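[Editor's note: rather than decompiling, one can also ask the running JVM which jar a class was actually loaded from, which is often the quickest way to spot this kind of conflict. A minimal JDK-only sketch; the helper is generic, and the Hadoop class name in the comment is simply the one from this thread:]

```java
import java.security.CodeSource;

public class ClassOrigin {
    // Returns the jar or directory a class was loaded from, or "bootstrap"
    // for classes that have no CodeSource (e.g. core JDK classes).
    static String originOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // On the cluster, point this at the conflicting class instead, e.g.
        // originOf(Class.forName("org.apache.hadoop.conf.Configuration"))
        System.out.println(originOf(ClassOrigin.class));
    }
}
```

Running this inside the job (or a small test main launched the same way) prints the winning jar, which can then be compared against the expected Hadoop version.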
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thank you
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <fhueske@gmail.com> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Hi Andy,
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> can you describe in more detail what exactly isn't working?
>>>>>>>>>>>>>>>> Do you see error messages in the log files or on the console?
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Thanks, Fabian
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 2017-10-02 15:52 GMT+02:00 Andy M. <ajm2444@gmail.com>:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Hello,
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I am about to deploy my first Flink project to production, but I
>>>>>>>>>>>>>>>>> am running into a very big hurdle. I am unable to launch my
>>>>>>>>>>>>>>>>> project so it can write to an S3 bucket. My project is running on
>>>>>>>>>>>>>>>>> an EMR cluster, where I have installed Flink 1.3.2. I am using
>>>>>>>>>>>>>>>>> Yarn to launch the application, and it seems to run fine unless I
>>>>>>>>>>>>>>>>> am trying to enable checkpointing (with an S3 target). I am
>>>>>>>>>>>>>>>>> looking to use RocksDB as my checkpointing backend. I have asked
>>>>>>>>>>>>>>>>> a few places, and I am still unable to find a solution to this
>>>>>>>>>>>>>>>>> problem. Here are my steps for creating a cluster and launching
>>>>>>>>>>>>>>>>> my application; perhaps I am missing a step. I'd be happy to
>>>>>>>>>>>>>>>>> provide any additional information if needed.
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> AWS Portal:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>   1) EMR -> Create Cluster
>>>>>>>>>>>>>>>>>   2) Advanced Options
>>>>>>>>>>>>>>>>>   3) Release = emr-5.8.0
>>>>>>>>>>>>>>>>>   4) Only select Hadoop 2.7.3
>>>>>>>>>>>>>>>>>   5) Next -> Next -> Next -> Create Cluster (I do fill out
>>>>>>>>>>>>>>>>>      names/keys/etc)
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> Once the cluster is up I ssh into the Master and do the following:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>   1  wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
>>>>>>>>>>>>>>>>>   2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
>>>>>>>>>>>>>>>>>   3  cd flink-1.3.2
>>>>>>>>>>>>>>>>>   4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
>>>>>>>>>>>>>>>>>   5  Change conf/flink-conf.yaml
>>>>>>>>>>>>>>>>>   6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> In my conf/flink-conf.yaml I add the following fields:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>   state.backend: rocksdb
>>>>>>>>>>>>>>>>>   state.backend.fs.checkpointdir: s3:/bucket/location
>>>>>>>>>>>>>>>>>   state.checkpoints.dir: s3:/bucket/location
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> My program's checkpointing setup:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>   env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
>>>>>>>>>>>>>>>>>   env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
>>>>>>>>>>>>>>>>>   env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
>>>>>>>>>>>>>>>>>   env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
>>>>>>>>>>>>>>>>>   env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
>>>>>>>>>>>>>>>>>   env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 
>> 


Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hi Aljoscha,

That didn't seem to do the trick either.  Does the following look correct?

I see EMR 5.9.0 is released with Flink 1.3.2, so I tried that and got the
same problem. All I did was upload my Scala .jar to the master, update my
flink-conf.yaml, set my env variables, and run it with the following
command: HADOOP_CONF_DIR=/etc/hadoop/conf flink run -m yarn-cluster -yn 5
-ys 2 ~/flink-consumer.jar

I am beginning to think something else may be wrong with my configuration.
Does the following look correct?

In flink-conf.yaml:
state.backend: rocksdb
state.backend.fs.checkpointdir: s3://org/flink-project/state
state.checkpoints.dir: s3://org/flink-project/state

In the code:
env.enableCheckpointing(getCheckpointRate,CheckpointingMode.EXACTLY_ONCE)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(1000)
env.getCheckpointConfig.setCheckpointTimeout(5000)
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
env.setStateBackend(new RocksDBStateBackend("s3://org/flink-project/state",
true))
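[Editor's note: one detail worth double-checking alongside configs like the above: the thread's opening message used `state.backend.fs.checkpointdir: s3:/bucket/location` with a single slash, while the later form is `s3://org/flink-project/state`. With a single slash the "bucket" is not parsed as a URI authority, which a file system implementation may then misinterpret. A JDK-only sketch of the difference; the URIs are illustrative:]

```java
import java.net.URI;

public class CheckpointUriCheck {
    // An s3 URI needs "//" after the scheme for the bucket to be parsed as
    // the authority; "s3:/bucket/path" puts everything into the path instead.
    static boolean hasBucketAuthority(String s) {
        URI u = URI.create(s);
        return "s3".equals(u.getScheme()) && u.getAuthority() != null;
    }

    public static void main(String[] args) {
        System.out.println(hasBucketAuthority("s3:/bucket/location"));  // false
        System.out.println(hasBucketAuthority("s3://bucket/location")); // true
    }
}
```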

Thank you



On Wed, Oct 11, 2017 at 5:19 AM, Aljoscha Krettek <al...@apache.org> wrote:

> Hi Andy,
>
> I remember that I was testing a job with almost exactly the same setup as
> part of the Flink 1.3.2 release testing. The command I used to start my job
> is roughly this:
>
> HADOOP_CONF_DIR=/etc/hadoop/conf bin/flink run -c my.main.Class -m
> yarn-cluster -yn 5 -ys 2 ...
>
> i.e. I export the proper hadoop config dir and I run a per-job YARN
> cluster. I think I also exported the result of "hadoop classpath" as
> HADOOP_CLASSPATH.
>
> Best,
> Aljoscha
>
> > On 10. Oct 2017, at 16:43, Andy M. <aj...@gmail.com> wrote:
> >
> > Hello,
> >
> > Bowen: Unless I am missing something, it says no setup is needed on EMR;
> > each topic says: "You don’t have to configure this manually if you are
> > running Flink on EMR." S3 access from the CLI works fine on my clusters.
> >
> > Chen: Thank you for this. I will look into it if I am unable to get this
> > running on YARN successfully.
> >
> > Stephan: Removing said library causes the flink (flink-1.3.2/bin/flink)
> > bash script to fail; the underlying java command needs it to work. I tried
> > explicitly setting the classpath for the java call as well, pointing it at
> > the hadoop library jars. This is the original java command I was trying to
> > run:
> >
> > java
> > -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
> > -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
> > -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
> > -classpath
> > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
> > /home/hadoop/flink-consumer.jar
> >
> >
> > This is what I changed it to (removing the shaded-hadoop2-uber jar and
> > adding in the hadoop folder jars):
> >
> > java
> > -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
> > -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
> > -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
> > -classpath
> > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar:[every jar under /usr/lib/hadoop/lib -- abridged]:/etc/hadoop/conf
> > org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
> > /home/hadoop/flink-consumer.jar
> >
> > The later throws the following error:
> >
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> > [jar:file:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> > [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/VersionInfo
> >        at org.apache.flink.runtime.util.EnvironmentInformation.logEnvironmentInfo(EnvironmentInformation.java:283)
> >        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1124)
> > Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.VersionInfo
> >        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >        ... 2 more
> >
> > Simply removing the .jar from the folder causes the same error.
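[Editor's note: the SLF4J warning above is itself a symptom of the same underlying problem: two versions of one artifact on the classpath. A small helper like the following can scan a classpath string for artifacts that appear more than once. The class name and the name-version split heuristic are my own illustration, not part of Flink or Hadoop:]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ClasspathDupes {
    // Matches "<artifact>-<version>.jar" at the end of a classpath entry.
    private static final Pattern JAR =
            Pattern.compile("([A-Za-z0-9_.-]+?)-(\\d[\\w.-]*)\\.jar$");

    // Groups classpath entries by artifact name so duplicates such as
    // slf4j-log4j12-1.7.7.jar vs slf4j-log4j12-1.7.10.jar stand out.
    static Map<String, List<String>> duplicates(String classpath) {
        Map<String, List<String>> byArtifact = new TreeMap<>();
        for (String entry : classpath.split(":")) {
            Matcher m = JAR.matcher(entry);
            if (m.find()) {
                byArtifact.computeIfAbsent(m.group(1), k -> new ArrayList<>()).add(entry);
            }
        }
        byArtifact.values().removeIf(v -> v.size() < 2);
        return byArtifact;
    }

    public static void main(String[] args) {
        String cp = "/flink/lib/slf4j-log4j12-1.7.7.jar:"
                  + "/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar";
        System.out.println(duplicates(cp).keySet()); // prints [slf4j-log4j12]
    }
}
```

Feeding it the full `-classpath` value from the java command above would list every artifact present in more than one version.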
> >
> > Thank you
> >
> >
> >
> >
> > On Mon, Oct 9, 2017 at 5:46 AM, Stephan Ewen <se...@apache.org> wrote:
> >
> >> Hi!
> >>
> >> It looks like multiple Hadoop versions are in the classpath. Flink's
> hadoop
> >> jar and the EMR Hadoop jars.
> >> I would simply drop Flink's own Hadoop dependency and only use the EMR
> >> Hadoop jars.
> >>
> >> Delete the 'flink-shaded-hadoop2-uber' jar from Flink, and make sure the
> >> setup is such that the Hadoop lib environment variable is set. Then it
> >> should not have conflicts any more.
> >>
> >>
> >>
> >> On Sun, Oct 8, 2017 at 12:08 AM, Chen Qin <qi...@gmail.com> wrote:
> >>
> >>> Attached is my side project, verified working, which deploys the
> >>> jobmanager and taskmanager as stateless services (non yarn/mesos);
> >>> configuration here:
> >>>
> >>> https://github.com/chenqin/flink-jar/tree/master/config/hadoop
> >>>
> >>> More detail here:
> >>> https://github.com/chenqin/flink-jar/blob/master/src/main/java/FlinkBootstrap.java#L49
> >>>
> >>>> On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com> wrote:
> >>>
> >>>> Hi Andy,
> >>>>
> >>>> I believe it's because you didn't set your s3 impl correctly. Try to
> >>>> set your core-site.xml by following
> >>>> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3afilesystem-recommended
> >>>>
> >>>> Bowen
> >>>>
> >>>> On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
> >>>>
> >>>>> Hi Till,
> >>>>>
> >>>>> Seems like everything is in line there.  hadoop-common.jar ->
> >>>>> hadoop-common-2.7.3-amzn-3.jar
> >>>>>
> >>>>> And when I decompiled that jar I see public void
> >>>>> addResource(Configuration conf) in org/apache/hadoop/conf/Configuration.java.
> >>>>>
> >>>>> I agree that an incorrect version of the jar is probably being run.
> >>>>> Is there a way to limit the classpath for the TaskManager when
> >>>>> starting the job?
> >>>>>
> >>>>> Thank you
> >>>>>
> >>>>> On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org> wrote:
> >>>>>
> >>>>>> Hi Andy,
> >>>>>>
> >>>>>> could you check which Hadoop version this jar
> >>>>>> /usr/lib/hadoop/hadoop-common.jar is? Maybe also checking whether
> >>> the
> >>>>>> contained hadoop Configuration class has the method
> >>>>>> Configuration.addResource(Lorg/apache/hadoop/conf/
> >> Configuration;)V.
> >>>>> Maybe
> >>>>>> this jar is the culprit because it comes from a different Hadoop
> >>>> version.
> >>>>>>
> >>>>>> Cheers,
> >>>>>> Till
> >>>>>>
> >>>>>>
> >>>>>> On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
> >>>>>>
> >>>>>>> Hi Till,
> >>>>>>>
> >>>>>>> I believe this is what you are looking for, classpath is much
> >>> bigger
> >>>>> for
> >>>>>>> the task manager.  I can also post the whole log file if needed:
> >>>>>>>
> >>>>>>> 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
> >>>>>> YarnTaskManagerRunner
> >>>>>>>                 -  Classpath:
> >>>>>>> flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> >>>>>>> python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> >>>>>>> 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> >>>>>>> log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> >>>>>>> etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> >>>>>>> amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> >>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> >>>>>>> nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> >>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> >>>>>>> streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> >>>>>>> usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> >>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> >>>>>>> hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> >>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> >>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> >>>>>>> jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> >>>>>>> hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> >>>>>>> datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> >>>>>>> archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> >>>>>>> hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> >>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> >>>>>>> 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> >>>>>>> jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> >>>>>>> lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> >>>>>>> httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> >>>>>>> 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> >>>>>>> jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> >>>>>>> lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> >>>>>>> 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> >>>>>>> usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> >>>>>>> gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> >>>>>>> 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> >>>>>>> lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> >>>>>>> apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> >>>>>>> httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> >>>>>>> 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> >>>>>>> lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> >>>>>>> commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> >>>>>>> usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> >>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> >>>>>>> jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> >>>>>>> commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> >>>>>>> jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> >>>>>>> usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> >>>>>>> jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> >>>>>>> Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> >>>>>>> lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> >>>>>>> api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> >>>>>>> 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> >>>>>>> /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> >>>>>>> jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> >>>>>>> cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> >>>>>>> lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> >>>>>>> apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> >>>>>>> commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> >>>>>>> 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> >>>>>>> /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> >>>>>>> lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> >>>>>>> 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> >>>>>>> usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> >>>>>>> lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> >>>>>>> jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> >>>>>>> hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> >>>>>>> curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> >>>>>>> jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> >>>>>>> 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> >>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> >>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> >>>>>>> hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> >>>>>>> amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> >>>>>>> incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> >>>>>>> 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> >>>>>>> jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> >>>>>>> lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> >>>>>>> lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> >>>>>>> commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> >>>>>>> jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> >>>>>>> usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> >>>>>>> hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> >>>>>>> netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> >>>>>>> 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> >>>>>>> hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> >>>>>>> lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> >>>>>>> hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> >>>>>>> hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> >>>>>>> lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> >>>>>>> api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> >>>>>>> /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> >>>>>>> hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> >>>>>>> lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> >>>>>>> logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> >>>>>> hadoop-mapreduce/hadoop-
> >>>>>>> mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> >>>>>>> mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> >>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> >>>>>> 8.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> >>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> >>>>>>> net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> >>>>>>> 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> >>>>>>> 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> >>>>>>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> >>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> >>>>>>> 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> >>>>>>> client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> >>>>>>> common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> >>>>>>> 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> >>>>>>> jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> >>>>>>> archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> >>>>>>> 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> >>>>>>> kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> >>>>>>> /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> >>>>>>> /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-
> >>>> mapreduce/commons-
> >>>>>>> configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> >>>>>>> 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> >>>>>>> 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> >>>>>>> /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> >>>>>>> lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> >>>>>> lib/hadoop-mapreduce/lib/
> >>>>>>> snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> >>>>>>> jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> >>>>>>> paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> >>>>>>> guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> >>>>>>> jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> >>>>>>> lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> >>>>>>> mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> >>>>>>> guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> >>>>>>> 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> >>>>>>> usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> >>>>>>> hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> >>>>>>> mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> >>>>>>> hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-
> >>>> yarn-server-
> >>>>>>> sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> >>>>>>> server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> >>>>>>> hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> >>>>>>> hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> >>>>>>> hadoop-yarn/hadoop-yarn-applications-distributedshell-
> >>>>>>> 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> >>>>>>> server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-
> >>> yarn-server-
> >>>>>>> sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> >>>>>>> hadoop-yarn-server-applicationhistoryservice.jar:
> >>>>>>> /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> >>>>>>> lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> >>>>>>> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> >>>>>>> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> >>>>>>> applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> >>>>>>> lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> >>>>>>> yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> >>>>>>> jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> >>>>>>> unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> >>>>>>> yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> >>>>>>> hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> >>>>>>> common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> >>>>>>> proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> >>>>>>> nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> >>>>>>> jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> >>>>>>> unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> >>>>>>> yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> >>>>>>> api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> >>>>>>> jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> >>>>>>> lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> >>>>>>> hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> >>>>>>> lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> >>>>>>> 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> >>>>>>> usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> >>>>>>> hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> >>>>>>> lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> >>>>>>> client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> >>>>>>> jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> >>>>>>> usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> >>>>>>> yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> >>>>>>> netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> >>>>>>> aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> >>>>>>> usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> >>>>>>> usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> >>>>>>> /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> >>>>>>> lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> >>>>>>> hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> >>>>>>> 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> >>>>>>> jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> >>>>>>> hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> >>>>>>> lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> >>>>>>> collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> >>>>>>> 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> >>>>>>> usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> >>>>>>> hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> >>>>>>> lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> >>>>>>> jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> >>>>>>> logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> >>>>>>> /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> >>>>>>> share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> >>>>>>> jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> >>>>>>> bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> >>>>>>> over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> >>>>>>> java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> >>>>>>> 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> >>>>>>> usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> >>>>>>> share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> >>>>>>> share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> >>>>>>> usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> >>>>>>> lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> >>>>>>> share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> >>>>>>> aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> >>>>>>> aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> >>>>>>> usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> >>>>>>> gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>> aws-java-sdk-
> >>>>>>> elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-
> >> mechanicalturkrequester-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> >>>>>>> sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> >>>>>>> sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>> stepfunctions-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> >>>>>> sdk/aws-java-sdk-
> >>>>>>> cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>> elastictranscoder-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>> organizations-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> >>>>>> sdk/aws-java-sdk-
> >>>>>>> applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> >>>>>>> aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> >>>>>>> /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> >>>>>>> /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-
> >> resourcegroupstaggingapi-1.11.
> >>>>>>> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> >>>>>>> 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> >>>>>>> java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> >>>>>>> usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> >>>>>>> /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> >>>>>>> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> >>>>>>> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> >>>>>>> java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> >>>>>>> aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> >>>>>>> share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> >>>>>>> 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> >>>>>>> elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> >>>>>>> aws-java-sdk-iot-1.11.160.jar
> >>>>>>>
> >>>>>>> Thank you
> >>>>>>>
> >>>>>>> On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <
> >>> trohrmann@apache.org>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> Hi Andy,
> >>>>>>>>
> >>>>>>>> the CliFrontend is not executed via Yarn, thus, it is not
> >>> affected
> >>>> by
> >>>>>>>> dependencies which are added due to the underlying Yarn
> >> cluster.
> >>>>>>> Therefore,
> >>>>>>>> it would be helpful to look at the TaskManager logs. Either you
> >>>> have
> >>>>>>>> enabled log aggregation on your Yarn cluster, then you can
> >> obtain
> >>>> the
> >>>>>>> logs
> >>>>>>>> via `yarn logs -applicationId <application ID>` or you have to
> >>>>> retrieve
> >>>>>>>> them from the machines where they were running (either by going
> >>>>>> directly
> >>>>>>>> there or via the Yarn web interface).
> >>>>>>>>
> >>>>>>>> Cheers,
> >>>>>>>> Till
> >>>>>>>>
> >>>>>>>> On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com>
> >>> wrote:
> >>>>>>>>
> >>>>>>>>> Hi Till,
> >>>>>>>>>
> >>>>>>>>> That is actually the classpath used by the flink bash
> >>> script (that
> >>>>>>>> launches
> >>>>>>>>> the jar using the java command).  I changed the execute to an
> >>>> echo,
> >>>>>> and
> >>>>>>>>> grabbed that for the CLI arguments.
> >>>>>>>>>
> >>>>>>>>> I believe this is the class path from the log file(although
> >> it
> >>>>> might
> >>>>>>> not
> >>>>>>>> be
> >>>>>>>>> the taskmanager log, is that any different from what would be
> >>> in
> >>>> my
> >>>>>>>>> flink-1.3.2/log folder?):
> >>>>>>>>>
> >>>>>>>>> 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.
> >>>> CliFrontend
> >>>>>>>>>                 -  Classpath:
> >>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> >>>>>>>>> home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> >>>>>>>>> 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> >>>>>>>>> hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> >>>>>>>>> hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/
> >>>>> hadoop/conf:
> >>>>>>>>>
> >>>>>>>>> If that doesn't seem right, and you can point me in the right
> >>>>>> direction
> >>>>>>>> as
> >>>>>>>>> to where the TaskManager logs would be, I would be happy to
> >>> grab
> >>>>> the
> >>>>>>>>> information you're looking for.
> >>>>>>>>>
> >>>>>>>>> Thank you
> >>>>>>>>>
> >>>>>>>>> On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <
> >>>>> trohrmann@apache.org>
> >>>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>> Hi Andy,
> >>>>>>>>>>
> >>>>>>>>>> this looks to me indeed like a dependency problem. I assume
> >>>> that
> >>>>>> EMR
> >>>>>>> or
> >>>>>>>>>> something else is pulling in an incompatible version of
> >>> Hadoop.
> >>>>>>>>>>
> >>>>>>>>>> The classpath you've posted, is this the one logged in the
> >>> log
> >>>>>> files
> >>>>>>>>>> (TaskManager log) or did you compile it yourself? In the
> >>> latter
> >>>>>> case,
> >>>>>>>> it
> >>>>>>>>>> would also be helpful to get access to the TaskManager
> >> logs.
> >>>>>>>>>>
> >>>>>>>>>> Cheers,
> >>>>>>>>>> Till
> >>>>>>>>>>
> >>>>>>>>>> On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <
> >> ajm2444@gmail.com>
> >>>>>> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi Fabian,
> >>>>>>>>>>>
> >>>>>>>>>>> 1) I have looked at the linked docs, and from what I can
> >>> tell
> >>>>> no
> >>>>>>>> setup
> >>>>>>>>>>> should really need to be done to get Flink working(Other
> >>> than
> >>>>>>>>> downloading
> >>>>>>>>>>> the correct binaries, which I believe I did)
> >>>>>>>>>>> 2) I have downloaded the Flink 1.3.2
> >>>> binaries(flink-1.3.2-bin-
> >>>>>>>>>>> hadoop27-scala_2.11.tgz
> >>>>>>>>>>> <http://apache.claz.org/flink/
> >> flink-1.3.2/flink-1.3.2-bin-
> >>>>>>>>>>> hadoop27-scala_2.11.tgz>)
> >>>>>>>>>>> This is for hadoop 2.7.X, which matches EMR 5.8.0.
> >>>>>>>>>>>
> >>>>>>>>>>> I appreciate any help or guidance you can provide me in
> >>>> fixing
> >>>>> my
> >>>>>>>>>> problems,
> >>>>>>>>>>> please let me know if there is anything else I can
> >> provide
> >>>> you.
> >>>>>>>>>>>
> >>>>>>>>>>> Thank you
> >>>>>>>>>>>
> >>>>>>>>>>> On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <
> >>>>> fhueske@gmail.com
> >>>>>>>
> >>>>>>>>> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> Hi Andy,
> >>>>>>>>>>>>
> >>>>>>>>>>>> I'm not an AWS expert, so I'll just check on some
> >> common
> >>>>>> issues.
> >>>>>>>>>>>>
> >>>>>>>>>>>> I guess you already had a look at the Flink docs for
> >>>> AWS/EMR
> >>>>>> but
> >>>>>>>> I'll
> >>>>>>>>>>> post
> >>>>>>>>>>>> the link just be to sure [1].
> >>>>>>>>>>>>
> >>>>>>>>>>>> Since you are using Flink 1.3.2 (EMR 5.8.0 comes with
> >>> Flink
> >>>>>>> 1.3.1)
> >>>>>>>>> did
> >>>>>>>>>>> you
> >>>>>>>>>>>> build Flink yourself or did you download the binaries?
> >>>>>>>>>>>> Does the Hadoop version of the Flink build match the
> >>> Hadoop
> >>>>>>> version
> >>>>>>>>> of
> >>>>>>>>>>> EMR
> >>>>>>>>>>>> 5.8.0, i.e., Hadoop 2.7.x?
> >>>>>>>>>>>>
> >>>>>>>>>>>> Best, Fabian
> >>>>>>>>>>>>
> >>>>>>>>>>>> [1]
> >>>>>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-
> >>>>>>>>>>> release-1.3/setup/aws.html
> >>>>>>>>>>>>
> >>>>>>>>>>>> 2017-10-02 21:51 GMT+02:00 Andy M. <ajm2444@gmail.com
> >>> :
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Hi Fabian,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Sorry, I just realized I forgot to include that part.
> >>>> The
> >>>>>>> error
> >>>>>>>>>>> returned
> >>>>>>>>>>>>> is:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> java.lang.NoSuchMethodError:
> >>>>>>>>>>>>> org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> >>>>>>>>>>>>>    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> >>>>>>>>>>>>>    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> >>>>>>>>>>>>>    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> >>>>>>>>>>>>>    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> >>>>>>>>>>>>>    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> >>>>>>>>>>>>>    at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> >>>>>>>>>>>>>    at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> >>>>>>>>>>>>>    at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I believe it has something to do with the classpath, but I am
> >>>>>>>>>>>>> unsure why or how to fix it.  The classpath being used during
> >>>>>>>>>>>>> the execution is:
> >>>>>>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
> >>>>>>>>>>>>> addResource function does seem to be there.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Thank you
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> >>>>>>> fhueske@gmail.com
> >>>>>>>>>
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Hi Andy,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> can you describe in more detail what exactly isn't
> >>>>> working?
> >>>>>>>>>>>>>> Do you see error messages in the log files or on
> >> the
> >>>>>> console?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks, Fabian
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 2017-10-02 15:52 GMT+02:00 Andy M. <
> >>> ajm2444@gmail.com
> >>>>> :
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Hello,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I am about to deploy my first Flink projects  to
> >>>>>>> production,
> >>>>>>>>> but
> >>>>>>>>>> I
> >>>>>>>>>>> am
> >>>>>>>>>>>>>>> running into a very big hurdle.  I am unable to
> >>>> launch
> >>>>> my
> >>>>>>>>> project
> >>>>>>>>>>> so
> >>>>>>>>>>>> it
> >>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>> write to an S3 bucket.  My project is running on
> >> an
> >>>> EMR
> >>>>>>>>> cluster,
> >>>>>>>>>>>> where
> >>>>>>>>>>>>> I
> >>>>>>>>>>>>>>> have installed Flink 1.3.2.  I am using Yarn to
> >>>> launch
> >>>>>> the
> >>>>>>>>>>>> application,
> >>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>> it seems to run fine unless I am trying to enable
> >>>> check
> >>>>>>>>>>>> pointing(with a
> >>>>>>>>>>>>>> S3
> >>>>>>>>>>>>>>> target).  I am looking to use RocksDB as my
> >>>>>> check-pointing
> >>>>>>>>>> backend.
> >>>>>>>>>>>> I
> >>>>>>>>>>>>>> have
> >>>>>>>>>>>>>>> asked a few places, and I am still unable to
> >> find a
> >>>>>>> solution
> >>>>>>>> to
> >>>>>>>>>>> this
> >>>>>>>>>>>>>>> problem.  Here are my steps for creating a
> >> cluster,
> >>>> and
> >>>>>>>>> launching
> >>>>>>>>>>> my
> >>>>>>>>>>>>>>> application, perhaps I am missing a step.  I'd be
> >>>> happy
> >>>>>> to
> >>>>>>>>>> provide
> >>>>>>>>>>>> any
> >>>>>>>>>>>>>>> additional information if needed.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> AWS Portal:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>    1) EMR -> Create Cluster
> >>>>>>>>>>>>>>>    2) Advanced Options
> >>>>>>>>>>>>>>>    3) Release = emr-5.8.0
> >>>>>>>>>>>>>>>    4) Only select Hadoop 2.7.3
> >>>>>>>>>>>>>>>    5) Next -> Next -> Next -> Create Cluster ( I
> >>> do
> >>>>> fill
> >>>>>>> out
> >>>>>>>>>>>>>>> names/keys/etc)
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Once the cluster is up I ssh into the Master and
> >> do
> >>>> the
> >>>>>>>>>> following:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>    1  wget
> >>>>>>>>>>>>>>> http://apache.claz.org/flink/
> >>>>>> flink-1.3.2/flink-1.3.2-bin-
> >>>>>>>>>>>>>>> hadoop27-scala_2.11.tgz
> >>>>>>>>>>>>>>>    2  tar -xzf flink-1.3.2-bin-hadoop27-
> >>>>> scala_2.11.tgz
> >>>>>>>>>>>>>>>    3  cd flink-1.3.2
> >>>>>>>>>>>>>>>    4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4
> >> -d
> >>>>>>>>>>>>>>>    5  Change conf/flink-conf.yaml
> >>>>>>>>>>>>>>>    6  ./bin/flink run -m yarn-cluster -yn 1
> >>>>>>>>> ~/flink-consumer.jar
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> My conf/flink-conf.yaml I add the following
> >> fields:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>    state.backend: rocksdb
> >>>>>>>>>>>>>>>    state.backend.fs.checkpointdir:
> >>>>> s3:/bucket/location
> >>>>>>>>>>>>>>>    state.checkpoints.dir: s3:/bucket/location
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> My program's checkpointing setup:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> env.enableCheckpointing(getCheckpointRate,
> >>>>>>>>>>> CheckpointingMode.EXACTLY_
> >>>>>>>>>>>>>> ONCE)
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> env.getCheckpointConfig.
> >>>> enableExternalizedCheckpoints(
> >>>>>>>>>>>>>>> ExternalizedCheckpointCleanup.
> >>>> RETAIN_ON_CANCELLATION)
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> env.getCheckpointConfig.
> >>>> setMinPauseBetweenCheckpoints(
> >>>>>>>>>>>>>>> getCheckpointMinPause)
> >>>>>>>>>>>>>>>    env.getCheckpointConfig.
> >> setCheckpointTimeout(
> >>>>>>>>>>>> getCheckpointTimeout)
> >>>>>>>>>>>>>>>    env.getCheckpointConfig.
> >>>>>> setMaxConcurrentCheckpoints(1)
> >>>>>>>>>>>>>>>    env.setStateBackend(new
> >>>> RocksDBStateBackend("s3://
> >>>>>>>>>>>>> bucket/location",
> >>>>>>>>>>>>>>> true))
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
>
>

Re: Unable to write snapshots to S3 on EMR

Posted by Aljoscha Krettek <al...@apache.org>.
Hi Andy,

I remember that I was testing a job with almost exactly the same setup as part of the Flink 1.3.2 release testing. The command I used to start my job is roughly this:

HADOOP_CONF_DIR=/etc/hadoop/conf bin/flink run -c my.main.Class -m yarn-cluster -yn 5 -ys 2 ...

i.e. I export the proper hadoop config dir and I run a per-job YARN cluster. I think I also exported the result of "hadoop classpath" as HADOOP_CLASSPATH.
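[As a sketch, the recipe described above looks roughly like the following; the main class and job jar names are placeholders, not values from this thread:]

```shell
# Launch a per-job YARN cluster with the cluster's own Hadoop on the classpath.
# my.main.Class and my-job.jar are illustrative placeholders.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH=$(hadoop classpath)   # pick up EMR's Hadoop jars
bin/flink run -c my.main.Class -m yarn-cluster -yn 5 -ys 2 my-job.jar
```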

Best,
Aljoscha

> On 10. Oct 2017, at 16:43, Andy M. <aj...@gmail.com> wrote:
> 
> Hello,
> 
> Bowen:  Unless I am missing something, the docs say no setup should be
> needed on EMR.  Each topic says: "You don’t have to configure this manually
> if you are running Flink on EMR."  S3 access from the CLI works fine on my
> clusters.
> 
> Chen: Thank you for this, I will look into this if I am unable to get this
> running on YARN successfully.
> 
> Stephan:  Removing that library causes the flink
> (flink-1.3.2/bin/flink) bash script to fail; the underlying java command
> needs it.  I also tried explicitly setting the classpath for the java call
> to point to the hadoop library jars.  This is the original java
> command that I was trying to run:
> 
> java
> -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
> -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
> -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
> -classpath
> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
> /home/hadoop/flink-consumer.jar
> 
> 
> This is what I changed it to (removing the shaded-hadoop2-uber jar and
> adding in the hadoop folder's jars):
> 
> java
> -Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
> -Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
> -Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
> -classpath
> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoo
p/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/httpcore-4.4.4.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/etc/hadoop/conf
> org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
> /home/hadoop/flink-consumer.jar
> 
> The latter throws the following error:
> 
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/util/VersionInfo
>        at
> org.apache.flink.runtime.util.EnvironmentInformation.logEnvironmentInfo(EnvironmentInformation.java:283)
>        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1124)
> Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.util.VersionInfo
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>        ... 2 more
> 
> Simply removing the .jar from the folder causes the same error.
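[One way to track down a NoClassDefFoundError like the one above is to scan the jars on the box for the missing class. A diagnostic sketch, using the EMR directories shown in the classpaths in this thread:]

```shell
# Scan EMR's Hadoop lib directories for the class the JVM cannot find.
# Adjust the directories and class path to match your cluster.
for j in /usr/lib/hadoop/*.jar /usr/lib/hadoop/lib/*.jar; do
  if unzip -l "$j" 2>/dev/null | grep -q 'org/apache/hadoop/util/VersionInfo.class'; then
    echo "found in: $j"
  fi
done
```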
> 
> Thank you
> 
> 
> 
> 
> On Mon, Oct 9, 2017 at 5:46 AM, Stephan Ewen <se...@apache.org> wrote:
> 
>> Hi!
>> 
>> It looks like multiple Hadoop versions are in the classpath. Flink's hadoop
>> jar and the EMR Hadoop jars.
>> I would simply drop Flink's own Hadoop dependency and only use the EMR
>> Hadoop jars.
>> 
>> Delete the 'flink-shaded-hadoop2-uber' jar from Flink, and make sure the
>> setup is such that the Hadoop lib environment variable is set. Then it
>> should not have conflicts any more.
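[A sketch of that suggestion, with paths taken from the layout shown earlier in the thread; the /tmp destination is just an example:]

```shell
# Remove Flink's bundled Hadoop so only EMR's Hadoop is on the classpath,
# then point Flink at the cluster's Hadoop config and jars.
mv /home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar /tmp/
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH=$(hadoop classpath)
```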
>> 
>> 
>> 
>> On Sun, Oct 8, 2017 at 12:08 AM, Chen Qin <qi...@gmail.com> wrote:
>> 
>>> Attached is my side project, verified working, which deploys the
>>> jobmanager and taskmanager as stateless services (non YARN/Mesos);
>>> configuration here:
>>> 
>>> https://github.com/chenqin/flink-jar/tree/master/config/hadoop
>>> 
>>> more detail here
>>> https://github.com/chenqin/flink-jar/blob/master/src/
>>> main/java/FlinkBootstrap.java#L49
>>> 
>>> On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com>
>> wrote:
>>> 
>>>> Hi Andy,
>>>> 
>>>> I believe it's because you didn't set your s3 impl correctly. Try to
>> set
>>>> your core-site.xml by following https://ci.apache.org/
>>>> projects/flink/flink-docs-release-1.4/ops/deployment/
>>>> aws.html#s3afilesystem-
>>>> recommended
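[For reference, the S3A properties the linked page describes look roughly like the entries below. These are illustrative values, not verified settings from this thread; they belong inside the <configuration> element of core-site.xml, and on EMR with instance roles the credential properties are typically unnecessary:]

```shell
# Print illustrative core-site.xml entries for the S3A filesystem.
cat <<'EOF'
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>/tmp</value>
</property>
EOF
```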
>>>> 
>>>> Bowen
>>>> 
>>>> On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
>>>> 
>>>>> Hi Till,
>>>>> 
>>>>> Seems like everything is in line there.  hadoop-common.jar ->
>>>>> hadoop-common-2.7.3-amzn-3.jar
>>>>> 
>>>>> And when I decompiled that jar I see public void
>>>>> addResource(Configuration conf) in
>>>>> org/apache/hadoop/conf/Configuration.java
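[Instead of decompiling, the same check can be done with javap; a sketch, using the hadoop-common.jar symlink mentioned above:]

```shell
# Verify that the Configuration class in EMR's hadoop-common jar actually
# declares an addResource overload taking a Configuration argument.
javap -classpath /usr/lib/hadoop/hadoop-common.jar \
  org.apache.hadoop.conf.Configuration | grep addResource
```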
>>>>> 
>>>>> I agree that an incorrect version of the jar is probably being run.  Is
>>>>> there a way to limit the classpath for the TaskManager when starting
>>>>> the job?
>>>>> 
>>>>> Thank you
>>>>> 
>>>>> On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
>>>>> wrote:
>>>>> 
>>>>>> Hi Andy,
>>>>>> 
>>>>>> could you check which Hadoop version this jar
>>>>>> /usr/lib/hadoop/hadoop-common.jar is? Maybe also checking whether
>>> the
>>>>>> contained hadoop Configuration class has the method
>>>>>> Configuration.addResource(Lorg/apache/hadoop/conf/
>> Configuration;)V.
>>>>> Maybe
>>>>>> this jar is the culprit because it comes from a different Hadoop
>>>> version.
>>>>>> 
>>>>>> Cheers,
>>>>>> Till
>>>>>>
>>>>>> 
>>>>>> On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
>>>>>> 
>>>>>>> Hi Till,
>>>>>>> 
>>>>>>> I believe this is what you are looking for; the classpath is much
>>>>>>> bigger for the task manager.  I can also post the whole log file if
>>>>>>> needed:
>>>>>>> 
>>>>>>> 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
>>>>>> YarnTaskManagerRunner
>>>>>>>                 -  Classpath:
>>>>>>> flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
>>>>>>> python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
>>>>>>> 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
>>>>>>> log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
>>>>>>> etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
>>>>>>> amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
>>>>>>> hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>> nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
>>>>>>> streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
>>>>>>> hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
>>>>>>> usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
>>>>>>> hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
>>>>>>> hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
>>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
>>>>>>> hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
>>>>>>> jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
>>>>>>> hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>> datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
>>>>>>> archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
>>>>>>> hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
>>>>>>> amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
>>>>>>> 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
>>>>>>> jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
>>>>>>> lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
>>>>>>> httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
>>>>>>> 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
>>>>>>> jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
>>>>>>> lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
>>>>>>> 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
>>>>>>> usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
>>>>>>> gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
>>>>>>> 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
>>>>>>> lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
>>>>>>> apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
>>>>>>> httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
>>>>>>> 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
>>>>>>> lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
>>>>>>> commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
>>>>>>> usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
>>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
>>>>>>> jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
>>>>>>> commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
>>>>>>> jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
>>>>>>> usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
>>>>>>> jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
>>>>>>> Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
>>>>>>> lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
>>>>>>> api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
>>>>>>> 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
>>>>>>> /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
>>>>>>> jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
>>>>>>> cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
>>>>>>> lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
>>>>>>> apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
>>>>>>> commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
>>>>>>> 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
>>>>>>> /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
>>>>>>> lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
>>>>>>> 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
>>>>>>> usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
>>>>>>> lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
>>>>>>> jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
>>>>>>> hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
>>>>>>> curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
>>>>>>> jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
>>>>>>> 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
>>>>>>> usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
>>>>>>> hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
>>>>>>> amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
>>>>>>> incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
>>>>>>> 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
>>>>>>> jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
>>>>>>> lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
>>>>>>> lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
>>>>>>> commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
>>>>>>> jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
>>>>>>> usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
>>>>>>> hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
>>>>>>> netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
>>>>>>> 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
>>>>>>> hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
>>>>>>> lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
>>>>>>> hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
>>>>>>> hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
>>>>>>> lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
>>>>>>> api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
>>>>>>> /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
>>>>>>> hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
>>>>>>> lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
>>>>>>> logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
>>>>>>> jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
>>>>>> hadoop-mapreduce/hadoop-
>>>>>>> mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
>>>>>>> mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
>>>>>>> jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
>>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
>>>>>> 8.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
>>>>>>> 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
>>>>>>> jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
>>>>>>> net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
>>>>>>> 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
>>>>>>> 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
>>>>>>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
>>>>>>> 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
>>>>>>> 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
>>>>>>> client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
>>>>>>> jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
>>>>>>> common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
>>>>>>> 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
>>>>>>> jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
>>>>>>> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
>>>>>>> usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
>>>>>>> lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
>>>>>>> hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
>>>>>>> mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
>>>>>>> [... remainder of the quoted TaskManager classpath (Hadoop, YARN, EMRFS
>>>>>>> and AWS SDK jars) snipped ...]
>>>>>>> 
>>>>>>> Thank you
>>>>>>> 
>>>>>>> On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <trohrmann@apache.org> wrote:
>>>>>>> 
>>>>>>>> Hi Andy,
>>>>>>>> 
>>>>>>>> the CliFrontend is not executed via Yarn, thus, it is not affected by
>>>>>>>> dependencies which are added due to the underlying Yarn cluster.
>>>>>>>> Therefore, it would be helpful to look at the TaskManager logs. Either
>>>>>>>> you have enabled log aggregation on your Yarn cluster, then you can
>>>>>>>> obtain the logs via `yarn logs -applicationId <application ID>`, or you
>>>>>>>> have to retrieve them from the machines where they were running (either
>>>>>>>> by going directly there or via the Yarn web interface).
>>>>>>>> 
>>>>>>>> Cheers,
>>>>>>>> Till
>>>>>>>> 
>>>>>>> On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
>>>>>>>> 
>>>>>>>>> Hi Till,
>>>>>>>>> 
>>>>>>>>> That is actually the classpath used by the flink bash script (which
>>>>>>>>> launches the jar using the java command).  I changed the execute to an
>>>>>>>>> echo, and grabbed that for the CLI arguments.
>>>>>>>>> 
>>>>>>>>> I believe this is the classpath from the log file (although it might
>>>>>>>>> not be the taskmanager log; is that any different from what would be in
>>>>>>>>> my flink-1.3.2/log folder?):
>>>>>>>>> 
>>>>>>>>> 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend
>>>>>>>>>                 -  Classpath:
>>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
>>>>>>>>> 
>>>>>>>>> If that doesn't seem right, and you can point me in the right direction
>>>>>>>>> as to where the TaskManager logs would be, I would be happy to grab the
>>>>>>>>> information you're looking for.
>>>>>>>>> 
>>>>>>>>> Thank you
>>>>>>>>> 
>>>>>>>>> On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <trohrmann@apache.org> wrote:
>>>>>>>>> 
>>>>>>>>>> Hi Andy,
>>>>>>>>>> 
>>>>>>>>>> this looks to me indeed like a dependency problem. I assume that EMR
>>>>>>>>>> or something else is pulling in an incompatible version of Hadoop.
>>>>>>>>>> 
>>>>>>>>>> The classpath you've posted, is this the one logged in the log files
>>>>>>>>>> (TaskManager log) or did you compile it yourself? In the latter case,
>>>>>>>>>> it would also be helpful to get access to the TaskManager logs.
>>>>>>>>>> 
>>>>>>>>>> Cheers,
>>>>>>>>>> Till
>>>>>>>>>> 
>>>>>>>>>> On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <ajm2444@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Hi Fabian,
>>>>>>>>>>> 
>>>>>>>>>>> 1) I have looked at the linked docs, and from what I can tell no
>>>>>>>>>>> setup should be needed to get Flink working (other than downloading
>>>>>>>>>>> the correct binaries, which I believe I did).
>>>>>>>>>>> 2) I have downloaded the Flink 1.3.2 binaries
>>>>>>>>>>> (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
>>>>>>>>>>> <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
>>>>>>>>>>> This is for Hadoop 2.7.x, which matches EMR 5.8.0.
>>>>>>>>>>> 
>>>>>>>>>>> I appreciate any help or guidance you can provide in fixing my
>>>>>>>>>>> problems; please let me know if there is anything else I can provide
>>>>>>>>>>> you.
>>>>>>>>>>> 
>>>>>>>>>>> Thank you
>>>>>>>>>>> 
>>>>>>>>>>> On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <fhueske@gmail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi Andy,
>>>>>>>>>>>> 
>>>>>>>>>>>> I'm not an AWS expert, so I'll just check on some common issues.
>>>>>>>>>>>> 
>>>>>>>>>>>> I guess you already had a look at the Flink docs for AWS/EMR but
>>>>>>>>>>>> I'll post the link just to be sure [1].
>>>>>>>>>>>> 
>>>>>>>>>>>> Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1),
>>>>>>>>>>>> did you build Flink yourself or did you download the binaries?
>>>>>>>>>>>> Does the Hadoop version of the Flink build match the Hadoop version
>>>>>>>>>>>> of EMR 5.8.0, i.e., Hadoop 2.7.x?
>>>>>>>>>>>> 
>>>>>>>>>>>> Best, Fabian
>>>>>>>>>>>> 
>>>>>>>>>>>> [1]
>>>>>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
>>>>>>>>>>>> 
>>>>>>>>>>>> 2017-10-02 21:51 GMT+02:00 Andy M. <ajm2444@gmail.com>:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Fabian,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sorry, I just realized I forgot to include that part.  The error
>>>>>>>>>>>>> returned is:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> java.lang.NoSuchMethodError:
>>>>>>>>>>>>> org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
>>>>>>>>>>>>>    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
>>>>>>>>>>>>>    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
>>>>>>>>>>>>>    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
>>>>>>>>>>>>>    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
>>>>>>>>>>>>>    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
>>>>>>>>>>>>>    at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
>>>>>>>>>>>>>    at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
>>>>>>>>>>>>>    at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I believe it has something to do with the classpath, but I am
>>>>>>>>>>>>> unsure why or how to fix it.  The classpath being used during the
>>>>>>>>>>>>> execution is:
>>>>>>>>>>>>> /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
>>>>>>>>>>>>> addResource function does seem to be there.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thank you
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <fhueske@gmail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Andy,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> can you describe in more detail what exactly isn't working?
>>>>>>>>>>>>>> Do you see error messages in the log files or on the console?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks, Fabian
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 2017-10-02 15:52 GMT+02:00 Andy M. <ajm2444@gmail.com>:
>>>>>>>>>>>>>>> [... original message, quoted in full, snipped ...]


Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hello,

Bowen:  Unless I am missing something, the docs say no setup is needed on
EMR; each topic states: "You don’t have to configure this manually if you
are running Flink on EMR."  S3 access from the CLI works fine on my clusters.

Chen: Thank you for this, I will look into this if I am unable to get this
running on YARN successfully.

Stephan:  Removing said library causes the flink bash script
(flink-1.3.2/bin/flink) to fail; the underlying java invocation needs it.
I also tried explicitly pointing the classpath for the java call at the
Hadoop library jars.  This is the original java command I was trying to
run:

java
-Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
-Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
-Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
-classpath
/home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
/home/hadoop/flink-consumer.jar
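As a side note, rather than decompiling jars one can interrogate that exact classpath with a tiny reflective probe run under the same -classpath: it reports whether the Configuration class that wins really declares the addResource(Configuration) overload named in the earlier NoSuchMethodError. This is a purely illustrative sketch (not Flink or Hadoop code); the demo calls use a JDK class so the file compiles and runs without any Hadoop jars.

```java
public class MethodCheck {
    /** True if className resolves and publicly declares (or inherits)
     *  method `name` with the given parameter types. */
    static boolean hasMethod(String className, String name, Class<?>... params) {
        try {
            Class.forName(className).getMethod(name, params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On the EMR master, run with the classpath above and probe, e.g.:
        //   hasMethod("org.apache.hadoop.conf.Configuration", "addResource",
        //             Class.forName("org.apache.hadoop.conf.Configuration"))
        // The demo below uses a JDK class so the sketch runs anywhere:
        System.out.println(hasMethod("java.util.ArrayList", "add", Object.class));  // true
        System.out.println(hasMethod("java.util.ArrayList", "frobnicate"));         // false
    }
}
```

If the probe prints false for addResource, the Configuration class being resolved first comes from an older Hadoop than the one EmrFileSystem was compiled against.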


This is what I changed it to (removing the shaded-hadoop2-uber jar and
adding in the Hadoop library jars):

java
-Dlog.file=/home/hadoop/flink-1.3.2/log/flink-hadoop-client-ip-172-31-19-27.log
-Dlog4j.configuration=file:/home/hadoop/flink-1.3.2/conf/log4j-cli.properties
-Dlogback.configurationFile=file:/home/hadoop/flink-1.3.2/conf/logback.xml
-classpath
/home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/httpcore-4.4.4.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/etc/hadoop/conf
org.apache.flink.client.CliFrontend run -m yarn-cluster -yn 1
/home/hadoop/flink-consumer.jar

The latter throws the following error:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/hadoop/util/VersionInfo
        at
org.apache.flink.runtime.util.EnvironmentInformation.logEnvironmentInfo(EnvironmentInformation.java:283)
        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1124)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.util.VersionInfo
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 2 more

Simply removing the .jar from the folder causes the same error.
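For what it's worth, the NoClassDefFoundError suggests nothing on the second classpath supplies org.apache.hadoop.util.VersionInfo: the rewritten command pulls in the jars under /usr/lib/hadoop/lib/ but, as far as I can see, not /usr/lib/hadoop/hadoop-common.jar itself, which is where that class normally lives. The small helper below (illustrative only; the default argument is a JDK class so it runs anywhere) reports which classpath entry a class is actually loaded from.

```java
import java.security.CodeSource;

public class WhichJar {
    /** Returns the jar/directory a class was loaded from, "bootstrap" for core
     *  JDK classes, or a marker string when the class does not resolve at all. */
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "NOT FOUND on this classpath";
        }
    }

    public static void main(String[] args) {
        // e.g. java -cp <suspect classpath> WhichJar org.apache.hadoop.util.VersionInfo
        String name = args.length > 0 ? args[0] : "java.util.ArrayList";
        System.out.println(name + " -> " + locate(name));
    }
}
```

Running it once per classpath variant shows immediately which jar (if any) wins for VersionInfo and for Configuration.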

Thank you




On Mon, Oct 9, 2017 at 5:46 AM, Stephan Ewen <se...@apache.org> wrote:

> Hi!
>
> It looks like multiple Hadoop versions are in the classpath. Flink's hadoop
> jar and the EMR Hadoop jars.
> I would simply drop Flink's own Hadoop dependency and only use the EMR
> Hadoop jars.
>
> Delete the 'flink-shaded-hadoop2-uber' jar from Flink, and make sure the
> setup is such that the Hadoop lib environment variable is set. Then it
> should not have conflicts any more.
>
>
>
> On Sun, Oct 8, 2017 at 12:08 AM, Chen Qin <qi...@gmail.com> wrote:
>
> > Attached my side project verified working to deploy jobmanager and
> > taskmanager as stateless service(non yarn/mesos), configuration here
> >
> > https://github.com/chenqin/flink-jar/tree/master/config/hadoop
> >
> > more detail here
> > https://github.com/chenqin/flink-jar/blob/master/src/
> > main/java/FlinkBootstrap.java#L49
> >
> > On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com>
> wrote:
> >
> > > Hi Andy,
> > >
> > > I believe it's because you didn't set your s3 impl correctly. Try to
> > > set your core-site.xml by following
> > > https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3afilesystem-recommended
> > >
> > > Bowen
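For reference, the setting Bowen points at boils down to mapping the s3:// scheme to S3AFileSystem in core-site.xml. A minimal sketch (property names per the Hadoop S3A filesystem; on EMR, credentials normally come from the instance role, so no access keys are shown):

```xml
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- local buffer directory S3A uses to stage uploads -->
  <property>
    <name>fs.s3a.buffer.dir</name>
    <value>/tmp</value>
  </property>
</configuration>
```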
> > >
> > > On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
> > >
> > > > Hi Till,
> > > >
> > > > Seems like everything is in line there.  hadoop-common.jar ->
> > > > hadoop-common-2.7.3-amzn-3.jar
> > > >
> > > > And when I decompiled that jar I see public void
> > > > addResource(Configuration conf) in
> > > > org/apache/hadoop/conf/Configuration.java
> > > >
> > > > I agree that an incorrect version of the jar is probably being run.
> > > > Is there a way to limit the classpath for the TaskManager when
> > > > starting the job?
> > > >
> > > > Thank you
> > > >
> > > > On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi Andy,
> > > > >
> > > > > could you check which Hadoop version this jar
> > > > > /usr/lib/hadoop/hadoop-common.jar is? Maybe also check whether the
> > > > > contained Hadoop Configuration class has the method
> > > > > Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V.
> > > > > Maybe this jar is the culprit because it comes from a different
> > > > > Hadoop version.
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > >
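The two checks Till asks for can be scripted instead of decompiled: hadoop-common jars embed a common-version-info.properties file with the build version, and javap can list Configuration's public methods. A sketch assuming an EMR master with unzip and a JDK installed; the block is guarded so it does nothing where the jar is absent:

```shell
JAR=/usr/lib/hadoop/hadoop-common.jar
if [ -f "$JAR" ]; then
  # Build version baked into the jar (e.g. version=2.7.3-amzn-3)
  unzip -p "$JAR" common-version-info.properties | grep '^version='
  # Does this Configuration class expose the addResource overloads?
  javap -cp "$JAR" org.apache.hadoop.conf.Configuration | grep addResource
fi
```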
> > > > > On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
> > > > >
> > > > > > Hi Till,
> > > > > >
> > > > > > I believe this is what you are looking for, classpath is much
> > bigger
> > > > for
> > > > > > the task manager.  I can also post the whole log file if needed:
> > > > > >
> > > > > > 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
> > > > > YarnTaskManagerRunner
> > > > > >                  -  Classpath:
> > > > > > flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> > > > > > python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> > > > > > 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> > > > > > log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> > > > > > etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> > > > > > amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> > > > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> > > > > > hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > > nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> > > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> > > > > > streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> > > > > > hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> > > > > > usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> > > > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> > > > > > hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> > > > > > hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> > > > > > 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> > > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> > > > > > hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> > > > > > jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> > > > > > hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > > datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > > archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> > > > > > hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> > > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> > > > > > 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> > > > > > jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> > > > > > lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> > > > > > httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> > > > > > 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> > > > > > jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> > > > > > lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> > > > > > 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> > > > > > usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> > > > > > gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> > > > > > 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> > > > > > lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> > > > > > apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> > > > > > httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> > > > > > 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> > > > > > lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> > > > > > commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> > > > > > usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> > > > > > xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> > > > > > jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> > > > > > commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> > > > > > jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> > > > > > usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> > > > > > jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> > > > > > Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> > > > > > lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> > > > > > api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> > > > > > 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> > > > > > /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> > > > > > jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> > > > > > cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> > > > > > lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> > > > > > apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> > > > > > commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> > > > > > 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> > > > > > /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> > > > > > lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> > > > > > 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> > > > > > usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> > > > > > lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> > > > > > jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> > > > > > hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> > > > > > curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> > > > > > jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> > > > > > 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> > > > > > usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> > > > > > usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> > > > > > hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> > > > > > amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> > > > > > incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> > > > > > 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> > > > > > jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> > > > > > lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> > > > > > lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> > > > > > commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> > > > > > jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> > > > > > usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> > > > > > hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> > > > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> > > > > > 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> > > > > > hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > > > lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> > > > > > hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > > > hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> > > > > > lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> > > > > > api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> > > > > > /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> > > > > > hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> > > > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> > > > > > logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> > > > > > jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> > > > > > mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-
> > > > > > mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> > > > > > httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > > > mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> > > > > > lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> > > > > > mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > > commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > > jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> > > > > > mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> > > > > > jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> > > > > > lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> > > > > > usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> > > > > > hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> > > > > > 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> > > > > 8.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> > > > > > 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> > > > > > jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> > > > > > hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> > > > > > hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> > > > > > commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> > > > > > net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> > > > > > usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> > > > > > xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> > > > > > 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> > > > > > lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> > > > > > mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> > > > > > mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> > > > > > mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> > > > > > hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> > > > > > hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> > > > > > mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> > > > > > lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > > > mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> > > > > > 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> > > > > > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> > > > > > 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> > > > > > 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> > > > > > client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> > > > > > jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> > > > > > lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > > > common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> > > > > > 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> > > > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > > > jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> > > > > > usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> > > > > > mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> > > > > > hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> > > > > > hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > > > lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > > > mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > > > archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> > > > > > 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> > > > > > jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> > > > > > lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> > > > > > mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> > > > > > jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> > > > > > kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> > > > > > jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> > > > > > /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> > > > > > /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > > > usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> > > > > > lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> > > > > > lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-
> > > mapreduce/commons-
> > > > > > configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> > > > > > 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> > > > > > usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> > > > > > usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> > > > > > lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> > > > > > hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > > hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> > > > > > zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> > > > > > 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> > > > > > /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> > > > > > usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> > > > > > lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> > > > > > lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> > > > > > lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> > > > > > hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> > > > > > hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> > > > > > hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> > > > > lib/hadoop-mapreduce/lib/
> > > > > > snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > > jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > > paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> > > > > > guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> > > > > > jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> > > > > > hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > > > mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> > > > > > hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > > > mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> > > > > > mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > > lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> > > > > > mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > > guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> > > > > > 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> > > > > > usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> > > > > > usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> > > > > > hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> > > > > > mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> > > > > > hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-
> > > yarn-server-
> > > > > > sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > > server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > > hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > > hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> > > > > > hadoop-yarn/hadoop-yarn-applications-distributedshell-
> > > > > > 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > > server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-
> > yarn-server-
> > > > > > sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > > hadoop-yarn-server-applicationhistoryservice.jar:
> > > > > > /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> > > > > > lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> > > > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> > > > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > > > applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> > > > > > lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> > > > > > yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> > > > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > > > unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> > > > > > yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > > hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > > common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> > > > > > proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > > > nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> > > > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > > > unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > > yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> > > > > > api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> > > > > > jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> > > > > > lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> > > > > > hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> > > > > > lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> > > > > > 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> > > > > > usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> > > > > > hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> > > > > > lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> > > > > > client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> > > > > > jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> > > > > > usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > > > yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> > > > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> > > > > > aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> > > > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> > > > > > usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> > > > > > /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> > > > > > lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> > > > > > hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> > > > > > 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> > > > > > jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> > > > > > hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> > > > > > lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > > > collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> > > > > > 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> > > > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> > > > > > hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> > > > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> > > > > > jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > > > logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> > > > > > /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> > > > > > share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> > > > > > jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> > > > > > bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> > > > > > over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> > > > > > java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> > > > > > 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> > > > > > usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> > > > > > share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> > > > > > share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> > > > > > usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> > > > > > lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> > > > > > share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> > > > > > aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> > > > > > aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> > > > > > usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> > > > > > gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-
> > > > > > elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-
> mechanicalturkrequester-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > > sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > > sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > stepfunctions-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > sdk/aws-java-sdk-
> > > > > > cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > elastictranscoder-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > organizations-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > sdk/aws-java-sdk-
> > > > > > applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> > > > > > aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> > > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> > > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-
> resourcegroupstaggingapi-1.11.
> > > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> > > > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > > java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> > > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> > > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> > > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> > > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> > > > > > java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> > > > > > aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> > > > > > share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> > > > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > > elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > > aws-java-sdk-iot-1.11.160.jar
> > > > > >
> > > > > > Thank you
> > > > > >
> > > > > > On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <
> > trohrmann@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Andy,
> > > > > > >
> > > > > > > the CliFrontend is not executed via Yarn; thus it is not affected
> > > > > > > by dependencies added by the underlying Yarn cluster. Therefore,
> > > > > > > it would be helpful to look at the TaskManager logs. Either you
> > > > > > > have enabled log aggregation on your Yarn cluster, in which case
> > > > > > > you can obtain the logs via
> > > > > > > `yarn logs -applicationId <application ID>`, or you have to
> > > > > > > retrieve them from the machines where they were running (either
> > > > > > > by going there directly or via the Yarn web interface).
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Till
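The log-retrieval step Till describes, as a sketch. The application ID below is a placeholder of ours, and the block is guarded so it only runs where the yarn CLI exists (i.e. on the cluster):

```shell
APP_ID="${APP_ID:-application_1507000000000_0001}"  # placeholder ID, substitute yours
if command -v yarn >/dev/null 2>&1; then
  # Requires log aggregation to be enabled on the Yarn cluster.
  yarn logs -applicationId "$APP_ID" > "$APP_ID.log"
  # The TaskManager logs print their effective classpath near startup.
  grep -n 'Classpath' "$APP_ID.log" | head
else
  echo "yarn not found; run this on the EMR master"
fi
```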
> > > > > > >
> > > > > > > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com>
> > wrote:
> > > > > > >
> > > > > > > > Hi Till,
> > > > > > > >
> > > > > > > > That is actually the classpath used by the flink bash
> > script(that
> > > > > > > launches
> > > > > > > > the jar using the java command).  I changed the execute to an
> > > echo,
> > > > > and
> > > > > > > > grabbed that for the CLI arguments.
> > > > > > > >
> > > > > > > > I believe this is the class path from the log file(although
> it
> > > > might
> > > > > > not
> > > > > > > be
> > > > > > > > the taskmanager log, is that any different from what would be
> > in
> > > my
> > > > > > > > flink-1.3.2/log folder?):
> > > > > > > >
> > > > > > > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.
> > > CliFrontend
> > > > > > > >                  -  Classpath:
> > > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > > > > > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> > > > > > > > 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> > > > > > > > hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> > > > > > > > hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/
> > > > hadoop/conf:
> > > > > > > >
> > > > > > > > If that doesn't seem right, and you can point me in the right
> > > > > direction
> > > > > > > as
> > > > > > > > to where the TaskManager logs would be, I would be happy to
> > grab
> > > > the
> > > > > > > > information you're looking for.
> > > > > > > >
> > > > > > > > Thank you
> > > > > > > >
> > > > > > > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <
> > > > trohrmann@apache.org>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Andy,
> > > > > > > > >
> > > > > > > > > this looks to me indeed like a dependency problem. I assume
> > > that
> > > > > EMR
> > > > > > or
> > > > > > > > > something else is pulling in an incompatible version of
> > Hadoop.
> > > > > > > > >
> > > > > > > > > The classpath you've posted, is this the one logged in the
> > log
> > > > > files
> > > > > > > > > (TaskManager log) or did you compile it yourself? In the
> > latter
> > > > > case,
> > > > > > > it
> > > > > > > > > would also be helpful to get access to the TaskManager
> logs.
> > > > > > > > >
> > > > > > > > > Cheers,
> > > > > > > > > Till
> > > > > > > > >
> > > > > > > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <
> ajm2444@gmail.com>
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Fabian,
> > > > > > > > > >
> > > > > > > > > > 1) I have looked at the linked docs, and from what I can
> > tell
> > > > no
> > > > > > > setup
> > > > > > > > > > should really need to be done to get Flink working(Other
> > than
> > > > > > > > downloading
> > > > > > > > > > the correct binaries, which I believe I did)
> > > > > > > > > > 2) I have downloaded the Flink 1.3.2
> > > binaries(flink-1.3.2-bin-
> > > > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > > > <http://apache.claz.org/flink/
> flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > > > hadoop27-scala_2.11.tgz>)
> > > > > > > > > > This is for hadoop 2.7.X, which matches EMR 5.8.0.
> > > > > > > > > >
> > > > > > > > > > I appreciate any help or guidance you can provide me in
> > > fixing
> > > > my
> > > > > > > > > problems,
> > > > > > > > > > please let me know if there is anything else I can
> provide
> > > you.
> > > > > > > > > >
> > > > > > > > > > Thank you
> > > > > > > > > >
> > > > > > > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <
> > > > fhueske@gmail.com
> > > > > >
> > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Andy,
> > > > > > > > > > >
> > > > > > > > > > > I'm not an AWS expert, so I'll just check on some
> common
> > > > > issues.
> > > > > > > > > > >
> > > > > > > > > > > I guess you already had a look at the Flink docs for
> > > AWS/EMR
> > > > > but
> > > > > > > I'll
> > > > > > > > > > post
> > > > > > > > > > > the link just to be sure [1].
> > > > > > > > > > >
> > > > > > > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with
> > Flink
> > > > > > 1.3.1)
> > > > > > > > did
> > > > > > > > > > you
> > > > > > > > > > > build Flink yourself or did you download the binaries?
> > > > > > > > > > > Does the Hadoop version of the Flink build match the
> > Hadoop
> > > > > > version
> > > > > > > > of
> > > > > > > > > > EMR
> > > > > > > > > > > 5.8.0, i.e., Hadoop 2.7.x?
> > > > > > > > > > >
> > > > > > > > > > > Best, Fabian
> > > > > > > > > > >
> > > > > > > > > > > [1]
> > > > > > > > > > > https://ci.apache.org/projects/flink/flink-docs-
> > > > > > > > > > release-1.3/setup/aws.html
> > > > > > > > > > >
> > > > > > > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <ajm2444@gmail.com
> >:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Fabian,
> > > > > > > > > > > >
> > > > > > > > > > > > Sorry, I just realized I forgot to include that part.
> > > The
> > > > > > error
> > > > > > > > > > returned
> > > > > > > > > > > > is:
> > > > > > > > > > > >
> > > > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > > > org.apache.hadoop.conf.Configuration.addResource(
> > > > > > > > > > > Lorg/apache/hadoop/conf/
> > > > > > > > > > > > Configuration;)V
> > > > > > > > > > > >     at com.amazon.ws.emr.hadoop.fs.
> > > > EmrFileSystem.initialize(
> > > > > > > > > > > > EmrFileSystem.java:93)
> > > > > > > > > > > >     at org.apache.flink.runtime.fs.
> > > hdfs.HadoopFileSystem.
> > > > > > > > > > > > initialize(HadoopFileSystem.java:328)
> > > > > > > > > > > >     at org.apache.flink.core.fs.FileSystem.
> > > > > > > getUnguardedFileSystem(
> > > > > > > > > > > > FileSystem.java:350)
> > > > > > > > > > > >     at org.apache.flink.core.fs.
> > > FileSystem.get(FileSystem.
> > > > > > > > java:389)
> > > > > > > > > > > >     at org.apache.flink.core.fs.Path.
> > > > > > > getFileSystem(Path.java:293)
> > > > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.
> > > > > > > > > > > > FsCheckpointStreamFactory.<init>(
> > > > FsCheckpointStreamFactory.
> > > > > > > > java:99)
> > > > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.
> > > > > > FsStateBackend.
> > > > > > > > > > > > createStreamFactory(FsStateBackend.java:282)
> > > > > > > > > > > >     at org.apache.flink.contrib.streaming.state.
> > > > > > > > RocksDBStateBackend.
> > > > > > > > > > > > createStreamFactory(RocksDBStateBackend.java:273
> > > > > > > > > > > >
> > > > > > > > > > > > I believe it has something to do with the classpath,
> > but
> > > I
> > > > am
> > > > > > > > unsure
> > > > > > > > > > why
> > > > > > > > > > > or
> > > > > > > > > > > > how to fix it.  The classpath being used during the
> > > > execution
> > > > > > is:
> > > > > > > > > > > > /home/hadoop/flink-1.3.2/lib/
> > > flink-python_2.11-1.3.2.jar:/
> > > > > > > > > > > > home/hadoop/flink-1.3.2/
> > lib/flink-shaded-hadoop2-
> > > > > > > > > > > > uber-1.3.2.jar:/home/hadoop/
> > > > flink-1.3.2/lib/log4j-1.2.
> > > > > > > > > > > > 17.jar:/home/hadoop/flink-1.
> > > > 3.2/lib/slf4j-log4j12-1.7.7.
> > > > > > > > > > > > jar:/home/hadoop/flink-1.
> > > > 3.2/lib/flink-dist_2.11-1.3.
> > > > > > > > > > > > 2.jar::/etc/hadoop/conf:
> > > > > > > > > > > >
> > > > > > > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar
> > and
> > > > it
> > > > > > > seems
> > > > > > > > > the
> > > > > > > > > > > > addResource function is indeed there.
> > > > > > > > > > > >
> > > > > > > > > > > > Thank you
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> > > > > > fhueske@gmail.com
> > > > > > > >
> > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi Andy,
> > > > > > > > > > > > >
> > > > > > > > > > > > > can you describe in more detail what exactly isn't
> > > > working?
> > > > > > > > > > > > > Do you see error messages in the log files or on
> the
> > > > > console?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks, Fabian
> > > > > > > > > > > > >
> > > > > > > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <
> > ajm2444@gmail.com
> > > >:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Hello,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I am about to deploy my first Flink projects  to
> > > > > > production,
> > > > > > > > but
> > > > > > > > > I
> > > > > > > > > > am
> > > > > > > > > > > > > > running into a very big hurdle.  I am unable to
> > > launch
> > > > my
> > > > > > > > project
> > > > > > > > > > so
> > > > > > > > > > > it
> > > > > > > > > > > > > can
> > > > > > > > > > > > > > write to an S3 bucket.  My project is running on
> an
> > > EMR
> > > > > > > > cluster,
> > > > > > > > > > > where
> > > > > > > > > > > > I
> > > > > > > > > > > > > > have installed Flink 1.3.2.  I am using Yarn to
> > > launch
> > > > > the
> > > > > > > > > > > application,
> > > > > > > > > > > > > and
> > > > > > > > > > > > > > it seems to run fine unless I am trying to enable
> > > check
> > > > > > > > > > > pointing(with a
> > > > > > > > > > > > > S3
> > > > > > > > > > > > > > target).  I am looking to use RocksDB as my
> > > > > check-pointing
> > > > > > > > > backend.
> > > > > > > > > > > I
> > > > > > > > > > > > > have
> > > > > > > > > > > > > > asked a few places, and I am still unable to
> find a
> > > > > > solution
> > > > > > > to
> > > > > > > > > > this
> > > > > > > > > > > > > > problem.  Here are my steps for creating a
> cluster,
> > > and
> > > > > > > > launching
> > > > > > > > > > my
> > > > > > > > > > > > > > application, perhaps I am missing a step.  I'd be
> > > happy
> > > > > to
> > > > > > > > > provide
> > > > > > > > > > > any
> > > > > > > > > > > > > > additional information if needed.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > AWS Portal:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > > > > > > >     2) Advanced Options
> > > > > > > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > > > > > > >     5) Next -> Next -> Next -> Create Cluster ( I
> > do
> > > > fill
> > > > > > out
> > > > > > > > > > > > > > names/keys/etc)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Once the cluster is up I ssh into the Master and
> do
> > > the
> > > > > > > > > following:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >     1  wget
> > > > > > > > > > > > > > http://apache.claz.org/flink/
> > > > > flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-
> > > > scala_2.11.tgz
> > > > > > > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4
> -d
> > > > > > > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1
> > > > > > > > ~/flink-consumer.jar
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > My conf/flink-conf.yaml I add the following
> fields:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >     state.backend: rocksdb
> > > > > > > > > > > > > >     state.backend.fs.checkpointdir:
> > > > s3:/bucket/location
> > > > > > > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > My program's checkpointing setup:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > env.enableCheckpointing(getCheckpointRate,
> > > > > > > > > > CheckpointingMode.EXACTLY_
> > > > > > > > > > > > > ONCE)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > env.getCheckpointConfig.
> > > enableExternalizedCheckpoints(
> > > > > > > > > > > > > > ExternalizedCheckpointCleanup.
> > > RETAIN_ON_CANCELLATION)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > env.getCheckpointConfig.
> > > setMinPauseBetweenCheckpoints(
> > > > > > > > > > > > > > getCheckpointMinPause)
> > > > > > > > > > > > > >     env.getCheckpointConfig.
> setCheckpointTimeout(
> > > > > > > > > > > getCheckpointTimeout)
> > > > > > > > > > > > > >     env.getCheckpointConfig.
> > > > > setMaxConcurrentCheckpoints(1)
> > > > > > > > > > > > > >     env.setStateBackend(new
> > > RocksDBStateBackend("s3://
> > > > > > > > > > > > bucket/location",
> > > > > > > > > > > > > > true))
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by Stephan Ewen <se...@apache.org>.
Hi!

It looks like multiple Hadoop versions are on the classpath: Flink's Hadoop
jar and the EMR Hadoop jars. I would simply drop Flink's own Hadoop
dependency and use only the EMR Hadoop jars.

Delete the 'flink-shaded-hadoop2-uber' jar from Flink's lib folder, and
make sure the Hadoop classpath environment variable is set. Then it should
not have conflicts any more.
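Concretely, on the EMR master that could look something like the sketch below. It assumes Flink was unpacked to ~/flink-1.3.2 (the path from the install steps earlier in the thread) and that `HADOOP_CLASSPATH`/`HADOOP_CONF_DIR` are the variables your Flink scripts pick up — verify both against your setup:

```shell
# Minimal sketch of the fix above; FLINK_HOME is an assumption taken
# from the install steps earlier in the thread -- adjust as needed.
FLINK_HOME="${FLINK_HOME:-$HOME/flink-1.3.2}"

# 1) Drop Flink's bundled Hadoop jar so only EMR's Hadoop jars are used.
#    (rm -f exits 0 even if the glob matches nothing.)
rm -f "$FLINK_HOME"/lib/flink-shaded-hadoop2-uber-*.jar

# 2) Hand Flink the cluster's own Hadoop classpath and config.
#    `hadoop classpath` prints EMR's full Hadoop classpath; `|| true`
#    keeps this sketch runnable on machines without Hadoop installed.
export HADOOP_CLASSPATH="$(hadoop classpath 2>/dev/null || true)"
export HADOOP_CONF_DIR=/etc/hadoop/conf
```

After this, relaunch the YARN session so the TaskManagers come up without the shaded jar on their classpath.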



On Sun, Oct 8, 2017 at 12:08 AM, Chen Qin <qi...@gmail.com> wrote:

> Attached my side project verified working to deploy jobmanager and
> taskmanager as stateless service(non yarn/mesos), configuration here
>
> https://github.com/chenqin/flink-jar/tree/master/config/hadoop
>
> more detail here
> https://github.com/chenqin/flink-jar/blob/master/src/
> main/java/FlinkBootstrap.java#L49
>
> On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com> wrote:
>
> > Hi Andy,
> >
> > I believe it's because you didn't set your s3 impl correctly. Try to set
> > your core-site.xml by following https://ci.apache.org/
> > projects/flink/flink-docs-release-1.4/ops/deployment/
> > aws.html#s3afilesystem-
> > recommended
> >
> > Bowen
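For reference, the core-site.xml change Bowen's link describes boils down to mapping the s3:// scheme to Hadoop's S3A implementation. A minimal sketch, written to a scratch file here — on EMR the live file is typically /etc/hadoop/conf/core-site.xml, and the property name should be checked against the linked docs for your Hadoop version:

```shell
# Sketch of the core-site.xml entry the S3AFileSystem docs describe.
# /tmp is used so this is runnable anywhere; the real EMR location
# is typically /etc/hadoop/conf/core-site.xml.
cat > /tmp/core-site-s3a.xml <<'EOF'
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
</configuration>
EOF

# Sanity check: exactly one line should mention the S3A class.
grep -c 'S3AFileSystem' /tmp/core-site-s3a.xml
```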
> >
> > On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
> >
> > > Hi Till,
> > >
> > > Seems like everything is in line there.  hadoop-common.jar ->
> > > hadoop-common-2.7.3-amzn-3.jar
> > >
> > > And when i decompiled that jar I see  public void
> > addResource(Configuration
> > > conf) in org/apache/hadoop/conf/Configuration.java
> > >
> > > I agree that an incorrect version of the jar is probably being run, is
> > > there a way to limit the classpath for the TaskManager when starting
> the
> > > job?
> > >
> > > Thank you
> > >
> > > On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
> > > wrote:
> > >
> > > > Hi Andy,
> > > >
> > > > could you check which Hadoop version this jar
> > > > /usr/lib/hadoop/hadoop-common.jar is? Maybe also checking whether
> the
> > > > contained hadoop Configuration class has the method
> > > > Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V.
> > > Maybe
> > > > this jar is the culprit because it comes from a different Hadoop
> > version.
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > >
> > > > On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
> > > >
> > > > > Hi Till,
> > > > >
> > > > > I believe this is what you are looking for, classpath is much
> bigger
> > > for
> > > > > the task manager.  I can also post the whole log file if needed:
> > > > >
> > > > > 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
> > > > YarnTaskManagerRunner
> > > > >                  -  Classpath:
> > > > > flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> > > > > python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> > > > > 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> > > > > log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> > > > > etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> > > > > amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> > > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> > > > > hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> > > > > streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> > > > > hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> > > > > usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> > > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> > > > > hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> > > > > hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> > > > > 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> > > > > hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> > > > > jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> > > > > hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > > archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> > > > > hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> > > > > amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> > > > > 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> > > > > jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> > > > > lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> > > > > httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> > > > > 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> > > > > jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> > > > > lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> > > > > 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> > > > > usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> > > > > gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> > > > > 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> > > > > lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> > > > > apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> > > > > httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> > > > > 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> > > > > lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> > > > > commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> > > > > usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> > > > > xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> > > > > jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> > > > > commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> > > > > jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> > > > > usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> > > > > jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> > > > > Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> > > > > lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> > > > > api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> > > > > 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> > > > > /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> > > > > jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> > > > > cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> > > > > lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> > > > > apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> > > > > commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> > > > > 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> > > > > /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> > > > > lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> > > > > 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> > > > > usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> > > > > lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> > > > > jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> > > > > hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> > > > > curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> > > > > jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> > > > > 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> > > > > usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> > > > > usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> > > > > hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> > > > > amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> > > > > incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> > > > > 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> > > > > jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> > > > > lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> > > > > lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> > > > > commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> > > > > jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> > > > > usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> > > > > hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> > > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> > > > > 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> > > > > hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > > lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> > > > > hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > > hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> > > > > lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> > > > > api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> > > > > /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> > > > > hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> > > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> > > > > logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> > > > > jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> > > > > mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-
> > > > > mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> > > > > httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > > mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> > > > > lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> > > > > mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> > > > > mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> > > > > jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> > > > > lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> > > > > usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> > > > > hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> > > > > 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> > > > 8.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> > > > > 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> > > > > jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> > > > > hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> > > > > hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> > > > > hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> > > > > commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> > > > > net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> > > > > usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> > > > > xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> > > > > 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> > > > > lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> > > > > mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> > > > > mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> > > > > mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> > > > > hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> > > > > hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> > > > > mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> > > > > lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > > mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> > > > > 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> > > > > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> > > > > 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> > > > > 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> > > > > client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> > > > > jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> > > > > lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > > common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> > > > > 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> > > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > > jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> > > > > usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> > > > > mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> > > > > hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> > > > > hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > > lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> > > > > hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > > mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > > archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> > > > > 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> > > > > jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> > > > > lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> > > > > mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> > > > > jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> > > > > kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> > > > > jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> > > > > /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> > > > > /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > > usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> > > > > lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> > > > > lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-
> > mapreduce/commons-
> > > > > configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> > > > > 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> > > > > usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> > > > > usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> > > > > lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> > > > > hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> > > > > zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> > > > > 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> > > > > /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> > > > > usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> > > > > lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> > > > > lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> > > > > lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> > > > > hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> > > > > hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> > > > > hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> > > > lib/hadoop-mapreduce/lib/
> > > > > snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> > > > > guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> > > > > jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> > > > > hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > > mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> > > > > hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > > mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> > > > > mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> > > > > lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> > > > > mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > > guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> > > > > 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> > > > > usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> > > > > usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> > > > > hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> > > > > mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> > > > > hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-
> > yarn-server-
> > > > > sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> > > > > hadoop-yarn/hadoop-yarn-applications-distributedshell-
> > > > > 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-
> yarn-server-
> > > > > sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > hadoop-yarn-server-applicationhistoryservice.jar:
> > > > > /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> > > > > lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> > > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> > > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > > applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> > > > > lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> > > > > yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> > > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > > unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> > > > > yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > > hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > > common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> > > > > proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > > nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> > > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > > unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > > yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> > > > > api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> > > > > jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> > > > > lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> > > > > hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> > > > > lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> > > > > 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> > > > > usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> > > > > hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> > > > > lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> > > > > client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> > > > > jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> > > > > usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > > yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> > > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> > > > > aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> > > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> > > > > usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> > > > > /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> > > > > lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> > > > > hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> > > > > 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> > > > > jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> > > > > hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> > > > > lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > > collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> > > > > 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> > > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> > > > > hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> > > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> > > > > jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > > logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> > > > > /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> > > > > share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> > > > > jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> > > > > bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> > > > > over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> > > > > java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> > > > > 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> > > > > usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> > > > > share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> > > > > share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> > > > > usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> > > > > lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> > > > > share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> > > > > aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> > > > > aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> > > > > usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> > > > > gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-
> > > > > elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> > > > > sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > stepfunctions-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> > > > sdk/aws-java-sdk-
> > > > > cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > elastictranscoder-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > organizations-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> > > > sdk/aws-java-sdk-
> > > > > applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> > > > > aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.
> > > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> > > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > > java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> > > > > usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> > > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> > > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> > > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> > > > > java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> > > > > aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> > > > > share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> > > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > > elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > > aws-java-sdk-iot-1.11.160.jar
> > > > >
> > > > > Thank you
> > > > >
> > > > > On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <
> trohrmann@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Hi Andy,
> > > > > >
> > > > > > the CliFrontend is not executed via Yarn; thus, it is not affected by
> > > > > > dependencies which are added by the underlying Yarn cluster. Therefore,
> > > > > > it would be helpful to look at the TaskManager logs. Either you have
> > > > > > enabled log aggregation on your Yarn cluster, in which case you can
> > > > > > obtain the logs via `yarn logs -applicationId <application ID>`, or you
> > > > > > have to retrieve them from the machines where they were running (either
> > > > > > by going directly there or via the Yarn web interface).
> > > > > >
> > > > > > Cheers,
> > > > > > Till
> > > > > >
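Till's suggestion can be sketched as a couple of commands. This is only a sketch: it assumes log aggregation is enabled on the cluster, and the application ID shown is a hypothetical placeholder, not one from this thread.

```shell
# Run on the EMR master. Assumes YARN log aggregation is enabled.
yarn application -list                       # find the Flink session's application ID
APP_ID=application_1506952345678_0001        # hypothetical ID; substitute your own
yarn logs -applicationId "$APP_ID" > flink-yarn.log
grep -n "NoSuchMethodError" flink-yarn.log   # locate the failure in the TaskManager output
```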
> > > > > > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com>
> wrote:
> > > > > >
> > > > > > > Hi Till,
> > > > > > >
> > > > > > > That is actually the classpath used by the flink bash script (which
> > > > > > > launches the jar using the java command). I changed the exec call to
> > > > > > > an echo and grabbed the classpath from the printed CLI arguments.
> > > > > > >
> > > > > > > I believe this is the classpath from the log file (although it might
> > > > > > > not be the taskmanager log; is that any different from what would be
> > > > > > > in my flink-1.3.2/log folder?):
> > > > > > >
> > > > > > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend  -  Classpath:
> > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > > >
> > > > > > > If that doesn't seem right, can you point me in the right direction
> > > > > > > as to where the TaskManager logs would be? I would be happy to grab
> > > > > > > the information you're looking for.
> > > > > > >
> > > > > > > Thank you
> > > > > > >
> > > > > > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <
> > > trohrmann@apache.org>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Andy,
> > > > > > > >
> > > > > > > > this looks to me indeed like a dependency problem. I assume that
> > > > > > > > EMR or something else is pulling in an incompatible version of Hadoop.
> > > > > > > >
> > > > > > > > The classpath you've posted, is this the one logged in the log files
> > > > > > > > (TaskManager log) or did you compile it yourself? In the latter case,
> > > > > > > > it would also be helpful to get access to the TaskManager logs.
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Till
> > > > > > > >
> > > > > > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com>
> > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Fabian,
> > > > > > > > >
> > > > > > > > > 1) I have looked at the linked docs, and from what I can tell no
> > > > > > > > > setup should really need to be done to get Flink working (other
> > > > > > > > > than downloading the correct binaries, which I believe I did).
> > > > > > > > > 2) I have downloaded the Flink 1.3.2 binaries
> > > > > > > > > (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
> > > > > > > > > This is for Hadoop 2.7.x, which matches EMR 5.8.0.
> > > > > > > > >
> > > > > > > > > I appreciate any help or guidance you can provide in fixing my
> > > > > > > > > problems; please let me know if there is anything else I can provide.
> > > > > > > > >
> > > > > > > > > Thank you
> > > > > > > > >
> > > > > > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <
> > > fhueske@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Andy,
> > > > > > > > > >
> > > > > > > > > > I'm not an AWS expert, so I'll just check on some common
> > > > issues.
> > > > > > > > > >
> > > > > > > > > > I guess you already had a look at the Flink docs for AWS/EMR,
> > > > > > > > > > but I'll post the link just to be sure [1].
> > > > > > > > > >
> > > > > > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink
> > > > > > > > > > 1.3.1), did you build Flink yourself or did you download the
> > > > > > > > > > binaries? Does the Hadoop version of the Flink build match the
> > > > > > > > > > Hadoop version of EMR 5.8.0, i.e., Hadoop 2.7.x?
> > > > > > > > > >
> > > > > > > > > > Best, Fabian
> > > > > > > > > >
> > > > > > > > > > [1]
> > > > > > > > > > https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
> > > > > > > > > >
> > > > > > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > > > >
> > > > > > > > > > > Hi Fabian,
> > > > > > > > > > >
> > > > > > > > > > > Sorry, I just realized I forgot to include that part. The
> > > > > > > > > > > error returned is:
> > > > > > > > > > >
> > > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > > org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> > > > > > > > > > >     at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> > > > > > > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> > > > > > > > > > >     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> > > > > > > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> > > > > > > > > > >     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> > > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> > > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> > > > > > > > > > >     at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
> > > > > > > > > > >
> > > > > > > > > > > I believe it has something to do with the classpath, but I am
> > > > > > > > > > > unsure why or how to fix it. The classpath being used during
> > > > > > > > > > > the execution is:
> > > > > > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > > > > > > >
> > > > > > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar, and the
> > > > > > > > > > > addResource function does seem to be there.
> > > > > > > > > > >
> > > > > > > > > > > Thank you
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> > > > > fhueske@gmail.com
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Andy,
> > > > > > > > > > > >
> > > > > > > > > > > > can you describe in more detail what exactly isn't
> > > working?
> > > > > > > > > > > > Do you see error messages in the log files or on the
> > > > console?
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks, Fabian
> > > > > > > > > > > >
> > > > > > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <
> ajm2444@gmail.com
> > >:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hello,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I am about to deploy my first Flink projects  to
> > > > > production,
> > > > > > > but
> > > > > > > > I
> > > > > > > > > am
> > > > > > > > > > > > > running into a very big hurdle.  I am unable to
> > launch
> > > my
> > > > > > > project
> > > > > > > > > so
> > > > > > > > > > it
> > > > > > > > > > > > can
> > > > > > > > > > > > > write to an S3 bucket.  My project is running on an
> > EMR
> > > > > > > cluster,
> > > > > > > > > > where
> > > > > > > > > > > I
> > > > > > > > > > > > > have installed Flink 1.3.2.  I am using Yarn to
> > launch
> > > > the
> > > > > > > > > > application,
> > > > > > > > > > > > and
> > > > > > > > > > > > > it seems to run fine unless I am trying to enable
> > check
> > > > > > > > > > pointing(with a
> > > > > > > > > > > > S3
> > > > > > > > > > > > > target).  I am looking to use RocksDB as my
> > > > check-pointing
> > > > > > > > backend.
> > > > > > > > > > I
> > > > > > > > > > > > have
> > > > > > > > > > > > > asked a few places, and I am still unable to find a
> > > > > solution
> > > > > > to
> > > > > > > > > this
> > > > > > > > > > > > > problem.  Here are my steps for creating a cluster,
> > and
> > > > > > > launching
> > > > > > > > > my
> > > > > > > > > > > > > application, perhaps I am missing a step.  I'd be
> > happy
> > > > to
> > > > > > > > provide
> > > > > > > > > > any
> > > > > > > > > > > > > additional information if needed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > AWS Portal:
> > > > > > > > > > > > >
> > > > > > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > > > > > >     2) Advanced Options
> > > > > > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > > > > > >     5) Next -> Next -> Next -> Create Cluster ( I
> do
> > > fill
> > > > > out
> > > > > > > > > > > > > names/keys/etc)
> > > > > > > > > > > > >
> > > > > > > > > > > > > Once the cluster is up I ssh into the Master and do
> > the
> > > > > > > > following:
> > > > > > > > > > > > >
> > > > > > > > > > > > >     1  wget
> > > > > > > > > > > > > http://apache.claz.org/flink/
> > > > flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-
> > > scala_2.11.tgz
> > > > > > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1
> > > > > > > ~/flink-consumer.jar
> > > > > > > > > > > > >
> > > > > > > > > > > > > My conf/flink-conf.yaml I add the following fields:
> > > > > > > > > > > > >
> > > > > > > > > > > > >     state.backend: rocksdb
> > > > > > > > > > > > >     state.backend.fs.checkpointdir:
> > > s3:/bucket/location
> > > > > > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > > > > > > >
> > > > > > > > > > > > > My program's checkpointing setup:
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >     env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
> > > > > > > > > > > > >     env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > > > > > > > > >     env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
> > > > > > > > > > > > >     env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
> > > > > > > > > > > > >     env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> > > > > > > > > > > > >     env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by Chen Qin <qi...@gmail.com>.
Attached is my side project, verified working, which deploys the jobmanager and
taskmanager as stateless services (non-YARN/Mesos). Configuration here:

https://github.com/chenqin/flink-jar/tree/master/config/hadoop

More detail here:
https://github.com/chenqin/flink-jar/blob/master/src/main/java/FlinkBootstrap.java#L49

On Fri, Oct 6, 2017 at 10:26 PM, Bowen Li <bo...@offerupnow.com> wrote:

> Hi Andy,
>
> I believe it's because you didn't set your s3 impl correctly. Try to set
> your core-site.xml by following
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3afilesystem-recommended
>
> Bowen
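The doc section Bowen links boils down to registering the S3A filesystem in core-site.xml. A minimal sketch of that change, with property names taken from the Flink 1.4 AWS guide (verify them against the Flink/Hadoop versions you actually deploy):

```xml
<configuration>
  <!-- Route s3:// URIs through Hadoop's S3AFileSystem -->
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- S3A buffers uploads on local disk before sending them to S3 -->
  <property>
    <name>fs.s3a.buffer.dir</name>
    <value>/tmp</value>
  </property>
</configuration>
```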
>
> On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:
>
> > Hi Till,
> >
> > Seems like everything is in line there.  hadoop-common.jar ->
> > hadoop-common-2.7.3-amzn-3.jar
> >
> > And when I decompiled that jar, I see public void addResource(Configuration
> > conf) in org/apache/hadoop/conf/Configuration.java.
> >
> > I agree that an incorrect version of the jar is probably being run. Is
> > there a way to limit the classpath for the TaskManager when starting the
> > job?
> >
> > Thank you
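Rather than limiting the classpath, one way to confirm which jar actually wins is to search every Hadoop jar on the machine for the class in question. A sketch, assuming the EMR paths quoted in this thread:

```shell
# Hypothetical sketch: find which jars on an EMR master provide
# org.apache.hadoop.conf.Configuration, to spot a stale duplicate.
for jar in /usr/lib/hadoop/*.jar /usr/lib/hadoop/lib/*.jar; do
  if unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/hadoop/conf/Configuration\.class'; then
    echo "$jar"    # each jar printed here could shadow the shaded Flink uber jar
  fi
done
```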
> >
> > On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
> > wrote:
> >
> > > Hi Andy,
> > >
> > > could you check which Hadoop version this jar
> > > /usr/lib/hadoop/hadoop-common.jar is? Maybe also check whether the
> > > contained Hadoop Configuration class has the method
> > > Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V.
> > > Maybe this jar is the culprit because it comes from a different Hadoop
> > > version.
> > >
> > > Cheers,
> > > Till
> > >
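Till's check can be done without a full decompile, e.g. with javap. A sketch, assuming the jar path quoted in this thread:

```shell
# Assumption: run on the EMR master, using the jar path from this thread.
JAR=/usr/lib/hadoop/hadoop-common.jar
# Pull the suspect class out of the jar and list all of its members:
unzip -p "$JAR" org/apache/hadoop/conf/Configuration.class > /tmp/Configuration.class
javap -p /tmp/Configuration.class | grep addResource
```

A Hadoop 2.7.x hadoop-common should list, among others, `public void addResource(org.apache.hadoop.conf.Configuration)`.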
> > >
> > > On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
> > >
> > > > Hi Till,
> > > >
> > > > I believe this is what you are looking for, classpath is much bigger
> > for
> > > > the task manager.  I can also post the whole log file if needed:
> > > >
> > > > 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.YarnTaskManagerRunner  -  Classpath:
> > > > flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> > > > python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> > > > 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> > > > log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> > > > etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> > > > amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> > > > hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> > > > amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> > > > streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> > > > hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> > > > usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> > > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> > > > hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> > > > hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> > > > 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> > > > amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> > > > hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> > > > jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> > > > hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > > archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> > > > hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> > > > amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> > > > 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> > > > jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> > > > lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> > > > httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> > > > 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> > > > jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> > > > lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> > > > 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> > > > usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> > > > gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> > > > 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> > > > lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> > > > apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> > > > httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> > > > 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> > > > lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> > > > commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> > > > usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> > > > xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> > > > jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> > > > commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> > > > jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> > > > usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> > > > jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> > > > Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> > > > lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> > > > api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> > > > 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> > > > /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> > > > jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> > > > cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> > > > lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> > > > apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> > > > commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> > > > 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> > > > /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> > > > lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> > > > 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> > > > usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> > > > lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> > > > jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> > > > hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> > > > curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> > > > jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> > > > 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> > > > usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> > > > usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> > > > hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> > > > amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> > > > incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> > > > 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> > > > jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> > > > lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> > > > lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> > > > commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> > > > jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> > > > usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> > > > hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> > > > 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> > > > hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> > > > hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> > > > lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> > > > api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> > > > /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> > > > hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> > > > logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> > > > jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> > > > mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-
> > > > mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> > > > httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> > > > lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> > > > mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> > > > commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> > > > jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> > > > mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> > > > jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> > > > lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> > > > usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> > > > hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> > > > 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> > > 8.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> > > > 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> > > > jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> > > > hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> > > > hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> > > > hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> > > > commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> > > > net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> > > > usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> > > > xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> > > > 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> > > > lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> > > > mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> > > > mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> > > > mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> > > > hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> > > > hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> > > > mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> > > > lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> > > > 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> > > > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> > > > 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> > > > 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> > > > client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> > > > jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> > > > lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> > > > 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> > > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > > jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> > > > usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> > > > mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> > > > hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> > > > hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> > > > lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> > > > hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > > mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > > archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> > > > 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> > > > jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> > > > lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> > > > mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> > > > jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> > > > kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> > > > jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> > > > /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> > > > /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> > > > usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> > > > lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> > > > lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-
> mapreduce/commons-
> > > > configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> > > > 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> > > > usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> > > > usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> > > > lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> > > > hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > > hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> > > > zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> > > > 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> > > > /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> > > > usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> > > > lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> > > > lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> > > > lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> > > > hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> > > > hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> > > > hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> > > lib/hadoop-mapreduce/lib/
> > > > snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> > > > guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> > > > jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> > > > hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > > mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> > > > hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> > > > mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> > > > lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> > > > mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> > > > guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> > > > 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> > > > usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> > > > usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> > > > hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> > > > mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> > > > hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-
> yarn-server-
> > > > sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> > > > hadoop-yarn/hadoop-yarn-applications-distributedshell-
> > > > 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > hadoop-yarn-server-applicationhistoryservice.jar:
> > > > /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> > > > lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> > > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> > > > lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> > > > yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> > > > yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > > hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > > common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> > > > proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > > nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> > > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > > unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > > yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> > > > api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> > > > jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> > > > lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> > > > hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> > > > lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> > > > 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> > > > usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> > > > hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> > > > lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> > > > client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> > > > jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> > > > usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > > yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> > > > netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> > > > aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> > > > usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> > > > /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> > > > lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> > > > hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> > > > 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> > > > jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> > > > hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> > > > lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> > > > 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> > > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> > > > hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> > > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> > > > jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > > logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> > > > /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> > > > share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> > > > jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> > > > bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> > > > over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> > > > java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> > > > 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> > > > usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> > > > share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> > > > share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> > > > usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> > > > lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> > > > share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> > > > aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> > > > aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> > > > usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> > > > gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-
> > > > elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> > > > sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> > > > sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > stepfunctions-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> > > sdk/aws-java-sdk-
> > > > cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elastictranscoder-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > organizations-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> > > sdk/aws-java-sdk-
> > > > applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> > > > aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.
> > > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > > java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> > > > usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> > > > /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> > > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> > > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> > > > java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> > > > aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> > > > share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> > > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > > elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > > aws-java-sdk-iot-1.11.160.jar
> > > >
> > > > Thank you
> > > >
> > > > On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <tr...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi Andy,
> > > > >
> > > > > the CliFrontend is not executed via Yarn, thus, it is not affected
> by
> > > > > dependencies which are added due to the underlying Yarn cluster.
> > > > Therefore,
> > > > > it would be helpful to look at the TaskManager logs. Either you
> have
> > > > > enabled log aggregation on your Yarn cluster, then you can obtain
> the
> > > > logs
> > > > > via `yarn logs -applicationId <application ID>` or you have to
> > retrieve
> > > > > them from the machines where they were running (either by going
> > > directly
> > > > > there or via the Yarn web interface).
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
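Till's log-retrieval step can be sketched as a small shell helper; the application ID below is a placeholder (the real one comes from `yarn application -list` on the master):

```shell
#!/usr/bin/env bash
# Sketch, assuming YARN log aggregation is enabled; APP_ID is a placeholder
# value, not a real application ID from this thread.
APP_ID="application_1234567890123_0001"
CMD="yarn logs -applicationId ${APP_ID}"
# On the EMR master node this dumps every container's log (JobManager and
# TaskManagers) to stdout; here we only print the command to be run.
echo "${CMD}"
```

Redirecting the real command's output to a file makes it easy to grep for the TaskManager's Classpath line.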
> > > > > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
> > > > >
> > > > > > Hi Till,
> > > > > >
> > > > > > That is actually the classpath used by the flink bash script (that
> > > > > launches
> > > > > > the jar using the java command).  I changed the execute to an
> echo,
> > > and
> > > > > > grabbed that for the CLI arguments.
> > > > > >
> > > > > > I believe this is the classpath from the log file (although it
> > might
> > > > not
> > > > > be
> > > > > > the taskmanager log, is that any different from what would be in
> my
> > > > > > flink-1.3.2/log folder?):
> > > > > >
> > > > > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend  -  Classpath:
> > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > >
> > > > > > If that doesn't seem right, and you can point me in the right
> > > direction
> > > > > as
> > > > > > to where the TaskManager logs would be, I would be happy to grab
> > the
> > > > > > information you're looking for.
> > > > > >
> > > > > > Thank you
> > > > > >
> > > > > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <
> > trohrmann@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Andy,
> > > > > > >
> > > > > > > this looks to me indeed like a dependency problem. I assume
> that
> > > EMR
> > > > or
> > > > > > > something else is pulling in an incompatible version of Hadoop.
> > > > > > >
> > > > > > > The classpath you've posted, is this the one logged in the log
> > > files
> > > > > > > (TaskManager log) or did you compile it yourself? In the latter
> > > case,
> > > > > it
> > > > > > > would also be helpful to get access to the TaskManager logs.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Till
> > > > > > >
> > > > > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com>
> > > wrote:
> > > > > > >
> > > > > > > > Hi Fabian,
> > > > > > > >
> > > > > > > > 1) I have looked at the linked docs, and from what I can tell
> > no
> > > > > setup
> > > > > > > > should really need to be done to get Flink working (other than
> > > > > > downloading
> > > > > > > > the correct binaries, which I believe I did)
> > > > > > > > 2) I have downloaded the Flink 1.3.2
> binaries(flink-1.3.2-bin-
> > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > hadoop27-scala_2.11.tgz>)
> > > > > > > > This is for hadoop 2.7.X, which matches EMR 5.8.0.
> > > > > > > >
> > > > > > > > I appreciate any help or guidance you can provide me in
> fixing
> > my
> > > > > > > problems,
> > > > > > > > please let me know if there is anything else I can provide
> you.
> > > > > > > >
> > > > > > > > Thank you
> > > > > > > >
> > > > > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <
> > fhueske@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Andy,
> > > > > > > > >
> > > > > > > > > I'm not an AWS expert, so I'll just check on some common
> > > issues.
> > > > > > > > >
> > > > > > > > > I guess you already had a look at the Flink docs for
> AWS/EMR
> > > but
> > > > > I'll
> > > > > > > > post
> > > > > > > > > the link just be to sure [1].
> > > > > > > > >
> > > > > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink
> > > > 1.3.1)
> > > > > > did
> > > > > > > > you
> > > > > > > > > build Flink yourself or did you download the binaries?
> > > > > > > > > Does the Hadoop version of the Flink build match the Hadoop
> > > > version
> > > > > > of
> > > > > > > > EMR
> > > > > > > > > 5.8.0, i.e., Hadoop 2.7.x?
> > > > > > > > >
> > > > > > > > > Best, Fabian
> > > > > > > > >
> > > > > > > > > [1]
> > > > > > > > > https://ci.apache.org/projects/flink/flink-docs-
> > > > > > > > release-1.3/setup/aws.html
> > > > > > > > >
> > > > > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > > >
> > > > > > > > > > Hi Fabian,
> > > > > > > > > >
> > > > > > > > > > Sorry, I just realized I forgot to include that part.
> The
> > > > error
> > > > > > > > returned
> > > > > > > > > > is:
> > > > > > > > > >
> > > > > > > > > > java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> > > > > > > > > >     at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> > > > > > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> > > > > > > > > >     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> > > > > > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> > > > > > > > > >     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> > > > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> > > > > > > > > >     at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
> > > > > > > > > >
> > > > > > > > > > I believe it has something to do with the classpath, but
> I
> > am
> > > > > > unsure
> > > > > > > > why
> > > > > > > > > or
> > > > > > > > > > how to fix it.  The classpath being used during the
> > execution
> > > > is:
> > > > > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > > > > > >
> > > > > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar, and the
> > > > > > > > > > addResource function does seem to be there.
> > > > > > > > > >
> > > > > > > > > > Thank you
> > > > > > > > > >
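Since the JVM resolves a class from the first classpath entry that contains it, a conflicting Hadoop `Configuration` class in an EMR jar that precedes flink-shaded-hadoop2-uber would produce exactly this NoSuchMethodError. A minimal sketch of scanning entries in order (the two paths are a shortened stand-in for the full classpath posted above; the `unzip -l` check is left as a comment because the jars only exist on the cluster):

```shell
#!/usr/bin/env bash
# Sketch: classpath entries are searched first-match, so list them in order.
# CP is a two-entry stand-in for the full classpath from this thread.
CP="/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar"
IFS=':' read -ra ENTRIES <<< "$CP"
for e in "${ENTRIES[@]}"; do
  # On the cluster, check which entry actually contains the class:
  #   unzip -l "$e" | grep -q 'org/apache/hadoop/conf/Configuration.class' && echo "$e"
  echo "would check: $e"
done
```

Whichever jar appears first and contains org/apache/hadoop/conf/Configuration.class determines the addResource signature the JVM sees.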
> > > > > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> > > > fhueske@gmail.com
> > > > > >
> > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Andy,
> > > > > > > > > > >
> > > > > > > > > > > can you describe in more detail what exactly isn't
> > working?
> > > > > > > > > > > Do you see error messages in the log files or on the
> > > console?
> > > > > > > > > > >
> > > > > > > > > > > Thanks, Fabian
> > > > > > > > > > >
> > > > > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <ajm2444@gmail.com
> >:
> > > > > > > > > > >
> > > > > > > > > > > > Hello,
> > > > > > > > > > > >
> > > > > > > > > > > > I am about to deploy my first Flink project to
> > > > production,
> > > > > > but
> > > > > > > I
> > > > > > > > am
> > > > > > > > > > > > running into a very big hurdle.  I am unable to
> launch
> > my
> > > > > > project
> > > > > > > > so
> > > > > > > > > it
> > > > > > > > > > > can
> > > > > > > > > > > > write to an S3 bucket.  My project is running on an
> EMR
> > > > > > cluster,
> > > > > > > > > where
> > > > > > > > > > I
> > > > > > > > > > > > have installed Flink 1.3.2.  I am using Yarn to
> launch
> > > the
> > > > > > > > > application,
> > > > > > > > > > > and
> > > > > > > > > > > > it seems to run fine unless I am trying to enable
> > > > > > > > > > > > checkpointing (with an S3
> > > > > > > > > > > > target).  I am looking to use RocksDB as my
> > > > > > > > > > > > checkpointing backend.
> > > > > > > > > I
> > > > > > > > > > > have
> > > > > > > > > > > > asked a few places, and I am still unable to find a
> > > > solution
> > > > > to
> > > > > > > > this
> > > > > > > > > > > > problem.  Here are my steps for creating a cluster,
> and
> > > > > > launching
> > > > > > > > my
> > > > > > > > > > > > application, perhaps I am missing a step.  I'd be
> happy
> > > to
> > > > > > > provide
> > > > > > > > > any
> > > > > > > > > > > > additional information if needed.
> > > > > > > > > > > >
> > > > > > > > > > > > AWS Portal:
> > > > > > > > > > > >
> > > > > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > > > > >     2) Advanced Options
> > > > > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > > > > >     5) Next -> Next -> Next -> Create Cluster ( I do
> > fill
> > > > out
> > > > > > > > > > > > names/keys/etc)
> > > > > > > > > > > >
> > > > > > > > > > > > Once the cluster is up I ssh into the Master and do
> the
> > > > > > > following:
> > > > > > > > > > > >
> > > > > > > > > > > >     1  wget
> > > > > > > > > > > > http://apache.claz.org/flink/
> > > flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-
> > scala_2.11.tgz
> > > > > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1
> > > > > > ~/flink-consumer.jar
> > > > > > > > > > > >
> > > > > > > > > > > > My conf/flink-conf.yaml I add the following fields:
> > > > > > > > > > > >
> > > > > > > > > > > >     state.backend: rocksdb
> > > > > > > > > > > >     state.backend.fs.checkpointdir:
> > s3:/bucket/location
> > > > > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > > > > > >
> > > > > > > > > > > > My program's checkpointing setup:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > env.enableCheckpointing(getCheckpointRate,
> > > > > > > > CheckpointingMode.EXACTLY_
> > > > > > > > > > > ONCE)
> > > > > > > > > > > >
> > > > > > > > > > > > env.getCheckpointConfig.
> enableExternalizedCheckpoints(
> > > > > > > > > > > > ExternalizedCheckpointCleanup.
> RETAIN_ON_CANCELLATION)
> > > > > > > > > > > >
> > > > > > > > > > > > env.getCheckpointConfig.
> setMinPauseBetweenCheckpoints(
> > > > > > > > > > > > getCheckpointMinPause)
> > > > > > > > > > > >     env.getCheckpointConfig.setCheckpointTimeout(
> > > > > > > > > getCheckpointTimeout)
> > > > > > > > > > > >     env.getCheckpointConfig.
> > > setMaxConcurrentCheckpoints(1)
> > > > > > > > > > > >     env.setStateBackend(new
> RocksDBStateBackend("s3://
> > > > > > > > > > bucket/location",
> > > > > > > > > > > > true))
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by Bowen Li <bo...@offerupnow.com>.
Hi Andy,

I believe it's because you didn't set your S3 filesystem implementation
correctly. Try setting it in your core-site.xml by following
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#s3afilesystem-recommended

Bowen
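[Editor's note: the S3AFileSystem setup on the linked page amounts to a core-site.xml entry along these lines. This is a sketch only; verify the exact property names against the linked docs for your Flink/Hadoop versions, and note that fs.s3a.buffer.dir is just the usual local buffer setting, not something specified in this thread.]

```xml
<configuration>
  <!-- Route s3:// URIs to Hadoop's S3AFileSystem implementation -->
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Local directory where s3a buffers blocks before uploading to S3 -->
  <property>
    <name>fs.s3a.buffer.dir</name>
    <value>/tmp</value>
  </property>
</configuration>
```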

On Fri, Oct 6, 2017 at 7:59 AM, Andy M. <aj...@gmail.com> wrote:

> Hi Till,
>
> Seems like everything is in line there: hadoop-common.jar ->
> hadoop-common-2.7.3-amzn-3.jar
>
> And when I decompiled that jar I see public void addResource(Configuration
> conf) in org/apache/hadoop/conf/Configuration.java.
>
> I agree that an incorrect version of the jar is probably being run. Is
> there a way to limit the classpath for the TaskManager when starting the
> job?
>
> Thank you
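[Editor's note: decompiling works; a lighter-weight way to confirm which Hadoop version a given jar actually is would be to read its manifest. A minimal sketch; the jar path in the comment is the EMR default discussed above, and the Implementation-Version manifest attribute is the usual jar convention, not something confirmed in this thread.]

```python
import zipfile

def jar_implementation_version(jar_path):
    """Return the Implementation-Version declared in a jar's
    META-INF/MANIFEST.MF, or None if the manifest does not declare one."""
    with zipfile.ZipFile(jar_path) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    for line in manifest.splitlines():
        if line.startswith("Implementation-Version:"):
            return line.split(":", 1)[1].strip()
    return None

# e.g. jar_implementation_version("/usr/lib/hadoop/hadoop-common.jar")
```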
>
> On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org>
> wrote:
>
> > Hi Andy,
> >
> > could you check which Hadoop version this jar
> > /usr/lib/hadoop/hadoop-common.jar is? Maybe also check whether the
> > contained Hadoop Configuration class has the method
> > Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V.
> > Maybe this jar is the culprit because it comes from a different Hadoop
> > version.
> >
> > Cheers,
> > Till
> >
> > On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
> >
> > > Hi Till,
> > >
> > > I believe this is what you are looking for, classpath is much bigger
> for
> > > the task manager.  I can also post the whole log file if needed:
> > >
> > > 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
> > YarnTaskManagerRunner
> > >                  -  Classpath:
> > > flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> > > python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> > > 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> > > log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> > > etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> > > amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> > > hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> > > amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> > > streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> > > hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> > > usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> > > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> > > hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> > > hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> > > 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> > > amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> > > hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> > > jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> > > hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > > archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> > > hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> > > amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> > > 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> > > jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> > > lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> > > httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> > > 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> > > jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> > > lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> > > 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> > > usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> > > gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> > > 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> > > lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> > > apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> > > httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> > > 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> > > lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> > > commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> > > usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> > > xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> > > jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> > > commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> > > jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> > > usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> > > jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> > > Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> > > lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> > > api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> > > 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> > > /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> > > jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> > > cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> > > lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> > > apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> > > commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> > > 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> > > /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> > > lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> > > 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> > > usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> > > lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> > > jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> > > hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> > > curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> > > jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> > > 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> > > usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> > > usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> > > hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> > > amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> > > incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> > > 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> > > jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> > > lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> > > lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> > > commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> > > jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> > > usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> > > hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> > > netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> > > 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> > > hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> > > lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> > > hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> > > lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> > > api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> > > /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> > > hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> > > logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> > > jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> > > mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-
> > > mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> > > httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> > > lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> > > mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> > > commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> > > jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> > > mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> > > jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> > > lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> > > usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> > > hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> > > 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> > 8.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> > > 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> > > jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> > > hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> > > hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> > > hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> > > commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> > > net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> > > usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> > > xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> > > 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> > > lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> > > mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> > > mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> > > mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> > > hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> > > hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> > > mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> > > lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> > > 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> > > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> > > 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> > > 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> > > client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> > > jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> > > lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> > > 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> > > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > > jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> > > usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> > > mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> > > hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> > > hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> > > lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> > > hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > > mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > > archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> > > 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> > > jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> > > lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> > > mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> > > jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> > > kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> > > jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> > > /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> > > /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> > > usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> > > lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> > > lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-mapreduce/commons-
> > > configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> > > 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> > > usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> > > usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> > > lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> > > hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> > > hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> > > zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> > > 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> > > /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> > > usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> > > lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> > > lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> > > lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> > > hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> > > hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> > > hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> > lib/hadoop-mapreduce/lib/
> > > snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> > > jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> > > paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> > > guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> > > jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> > > hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> > > mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> > > hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> > > mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> > > lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> > > mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> > > guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> > > 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> > > usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> > > usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> > > hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> > > mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> > > hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> > > hadoop-yarn/hadoop-yarn-applications-distributedshell-
> > > 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > hadoop-yarn-server-applicationhistoryservice.jar:
> > > /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> > > lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> > > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> > > lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> > > yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> > > yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > > hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > > common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> > > proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > > nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> > > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > > unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > > yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> > > api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> > > jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> > > lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> > > hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> > > lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> > > 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> > > usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> > > hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> > > lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> > > client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> > > jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> > > usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > > yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> > > netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> > > aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> > > usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> > > /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> > > lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> > > hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> > > 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> > > jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> > > hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> > > lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> > > 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> > > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> > > hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> > > lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> > > jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> > > logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> > > /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> > > share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> > > jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> > > bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> > > over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> > > java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> > > 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> > > usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> > > share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> > > share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> > > usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> > > lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> > > share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> > > aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> > > aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> > > usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> > > gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-
> > > elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> > > sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> > > sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > stepfunctions-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> > sdk/aws-java-sdk-
> > > cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elastictranscoder-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > organizations-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> > sdk/aws-java-sdk-
> > > applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> > > aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> > > /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> > > /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.
> > > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > > java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> > > usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> > > /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> > > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> > > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> > > java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> > > aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> > > share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> > > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > > elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > > aws-java-sdk-iot-1.11.160.jar
> > >
> > > Thank you
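[Editor's note: a classpath this long is hard to audit by eye. One way to spot Hadoop artifacts that appear in multiple, possibly conflicting, versions is to scan the ':'-separated string. A sketch, assuming the usual name-version.jar naming convention seen above; the helper name is hypothetical.]

```python
import re

def hadoop_jar_versions(classpath):
    """Map each hadoop-* artifact on a ':'-separated classpath to the set of
    version strings it appears with ('unversioned' for symlink-style jars)."""
    versions = {}
    for entry in classpath.split(":"):
        jar = entry.rsplit("/", 1)[-1]
        m = re.match(r"(hadoop-[a-z-]+?)-(\d[\w.-]*)\.jar$", jar)
        if m:
            versions.setdefault(m.group(1), set()).add(m.group(2))
        elif jar.startswith("hadoop-") and jar.endswith(".jar"):
            versions.setdefault(jar[:-4], set()).add("unversioned")
    return versions

# Artifacts mapped to more than one concrete version are the conflict suspects.
```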
> > >
> > > On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <tr...@apache.org>
> > > wrote:
> > >
> > > > Hi Andy,
> > > >
> > > > the CliFrontend is not executed via Yarn; thus, it is not affected by
> > > > dependencies which are added by the underlying Yarn cluster. Therefore,
> > > > it would be helpful to look at the TaskManager logs. Either you have
> > > > enabled log aggregation on your Yarn cluster, in which case you can
> > > > obtain the logs via `yarn logs -applicationId <application ID>`, or
> > > > you have to retrieve them from the machines where they were running
> > > > (either by going there directly or via the Yarn web interface).
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
> > > >
> > > > > Hi Till,
> > > > >
> > > > > That is actually the classpath used by the Flink bash script (which
> > > > > launches the jar using the java command). I changed the exec to an
> > > > > echo, and grabbed the classpath from the printed CLI arguments.
> > > > >
> > > > > I believe this is the classpath from the log file (although it
> > > > > might not be the TaskManager log; is that any different from what
> > > > > would be in my flink-1.3.2/log folder?):
> > > > >
> > > > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend
> > > > >                  -  Classpath:
> > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> > > > > 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> > > > > hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> > > > > hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/
> hadoop/conf:
> > > > >
> > > > > If that doesn't seem right and you can point me in the right
> > > > > direction as to where the TaskManager logs would be, I would be
> > > > > happy to grab the information you're looking for.
> > > > >
> > > > > Thank you
> > > > >
> > > > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <
> trohrmann@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Hi Andy,
> > > > > >
> > > > > > this looks to me indeed like a dependency problem. I assume that
> > EMR
> > > or
> > > > > > something else is pulling in an incompatible version of Hadoop.
> > > > > >
> > > > > > Is the classpath you've posted the one logged in the log files
> > > > > > (TaskManager log), or did you compile it yourself? In the latter
> > > > > > case, it would also be helpful to get access to the TaskManager
> > > > > > logs.
> > > > > >
> > > > > > Cheers,
> > > > > > Till
> > > > > >
> > > > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com>
> > wrote:
> > > > > >
> > > > > > > Hi Fabian,
> > > > > > >
> > > > > > > 1) I have looked at the linked docs, and from what I can tell
> > > > > > > no setup should really need to be done to get Flink working
> > > > > > > (other than downloading the correct binaries, which I believe
> > > > > > > I did)
> > > > > > > 2) I have downloaded the Flink 1.3.2 binaries
> > > > > > > (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-
> > > > > > > hadoop27-scala_2.11.tgz>).
> > > > > > > This is for Hadoop 2.7.x, which matches EMR 5.8.0.
> > > > > > >
> > > > > > > I appreciate any help or guidance you can provide me in fixing
> my
> > > > > > problems,
> > > > > > > please let me know if there is anything else I can provide you.
> > > > > > >
> > > > > > > Thank you
> > > > > > >
> > > > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <
> fhueske@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi Andy,
> > > > > > > >
> > > > > > > > I'm not an AWS expert, so I'll just check on some common
> > issues.
> > > > > > > >
> > > > > > > > I guess you already had a look at the Flink docs for AWS/EMR,
> > > > > > > > but I'll post the link just to be sure [1].
> > > > > > > >
> > > > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink
> > > > > > > > 1.3.1), did you build Flink yourself or did you download the
> > > > > > > > binaries? Does the Hadoop version of the Flink build match
> > > > > > > > the Hadoop version of EMR 5.8.0, i.e., Hadoop 2.7.x?
> > > > > > > >
> > > > > > > > Best, Fabian
> > > > > > > >
> > > > > > > > [1]
> > > > > > > > https://ci.apache.org/projects/flink/flink-docs-
> > > > > > > release-1.3/setup/aws.html
> > > > > > > >
> > > > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > >
> > > > > > > > > Hi Fabian,
> > > > > > > > >
> > > > > > > > > Sorry, I just realized I forgot to include that part.  The
> > > error
> > > > > > > returned
> > > > > > > > > is:
> > > > > > > > >
> > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > org.apache.hadoop.conf.Configuration.addResource(
> > > > > > > > Lorg/apache/hadoop/conf/
> > > > > > > > > Configuration;)V
> > > > > > > > >     at com.amazon.ws.emr.hadoop.fs.
> EmrFileSystem.initialize(
> > > > > > > > > EmrFileSystem.java:93)
> > > > > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.
> > > > > > > > > initialize(HadoopFileSystem.java:328)
> > > > > > > > >     at org.apache.flink.core.fs.FileSystem.
> > > > getUnguardedFileSystem(
> > > > > > > > > FileSystem.java:350)
> > > > > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.
> > > > > java:389)
> > > > > > > > >     at org.apache.flink.core.fs.Path.
> > > > getFileSystem(Path.java:293)
> > > > > > > > >     at org.apache.flink.runtime.state.filesystem.
> > > > > > > > > FsCheckpointStreamFactory.<init>(
> FsCheckpointStreamFactory.
> > > > > java:99)
> > > > > > > > >     at org.apache.flink.runtime.state.filesystem.
> > > FsStateBackend.
> > > > > > > > > createStreamFactory(FsStateBackend.java:282)
> > > > > > > > >     at org.apache.flink.contrib.streaming.state.
> > > > > RocksDBStateBackend.
> > > > > > > > > createStreamFactory(RocksDBStateBackend.java:273
> > > > > > > > >
> > > > > > > > > I believe it has something to do with the classpath, but I
> am
> > > > > unsure
> > > > > > > why
> > > > > > > > or
> > > > > > > > > how to fix it.  The classpath being used during the
> execution
> > > is:
> > > > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > > > > > > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> > > > > > > > > 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> > > > > > > > > hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> > > > > > > > > hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/
> > > > > > > > > hadoop/conf:
> > > > > > > > >
> > > > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
> > > > > > > > > addResource function does seem to be there.
> > > > > > > > >
> > > > > > > > > Thank you
> > > > > > > > >
> > > > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> > > fhueske@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Andy,
> > > > > > > > > >
> > > > > > > > > > can you describe in more detail what exactly isn't
> working?
> > > > > > > > > > Do you see error messages in the log files or on the
> > console?
> > > > > > > > > >
> > > > > > > > > > Thanks, Fabian
> > > > > > > > > >
> > > > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > > > >
> > > > > > > > > > > Hello,
> > > > > > > > > > >
> > > > > > > > > > > I am about to deploy my first Flink projects  to
> > > production,
> > > > > but
> > > > > > I
> > > > > > > am
> > > > > > > > > > > running into a very big hurdle.  I am unable to launch
> my
> > > > > project
> > > > > > > so
> > > > > > > > it
> > > > > > > > > > can
> > > > > > > > > > > write to an S3 bucket.  My project is running on an EMR
> > > > > cluster,
> > > > > > > > where
> > > > > > > > > I
> > > > > > > > > > > have installed Flink 1.3.2.  I am using Yarn to launch
> > the
> > > > > > > > application,
> > > > > > > > > > and
> > > > > > > > > > > it seems to run fine unless I am trying to enable check
> > > > > > > > pointing(with a
> > > > > > > > > > S3
> > > > > > > > > > > target).  I am looking to use RocksDB as my
> > check-pointing
> > > > > > backend.
> > > > > > > > I
> > > > > > > > > > have
> > > > > > > > > > > asked a few places, and I am still unable to find a
> > > solution
> > > > to
> > > > > > > this
> > > > > > > > > > > problem.  Here are my steps for creating a cluster, and
> > > > > launching
> > > > > > > my
> > > > > > > > > > > application, perhaps I am missing a step.  I'd be happy
> > to
> > > > > > provide
> > > > > > > > any
> > > > > > > > > > > additional information if needed.
> > > > > > > > > > >
> > > > > > > > > > > AWS Portal:
> > > > > > > > > > >
> > > > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > > > >     2) Advanced Options
> > > > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > > > >     5) Next -> Next -> Next -> Create Cluster ( I do
> fill
> > > out
> > > > > > > > > > > names/keys/etc)
> > > > > > > > > > >
> > > > > > > > > > > Once the cluster is up I ssh into the Master and do the
> > > > > > following:
> > > > > > > > > > >
> > > > > > > > > > >     1  wget
> > > > > > > > > > > http://apache.claz.org/flink/
> > flink-1.3.2/flink-1.3.2-bin-
> > > > > > > > > > > hadoop27-scala_2.11.tgz
> > > > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-
> scala_2.11.tgz
> > > > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1
> > > > > ~/flink-consumer.jar
> > > > > > > > > > >
> > > > > > > > > > > My conf/flink-conf.yaml I add the following fields:
> > > > > > > > > > >
> > > > > > > > > > >     state.backend: rocksdb
> > > > > > > > > > >     state.backend.fs.checkpointdir:
> s3:/bucket/location
> > > > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > > > > >
> > > > > > > > > > > My program's checkpointing setup:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > env.enableCheckpointing(getCheckpointRate,
> > > > > > > CheckpointingMode.EXACTLY_
> > > > > > > > > > ONCE)
> > > > > > > > > > >
> > > > > > > > > > > env.getCheckpointConfig.enableExternalizedCheckpoints(
> > > > > > > > > > > ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > > > > > > >
> > > > > > > > > > > env.getCheckpointConfig.setMinPauseBetweenCheckpoints(
> > > > > > > > > > > getCheckpointMinPause)
> > > > > > > > > > >     env.getCheckpointConfig.setCheckpointTimeout(
> > > > > > > > getCheckpointTimeout)
> > > > > > > > > > >     env.getCheckpointConfig.
> > setMaxConcurrentCheckpoints(1)
> > > > > > > > > > >     env.setStateBackend(new RocksDBStateBackend("s3://
> > > > > > > > > bucket/location",
> > > > > > > > > > > true))
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hi Till,

Seems like everything is in line there.  hadoop-common.jar ->
hadoop-common-2.7.3-amzn-3.jar

And when I decompiled that jar I see public void addResource(Configuration
conf) in org/apache/hadoop/conf/Configuration.java

I agree that an incorrect version of the jar is probably being loaded; is
there a way to limit the classpath for the TaskManager when starting the
job?

Thank you
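
One way to check which classpath entry actually supplies
org.apache.hadoop.conf.Configuration is to scan the colon-separated
classpath directly. A minimal POSIX-shell sketch (find_class is a
hypothetical helper, not from the thread; the paths in the usage comment
are illustrative, and on EMR you would pass the TaskManager's logged
classpath):

```shell
# List which classpath entries (jars or directories) contain a given class
# file. The first match on the classpath is the one the JVM actually loads.
find_class() {
  class_file="$1"
  printf '%s\n' "$2" | tr ':' '\n' | while IFS= read -r entry; do
    if [ -d "$entry" ] && [ -f "$entry/$class_file" ]; then
      # Directory entry: the class file sits directly under it.
      printf '%s\n' "$entry"
    elif [ -f "$entry" ] && unzip -l "$entry" 2>/dev/null | grep -q "$class_file"; then
      # Jar entry: list the archive contents and look for the class.
      printf '%s\n' "$entry"
    fi
  done
}

# Illustrative usage (EMR default paths; substitute the logged classpath):
# find_class org/apache/hadoop/conf/Configuration.class \
#   "/usr/lib/hadoop/hadoop-common.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar"
```

Any entry printed before flink-shaded-hadoop2-uber-1.3.2.jar can shadow the
shaded Hadoop classes, which would explain a NoSuchMethodError like the one
above.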

On Fri, Oct 6, 2017 at 6:49 AM, Till Rohrmann <tr...@apache.org> wrote:

> Hi Andy,
>
> could you check which Hadoop version this jar
> /usr/lib/hadoop/hadoop-common.jar is? Maybe also check whether the
> contained Hadoop Configuration class has the method
> Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V. Maybe
> this jar is the culprit because it comes from a different Hadoop version.
>
> Cheers,
> Till
>
>
> On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:
>
> > Hi Till,
> >
> > I believe this is what you are looking for, classpath is much bigger for
> > the task manager.  I can also post the whole log file if needed:
> >
> > 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.
> YarnTaskManagerRunner
> >                  -  Classpath:
> > flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> > python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> > 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> > log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> > etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> > amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> > hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> > amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> > streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> > hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> > usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> > 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> > hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> > hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> > 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> > amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> > hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> > jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> > hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> > archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> > hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> > amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> > 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> > jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> > lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> > httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> > 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> > jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> > lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> > 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> > usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> > gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> > 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> > lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> > apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> > httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> > 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> > lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> > commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> > usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> > xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> > jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> > commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> > jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> > usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> > jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> > Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> > lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> > api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> > 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> > /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> > jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> > cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> > lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> > apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> > commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> > 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> > /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> > lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> > 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> > usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> > lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> > jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> > hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> > curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> > jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> > 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> > usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> > usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> > hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> > amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> > incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> > 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> > jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> > lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> > lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> > commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> > jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> > usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> > hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> > netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> > 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> > hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> > lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> > hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> > lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> > api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> > /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> > hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> > lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> > logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> > jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> > mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/
> hadoop-mapreduce/hadoop-
> > mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> > httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> > lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> > mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> > commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> > jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> > mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> > jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> > lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> > usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> > hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> > 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.
> 8.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> > 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> > jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> > hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> > hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> > hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> > commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> > net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> > usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> > xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> > 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> > lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> > mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> > mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> > mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> > hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> > hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> > mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> > lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> > mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> > 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> > 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> > 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> > client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> > jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> > lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> > 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> > usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> > jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> > usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> > mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> > hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> > hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> > lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> > hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> > mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> > archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> > 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> > jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> > lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> > mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> > jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> > kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> > jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> > /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> > /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> > usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> > lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> > lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-mapreduce/commons-
> > configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> > 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> > usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> > usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> > lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> > hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> > hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> > zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> > 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> > /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> > usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> > lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> > lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> > lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> > hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> > hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> > hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/
> lib/hadoop-mapreduce/lib/
> > snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> > jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> > paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> > guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> > jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> > hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> > mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> > hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> > mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> > lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> > mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> > guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> > 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> > usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> > usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> > hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> > mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> > hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> > hadoop-yarn/hadoop-yarn-applications-distributedshell-
> > 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > hadoop-yarn-server-applicationhistoryservice.jar:
> > /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> > lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> > 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> > lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> > yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> > yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> > hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> > common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> > proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> > nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> > jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> > unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> > yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> > api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> > jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> > lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> > hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> > lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> > 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> > usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> > hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> > lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> > client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> > jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> > usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> > yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> > netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> > aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> > usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> > /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> > lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> > hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> > 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> > jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> > hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> > lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> > collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> > 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> > usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> > hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> > lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> > jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> > logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> > /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> > share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> > jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> > bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> > over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> > java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> > 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> > usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> > share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> > share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> > usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> > lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> > share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> > aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> > aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> > usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> > gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> > sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> > sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> stepfunctions-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-
> sdk/aws-java-sdk-
> > cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elastictranscoder-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> organizations-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-
> sdk/aws-java-sdk-
> > applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> > aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> > /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> > /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.
> > 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> > java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> > usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> > /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> > jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> > 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> > java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> > aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> > share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> > 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> > elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> > aws-java-sdk-iot-1.11.160.jar
> >
> > Thank you
> >
> > On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <tr...@apache.org>
> > wrote:
> >
> > > Hi Andy,
> > >
> > > the CliFrontend is not executed via Yarn, thus, it is not affected by
> > > dependencies which are added due to the underlying Yarn cluster. Therefore,
> > > it would be helpful to look at the TaskManager logs. Either you have
> > > enabled log aggregation on your Yarn cluster, in which case you can obtain
> > > the logs via `yarn logs -applicationId <application ID>`, or you have to
> > > retrieve them from the machines where they were running (either by going
> > > directly there or via the Yarn web interface).
> > >
> > > Cheers,
> > > Till
> > >
> > > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
> > >
> > > > Hi Till,
> > > >
> > > > That is actually the classpath used by the flink bash script (that
> > > > launches the jar using the java command).  I changed the exec call to an
> > > > echo, and grabbed that for the CLI arguments.
> > > >
> > > > I believe this is the classpath from the log file (although it might not
> > > > be the TaskManager log; is that any different from what would be in my
> > > > flink-1.3.2/log folder?):
> > > >
> > > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend
> > > >                  -  Classpath:
> > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> > > > 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> > > > hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> > > > hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > >
> > > > If that doesn't seem right, and you can point me in the right direction
> > > > as to where the TaskManager logs would be, I would be happy to grab the
> > > > information you're looking for.
> > > >
> > > > Thank you
> > > >
> > > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <tr...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi Andy,
> > > > >
> > > > > this looks to me indeed like a dependency problem. I assume that EMR or
> > > > > something else is pulling in an incompatible version of Hadoop.
> > > > >
> > > > > The classpath you've posted, is this the one logged in the log files
> > > > > (TaskManager log) or did you compile it yourself? In the latter case, it
> > > > > would also be helpful to get access to the TaskManager logs.
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com>
> wrote:
> > > > >
> > > > > > Hi Fabian,
> > > > > >
> > > > > > 1) I have looked at the linked docs, and from what I can tell no setup
> > > > > > should really need to be done to get Flink working (other than
> > > > > > downloading the correct binaries, which I believe I did)
> > > > > > 2) I have downloaded the Flink 1.3.2 binaries
> > > > > > (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>)
> > > > > > This is for Hadoop 2.7.x, which matches EMR 5.8.0.
> > > > > >
> > > > > > I appreciate any help or guidance you can provide me in fixing my
> > > > > > problems; please let me know if there is anything else I can provide you.
> > > > > >
> > > > > > Thank you
> > > > > >
> > > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <fhueske@gmail.com
> >
> > > > wrote:
> > > > > >
> > > > > > > Hi Andy,
> > > > > > >
> > > > > > > I'm not an AWS expert, so I'll just check on some common issues.
> > > > > > >
> > > > > > > I guess you already had a look at the Flink docs for AWS/EMR but I'll
> > > > > > > post the link just to be sure [1].
> > > > > > >
> > > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1),
> > > > > > > did you build Flink yourself or did you download the binaries?
> > > > > > > Does the Hadoop version of the Flink build match the Hadoop version
> > > > > > > of EMR 5.8.0, i.e., Hadoop 2.7.x?
> > > > > > >
> > > > > > > Best, Fabian
> > > > > > >
> > > > > > > [1]
> > > > > > > https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
> > > > > > >
> > > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > >
> > > > > > > > Hi Fabian,
> > > > > > > >
> > > > > > > > Sorry, I just realized I forgot to include that part.  The error
> > > > > > > > returned is:
> > > > > > > >
> > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> > > > > > > >     at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> > > > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> > > > > > > >     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> > > > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> > > > > > > >     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> > > > > > > >     at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> > > > > > > >     at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
> > > > > > > >
> > > > > > > > I believe it has something to do with the classpath, but I am unsure
> > > > > > > > why or how to fix it.  The classpath being used during the execution is:
> > > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > > > > > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-
> > > > > > > > uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.
> > > > > > > > 17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.
> > > > > > > > jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.
> > > > > > > > 2.jar::/etc/hadoop/conf:
> > > > > > > >
> > > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
> > > > > > > > addResource function does seem to be there.
> > > > > > > >
> > > > > > > > Thank you
> > > > > > > >
> > > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> > fhueske@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Andy,
> > > > > > > > >
> > > > > > > > > can you describe in more detail what exactly isn't working?
> > > > > > > > > Do you see error messages in the log files or on the console?
> > > > > > > > >
> > > > > > > > > Thanks, Fabian
> > > > > > > > >
> > > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > > >
> > > > > > > > > > Hello,
> > > > > > > > > >
> > > > > > > > > > I am about to deploy my first Flink projects to production, but I am
> > > > > > > > > > running into a very big hurdle.  I am unable to launch my project so it can
> > > > > > > > > > write to an S3 bucket.  My project is running on an EMR cluster, where I
> > > > > > > > > > have installed Flink 1.3.2.  I am using Yarn to launch the application, and
> > > > > > > > > > it seems to run fine unless I am trying to enable checkpointing (with an S3
> > > > > > > > > > target).  I am looking to use RocksDB as my checkpointing backend.  I have
> > > > > > > > > > asked a few places, and I am still unable to find a solution to this
> > > > > > > > > > problem.  Here are my steps for creating a cluster, and launching my
> > > > > > > > > > application, perhaps I am missing a step.  I'd be happy to provide any
> > > > > > > > > > additional information if needed.
> > > > > > > > > >
> > > > > > > > > > AWS Portal:
> > > > > > > > > >
> > > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > > >     2) Advanced Options
> > > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > > >     5) Next -> Next -> Next -> Create Cluster (I do fill out
> > > > > > > > > > names/keys/etc)
> > > > > > > > > >
> > > > > > > > > > Once the cluster is up I ssh into the Master and do the following:
> > > > > > > > > >
> > > > > > > > > >     1  wget
> > > > > > > > > > http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
> > > > > > > > > >
> > > > > > > > > > My conf/flink-conf.yaml I add the following fields:
> > > > > > > > > >
> > > > > > > > > >     state.backend: rocksdb
> > > > > > > > > >     state.backend.fs.checkpointdir: s3:/bucket/location
> > > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > > > >
> > > > > > > > > > My program's checkpointing setup:
> > > > > > > > > >
> > > > > > > > > > env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
> > > > > > > > > >
> > > > > > > > > > env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > > > > > >
> > > > > > > > > > env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
> > > > > > > > > >     env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
> > > > > > > > > >     env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> > > > > > > > > >     env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by Till Rohrmann <tr...@apache.org>.
Hi Andy,

could you check which Hadoop version the jar
/usr/lib/hadoop/hadoop-common.jar comes from? Also check whether the
Configuration class it contains has the method
Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V. Maybe
this jar is the culprit because it comes from a different Hadoop version.

Cheers,
Till
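[Editor's sketch] The check described above can be expressed as a couple of shell commands. This is a hedged illustration, not part of the original thread: the `unzip`/`javap` lines shown in comments are what one would run on the EMR master against /usr/lib/hadoop/hadoop-common.jar, while the runnable part below works on a stand-in properties file (hadoop-common jars ship a common-version-info.properties whose "version" key identifies the build; the amzn version string here is only an example).

```shell
# On the EMR master one would read the version straight from the jar:
#   unzip -p /usr/lib/hadoop/hadoop-common.jar common-version-info.properties
# and list the addResource overloads with:
#   javap -classpath /usr/lib/hadoop/hadoop-common.jar \
#       org.apache.hadoop.conf.Configuration | grep addResource
# Stand-in properties file so this snippet runs anywhere:
printf 'version=2.7.3-amzn-3\nrevision=unknown\n' > /tmp/common-version-info.properties
grep '^version=' /tmp/common-version-info.properties
```

If the version printed here differs from the Hadoop version the Flink binary was built against, that mismatch would explain the NoSuchMethodError.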

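[Editor's sketch] Another quick triage step, under stated assumptions: split the TaskManager classpath on ':' and list every hadoop-common entry. The CP value below is a shortened, hypothetical stand-in for the real classpath quoted in the reply that follows; seeing the platform's own hadoop-common jars alongside Flink's shaded Hadoop uber jar is exactly the kind of duplication that can produce this NoSuchMethodError.

```shell
# List every hadoop-common jar on a classpath; more than one distinct copy
# usually means two Hadoop versions are competing on the same JVM classpath.
CP='lib/flink-shaded-hadoop2-uber-1.3.2.jar:/usr/lib/hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-common.jar:/etc/hadoop/conf'
echo "$CP" | tr ':' '\n' | grep 'hadoop-common'
```

On a real cluster one would set CP from the "Classpath:" line of the TaskManager log instead of a literal string.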
On Thu, Oct 5, 2017 at 4:22 PM, Andy M. <aj...@gmail.com> wrote:

> Hi Till,
>
> I believe this is what you are looking for; the classpath is much bigger for
> the TaskManager.  I can also post the whole log file if needed:
>
> 2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.YarnTaskManagerRunner
>                  -  Classpath:
> flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-
> python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.
> 2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:
> log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/
> etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-
> amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.
> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/
> hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-
> amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/
> usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/
> hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-
> streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/
> hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/
> usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.
> 3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/
> hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/
> hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.
> 7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-
> amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/
> hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.
> jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/
> hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-
> archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/
> hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-
> amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/
> usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/
> hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.
> 2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.
> jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/
> lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/
> httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.
> 1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.
> jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/
> lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-
> 1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/
> usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/
> gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.
> 8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/
> lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/
> apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-
> httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.
> 1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/
> lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/
> commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/
> usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/
> xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.
> jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/
> commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.
> jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/
> usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/
> jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.
> Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/
> lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-
> api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-
> 1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:
> /usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/
> jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-
> cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/
> lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/
> apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/
> commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.
> 11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:
> /usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/
> lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.
> 10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/
> usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/
> lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.
> jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/
> hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/
> curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-
> jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.
> 4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/
> usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/
> usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/
> hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-
> amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-
> incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-
> 2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.
> jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/
> lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/
> lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/
> commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.
> jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/
> usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/
> hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/
> netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-
> 1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/
> hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/
> lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/
> hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/
> lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-
> api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:
> /usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/
> hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/
> lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-
> logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.
> jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/
> hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-
> mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/
> hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/
> hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/
> httpclient-4.5.3.jar:/usr/lib/hadoop-mapreduce/hadoop-
> mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/
> lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-
> mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/
> commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/
> hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/
> jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-
> mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.
> jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/
> lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/
> usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/
> hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/
> hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/
> hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-
> 3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.8.jar:/usr/lib/hadoop-
> mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-
> 3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.
> jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/
> hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/
> hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/
> hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/
> hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-
> mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/
> commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-
> net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/
> usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/
> hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/
> xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-
> 1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/
> lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-
> mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-
> mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-
> mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/
> hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/
> hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/
> hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-
> mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/
> lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-
> mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-
> mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-
> 0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:
> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.
> 7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.
> 2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.
> jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/
> lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.
> 3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/
> usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-
> jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/
> usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-
> mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-
> mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/
> hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/
> hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/
> lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/
> hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-
> mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-
> archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.
> 9.13.jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.
> jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/
> lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-
> mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/
> jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-
> kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/
> hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.
> jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:
> /usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:
> /usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/
> usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/
> lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/
> lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-mapreduce/commons-
> configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-
> 1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/
> usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/
> usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/
> lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/
> hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/
> hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/
> zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-
> 2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:
> /usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/
> usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/
> lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/
> lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/
> lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/
> hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/
> hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/
> hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/
> snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/
> jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/
> paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-
> guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.
> jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/
> hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-
> mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/
> hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-
> mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-
> mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/
> lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-
> mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/
> guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.
> 0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/
> usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/
> usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/
> hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-
> mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/
> hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/
> hadoop-yarn/hadoop-yarn-applications-distributedshell-
> 2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> hadoop-yarn-server-applicationhistoryservice.jar:
> /usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/
> lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-
> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-
> 3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/
> lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-
> yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.
> jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-
> yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/
> hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-
> common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-
> proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-
> nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.
> jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-
> unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-
> yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-
> api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.
> jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/
> lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/
> hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/
> lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.
> 9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/
> usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/
> hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/
> lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-
> client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.
> jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/
> usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
> yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/
> netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/
> aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/
> usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/
> usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:
> /usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/
> lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/
> hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-
> 3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.
> jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/
> hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/
> lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-
> collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-
> 1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/
> usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/
> hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/
> lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-
> jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-
> logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:
> /usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/
> share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/
> jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/
> bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-
> over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-
> java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.
> 21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/
> usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/
> share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/
> share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/
> usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/
> lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/
> share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/
> aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/
> aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/
> usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-
> gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-codebuild-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-
> sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-
> sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-stepfunctions-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> cloudformation-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elastictranscoder-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-organizations-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/
> aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:
> /usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-cognitoidentity-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:
> /usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.
> 160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-
> 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-
> java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/
> usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:
> /usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.
> jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.
> 11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-
> java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/
> aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/
> share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-
> 1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-
> elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/
> aws-java-sdk-iot-1.11.160.jar
>
> Thank you
>
> On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <tr...@apache.org>
> wrote:
>
> > Hi Andy,
> >
> > the CliFrontend is not executed via Yarn; thus, it is not affected by
> > dependencies that are added by the underlying Yarn cluster. Therefore,
> > it would be helpful to look at the TaskManager logs. If you have enabled
> > log aggregation on your Yarn cluster, you can obtain the logs via
> > `yarn logs -applicationId <application ID>`; otherwise you have to retrieve
> > them from the machines where they were running (either by going directly
> > there or via the Yarn web interface).
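[Editor's note] Fetching the aggregated logs can also be scripted. A minimal sketch in Python (the application ID below is a made-up placeholder; take the real one from `yarn application -list` or the output of yarn-session.sh, and note that log aggregation must be enabled on the cluster):

```python
def yarn_logs_cmd(app_id):
    """Build the `yarn logs` invocation for a finished/running YARN app."""
    return ["yarn", "logs", "-applicationId", app_id]

# Placeholder application ID for illustration only.
cmd = yarn_logs_cmd("application_0000000000000_0001")
print(" ".join(cmd))

# To actually fetch the logs on the master node:
# import subprocess
# subprocess.run(cmd, check=True)
```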
> >
> > Cheers,
> > Till
> >
> > On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
> >
> > > Hi Till,
> > >
> > > That is actually the classpath used by the flink bash script (which
> > > launches the jar using the java command).  I changed the execute to an
> > > echo and grabbed that for the CLI arguments.
> > >
> > > I believe this is the classpath from the log file (although it might not
> > > be the TaskManager log; is that any different from what would be in my
> > > flink-1.3.2/log folder?):
> > >
> > > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend
> > >                  -  Classpath:
> > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/
> > > home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.
> > > 2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/
> > > hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/
> > > hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > >
> > > If that doesn't seem right, and you can point me in the right direction
> > > as to where the TaskManager logs would be, I would be happy to grab the
> > > information you're looking for.
> > >
> > > Thank you
> > >
> > > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <tr...@apache.org>
> > > wrote:
> > >
> > > > Hi Andy,
> > > >
> > > > this looks to me indeed like a dependency problem. I assume that EMR
> or
> > > > something else is pulling in an incompatible version of Hadoop.
> > > >
> > > > The classpath you've posted: is this the one logged in the log files
> > > > (TaskManager log) or did you compile it yourself? In the latter case,
> > > > it would also be helpful to get access to the TaskManager logs.
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com> wrote:
> > > >
> > > > > Hi Fabian,
> > > > >
> > > > > 1) I have looked at the linked docs, and from what I can tell no
> > > > > setup should really be needed to get Flink working (other than
> > > > > downloading the correct binaries, which I believe I did).
> > > > > 2) I have downloaded the Flink 1.3.2 binaries (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
> > > > > This is for Hadoop 2.7.x, which matches EMR 5.8.0.
> > > > >
> > > > > I appreciate any help or guidance you can provide in fixing my
> > > > > problems.  Please let me know if there is anything else I can provide.
> > > > >
> > > > > Thank you
> > > > >
> > > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <fh...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Hi Andy,
> > > > > >
> > > > > > I'm not an AWS expert, so I'll just check on some common issues.
> > > > > >
> > > > > > I guess you already had a look at the Flink docs for AWS/EMR but
> > > > > > I'll post the link just to be sure [1].
> > > > > >
> > > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1),
> > > > > > did you build Flink yourself or did you download the binaries?
> > > > > > Does the Hadoop version of the Flink build match the Hadoop version
> > > > > > of EMR 5.8.0, i.e., Hadoop 2.7.x?
> > > > > >
> > > > > > Best, Fabian
> > > > > >
> > > > > > [1]
> > > > > > https://ci.apache.org/projects/flink/flink-docs-
> > > > > release-1.3/setup/aws.html
> > > > > >
> > > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > >
> > > > > > > Hi Fabian,
> > > > > > >
> > > > > > > Sorry, I just realized I forgot to include that part.  The
> error
> > > > > returned
> > > > > > > is:
> > > > > > >
> > > > > > > java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> > > > > > >     at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> > > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> > > > > > >     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> > > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> > > > > > >     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> > > > > > >     at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> > > > > > >     at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> > > > > > >     at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
> > > > > > >
> > > > > > > I believe it has something to do with the classpath, but I am
> > > > > > > unsure why or how to fix it.  The classpath being used during the
> > > > > > > execution is:
> > > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > > >
> > > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar and the
> > > > > > > addResource method does seem to be there.
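[Editor's note] As Till suggests later in the thread, a NoSuchMethodError like this is the classic symptom of two copies of a library (here Hadoop: Flink's shaded uber jar versus EMR's own /usr/lib/hadoop jars) on the same classpath, with whichever loads first winning. One quick, informal way to hunt for this is to scan a classpath string for artifacts that appear in more than one version. This is an illustrative helper with a toy, abbreviated classpath (the guava-18.0 entry is invented for the example), not part of Flink:

```python
import re

def jar_artifacts(classpath):
    """Map artifact name -> set of version strings found on a Java classpath."""
    arts = {}
    for entry in classpath.split(":"):
        name = entry.rsplit("/", 1)[-1]
        # Split "artifact-1.2.3-qualifier.jar" into name and version.
        m = re.match(r"(.+?)-(\d[\w.\-]*?)\.jar$", name)
        if m:
            arts.setdefault(m.group(1), set()).add(m.group(2))
    return arts

def conflicts(classpath):
    """Artifacts present in more than one version on the classpath."""
    return {a: sorted(v) for a, v in jar_artifacts(classpath).items() if len(v) > 1}

# Toy classpath mixing Flink's bundled jars with EMR-style system jars.
cp = ("lib/flink-shaded-hadoop2-uber-1.3.2.jar:"
      "/usr/lib/hadoop/hadoop-common-2.7.3-amzn-3.jar:"
      "/usr/lib/hadoop/lib/guava-11.0.2.jar:"
      "/usr/lib/hadoop-yarn/lib/guava-18.0.jar")
print(conflicts(cp))  # {'guava': ['11.0.2', '18.0']}
```

Running this over the full TaskManager classpath from the logs would show which artifacts are duplicated; shaded jars hide their bundled classes from this naive scan, so it is only a first-pass check.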
> > > > > > >
> > > > > > > Thank you
> > > > > > >
> > > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <
> fhueske@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi Andy,
> > > > > > > >
> > > > > > > > can you describe in more detail what exactly isn't working?
> > > > > > > > Do you see error messages in the log files or on the console?
> > > > > > > >
> > > > > > > > Thanks, Fabian
> > > > > > > >
> > > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > > >
> > > > > > > > > Hello,
> > > > > > > > >
> > > > > > > > > I am about to deploy my first Flink project to production,
> > > > > > > > > but I am running into a very big hurdle.  I am unable to
> > > > > > > > > launch my project so it can write to an S3 bucket.  My
> > > > > > > > > project is running on an EMR cluster, where I have installed
> > > > > > > > > Flink 1.3.2.  I am using Yarn to launch the application, and
> > > > > > > > > it seems to run fine unless I am trying to enable
> > > > > > > > > checkpointing (with an S3 target).  I am looking to use
> > > > > > > > > RocksDB as my checkpointing backend.  I have asked in a few
> > > > > > > > > places, and I am still unable to find a solution to this
> > > > > > > > > problem.  Here are my steps for creating a cluster and
> > > > > > > > > launching my application; perhaps I am missing a step.  I'd
> > > > > > > > > be happy to provide any additional information if needed.
> > > > > > > > >
> > > > > > > > > AWS Portal:
> > > > > > > > >
> > > > > > > > >     1) EMR -> Create Cluster
> > > > > > > > >     2) Advanced Options
> > > > > > > > >     3) Release = emr-5.8.0
> > > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > > >     5) Next -> Next -> Next -> Create Cluster ( I do fill
> out
> > > > > > > > > names/keys/etc)
> > > > > > > > >
> > > > > > > > > Once the cluster is up I ssh into the Master and do the
> > > > following:
> > > > > > > > >
> > > > > > > > >     1  wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > > >     3  cd flink-1.3.2
> > > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
> > > > > > > > >
> > > > > > > > > In my conf/flink-conf.yaml I add the following fields:
> > > > > > > > >
> > > > > > > > >     state.backend: rocksdb
> > > > > > > > >     state.backend.fs.checkpointdir: s3:/bucket/location
> > > > > > > > >     state.checkpoints.dir: s3:/bucket/location
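[Editor's note] An aside on the single-slash form `s3:/bucket/location` quoted above: with only one slash the bucket name lands in the URI's path rather than its authority, which filesystem implementations expecting `s3://bucket/...` may mishandle. The difference is easy to see with plain Python URL parsing (no Flink or Hadoop involved; `bucket`/`location` are placeholder names from the config above):

```python
from urllib.parse import urlparse

# One slash: no authority component, bucket ends up inside the path.
single = urlparse("s3:/bucket/location")
# Two slashes: "bucket" is the authority (netloc), as s3:// URIs expect.
double = urlparse("s3://bucket/location")

print(single.netloc, single.path)  # '' '/bucket/location'
print(double.netloc, double.path)  # 'bucket' '/location'
```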
> > > > > > > > >
> > > > > > > > > My program's checkpointing setup:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
> > > > > > > > > env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > > > > > env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
> > > > > > > > > env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
> > > > > > > > > env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> > > > > > > > > env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hi Till,

I believe this is what you are looking for; the classpath is much bigger for
the task manager.  I can also post the whole log file if needed:

2017-10-05 14:17:53,038 INFO  org.apache.flink.yarn.YarnTaskManagerRunner
                 -  Classpath:
flink-consumer.jar:lib/flink-dist_2.11-1.3.2.jar:lib/flink-python_2.11-1.3.2.jar:lib/flink-shaded-hadoop2-uber-1.3.2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:log4j.properties:logback.xml:flink.jar:flink-conf.yaml::/etc/hadoop/conf:/usr/lib/hadoop/hadoop-common-2.7.3-amzn-3-tests.jar:/usr/lib/hadoop/hadoop-annotations-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp.jar:/usr/lib/hadoop/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-streaming-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-ant-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-datajoin.jar:/usr/lib/hadoop/hadoop-streaming.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/hadoop-ant.jar:/usr/lib/hadoop/hadoop-sls.jar:/usr/lib/hadoop/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix.jar:/usr/lib/hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-annotations.jar:/usr/lib/hadoop/hadoop-openstack-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-azure.jar:/usr/lib/hadoop/hadoop-extras.jar:/usr/lib/hadoop/hadoop-openstack.jar:/usr/lib/hadoop/hadoop-rumen.jar:/usr/lib/hadoop/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-archives.jar:/usr/lib/hadoop/hadoop-aws.jar:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-rumen-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-sls-2.7.3-amzn-3.jar:/usr/lib/hadoop/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/httpclient-4.5.3.jar:/usr/lib/hadoop/lib/httpcore-4.4.4.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/ac
tivation-1.1.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.10.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoo
p/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.7.3-amzn-3.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3-tests.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.7.3-amzn-3.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop-mapreduce/hadoop-auth-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/httpclient-4.5.3.jar:/usr/lib/hadoop-
mapreduce/hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/httpcore-4.4.4.jar:/usr/lib/hadoop-mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-core-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/jmespath-java-1.11.160.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-amzn-3-tests.jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-streaming-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/hadoop-ant-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/hadoop-distcp-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-app-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.9.13.jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/hadoop-ant.jar:/usr/lib/hadoop-mapreduce/xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.jar:/usr/lib/hadoop-mapreduce/avro-1.7.4.jar:/usr
/lib/hadoop-mapreduce/aws-java-sdk-core-1.11.160.jar:/usr/lib/hadoop-mapreduce/aws-java-sdk-s3-1.11.160.jar:/usr/lib/hadoop-mapreduce/jackson-dataformat-cbor-2.6.6.jar:/usr/lib/hadoop-mapreduce/hadoop-azure-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/ion-java-1.0.2.jar:/usr/lib/hadoop-mapreduce/azure-storage-2.0.0.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/hadoop-extras-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/usr/lib/hadoop-mapreduce/hadoop-openstack-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-azure.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-mapreduce/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/aws-java-sdk-kms-1.11.160.jar:/usr/lib/hadoop-mapreduce/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/hadoop-aws-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/hadoop-datajoin-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-archives.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.9.13.
jar:/usr/lib/hadoop-mapreduce/commons-lang3-3.3.2.jar:/usr/lib/hadoop-mapreduce/jackson-core-2.6.6.jar:/usr/lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/hadoop-aws.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:/usr/lib/hadoop-mapreduce/hadoop-sls-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/jackson-annotations-2.6.6.jar:/usr/lib/hadoop-mapreduce/junit-4.11.jar:/usr/lib/hadoop-mapreduce/commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/jackson-databind-2.6.6.jar:/usr/lib/hadoop-mapreduce/jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/zookeeper-3.4.10.jar:/usr/lib/hadoop-mapreduce/commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-mapreduce/hadoop-gridmix-2.7.3-amzn-3.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-mapreduce/li
b/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-sharedcachemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-tests-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-sharedcachemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-common-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-yarn-registry-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client.jar:/usr
/lib/hadoop-yarn/hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3-amzn-3.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.10-tests.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.10.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/
lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo.jar:/usr/lib/hadoop-lzo/lib/hadoop-lzo-0.4.19.jar:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/jmespath-java-1.11.129.jar:/usr/share/aws/emr/emrfs/lib/bcpkix-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/jcl-over-slf4j-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/ion-java-1.0.2.jar:/usr/share/aws/emr/emrfs/lib/slf4j-api-1.7.21.jar:/usr/share/aws/emr/emrfs/lib/javax.inject-1.jar:/usr/share/aws/emr/emrfs/lib/aopalliance-1.0.jar:/usr/share/aws/emr/emrfs/lib/bcprov-jdk15on-1.51.jar:/usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/lib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink-1.0.0.jar:/usr/share/aws/emr/cloudwatch-sink/lib/cloudwatch-sink.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-inspector-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-api-gateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-kinesis-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elasticloadbalancingv2-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rekognition-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-test-utils-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elasticbeanstalk-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudhsm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-iam-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cognitoidp-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-mechanicalturkrequester-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-marketplacecommerceanalytics-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-snowball-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-polly-1.11.160.jar:/usr/share/aws/aws-java-sdk/jmespath-java-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codebui
ld-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-emr-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-simpleworkflow-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-machinelearning-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-pinpoint-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-appstream-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sqs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opensdk-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-greengrass-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ecs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-xray-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudtrail-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudwatchmetrics-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elasticloadbalancing-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codepipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudwatch-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-shield-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-config-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-stepfunctions-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-gamelift-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-rds-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-core-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-datapipeline-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-s3-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dax-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opsworks-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-servicecatalog-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dms-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-directory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-opsworkscm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cloudformation-1.11.160.jar:/usr/sha
re/aws/aws-java-sdk/aws-java-sdk-cloudfront-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-budgets-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-clouddirectory-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-importexport-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lex-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-marketplaceentitlement-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-dynamodb-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-autoscaling-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elastictranscoder-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-organizations-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-workspaces-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ssm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-servermigration-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-events-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-applicationautoscaling-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-health-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-kms-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-logs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codestar-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-route53-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-redshift-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-marketplacemeteringservice-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sns-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-batch-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-waf-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-simpledb-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codedeploy-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ec2-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-models-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-devicefarm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cognitoidentity-1.11.160.jar:/usr/
share/aws/aws-java-sdk/aws-java-sdk-lexmodelbuilding-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-directconnect-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elasticache-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-costandusagereport-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-discovery-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-resourcegroupstaggingapi-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ses-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lambda-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-workdocs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-code-generator-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-cognitosync-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-efs-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-sts-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-athena-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codecommit-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-storagegateway-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-lightsail-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-acm-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-glacier-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-ecr-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-support-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-codegen-maven-plugin-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-elasticsearch-1.11.160.jar:/usr/share/aws/aws-java-sdk/aws-java-sdk-iot-1.11.160.jar

Thank you
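As an aside on the container classpath above: it mixes Flink's shaded Hadoop uber jar with EMR's own Hadoop and EMRFS jars, so several entries can each supply `org.apache.hadoop.*` classes. A quick diagnostic sketch for spotting competing providers (the `CP` value here is an abridged stand-in for the real classpath, not the full one above):

```shell
# Diagnostic sketch: split a colon-separated classpath and count entries that
# each bundle Hadoop classes. Multiple providers of the same class on one
# classpath is a classic source of NoSuchMethodError clashes.
CP='/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/usr/lib/hadoop/hadoop-common-2.7.3-amzn-3.jar:/usr/share/aws/emr/emrfs/lib/emrfs-hadoop-assembly-2.18.0.jar'
echo "$CP" | tr ':' '\n' | grep -Ec 'hadoop-common|shaded-hadoop|emrfs'
# -> 3
```

Whichever entry the class loader consults first wins, so the order of these jars matters as much as their presence.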

On Thu, Oct 5, 2017 at 5:25 AM, Till Rohrmann <tr...@apache.org> wrote:

> Hi Andy,
>
> the CliFrontend is not executed via Yarn, thus, it is not affected by
> dependencies which are added due to the underlying Yarn cluster. Therefore,
> it would be helpful to look at the TaskManager logs. Either you have
> enabled log aggregation on your Yarn cluster, then you can obtain the logs
> via `yarn logs -applicationId <application ID>` or you have to retrieve
> them from the machines where they were running (either by going directly
> there or via the Yarn web interface).
>
> Cheers,
> Till
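To make the log-retrieval suggestion concrete, a small sketch (the application ID is a placeholder; the real one comes from `yarn application -list` or the YARN web UI):

```shell
# Sketch: retrieve aggregated TaskManager logs for a finished YARN application.
# APP_ID is a placeholder; substitute the ID of your Flink session or job.
APP_ID="application_1506960000000_0001"

# YARN application IDs have the form application_<clusterTimestamp>_<sequence>;
# sanity-check the value before shelling out.
if echo "$APP_ID" | grep -Eq '^application_[0-9]+_[0-9]+$'; then
  echo "fetching logs for $APP_ID"
  # Requires log aggregation (yarn.log-aggregation-enable=true) on the cluster:
  # yarn logs -applicationId "$APP_ID" | grep -i taskmanager
fi
```

Without log aggregation enabled, the same logs live on the worker nodes under the NodeManager's local log directories, reachable through the YARN web interface.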
>
> On Wed, Oct 4, 2017 at 4:27 PM, Andy M. <aj...@gmail.com> wrote:
>
> > Hi Till,
> >
> > That is actually the classpath used by the flink bash script (the one that
> > launches the jar via the java command).  I changed the exec call to an echo
> > and grabbed the CLI arguments that way.
> >
> > I believe this is the classpath from the log file (although it might not be
> > the TaskManager log; is that any different from what would be in my
> > flink-1.3.2/log folder?):
> >
> > 2017-10-02 20:03:26,450 INFO  org.apache.flink.client.CliFrontend
> >                  -  Classpath:
> > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> >
> > If that doesn't seem right, and you can point me in the right direction as
> > to where the TaskManager logs would be, I would be happy to grab the
> > information you're looking for.
> >
> > Thank you
> >
> > On Wed, Oct 4, 2017 at 3:27 AM, Till Rohrmann <tr...@apache.org>
> > wrote:
> >
> > > Hi Andy,
> > >
> > > this looks to me indeed like a dependency problem. I assume that EMR or
> > > something else is pulling in an incompatible version of Hadoop.
> > >
> > > The classpath you've posted, is this the one logged in the log files
> > > (TaskManager log) or did you compile it yourself? In the latter case,
> it
> > > would also be helpful to get access to the TaskManager logs.
> > >
> > > Cheers,
> > > Till
> > >
> > > On Mon, Oct 2, 2017 at 10:20 PM, Andy M. <aj...@gmail.com> wrote:
> > >
> > > > Hi Fabian,
> > > >
> > > > 1) I have looked at the linked docs, and from what I can tell no setup
> > > > should really need to be done to get Flink working (other than downloading
> > > > the correct binaries, which I believe I did).
> > > > 2) I have downloaded the Flink 1.3.2 binaries (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > <http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
> > > > This is for Hadoop 2.7.x, which matches EMR 5.8.0.
> > > >
> > > > I appreciate any help or guidance you can provide me in fixing my
> > > problems,
> > > > please let me know if there is anything else I can provide you.
> > > >
> > > > Thank you
> > > >
> > > > On Mon, Oct 2, 2017 at 4:12 PM, Fabian Hueske <fh...@gmail.com>
> > wrote:
> > > >
> > > > > Hi Andy,
> > > > >
> > > > > I'm not an AWS expert, so I'll just check on some common issues.
> > > > >
> > > > > I guess you already had a look at the Flink docs for AWS/EMR, but I'll
> > > > > post the link just to be sure [1].
> > > > >
> > > > > Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1), did
> > > > > you build Flink yourself or did you download the binaries?
> > > > > Does the Hadoop version of the Flink build match the Hadoop version of
> > > > > EMR 5.8.0, i.e., Hadoop 2.7.x?
> > > > >
> > > > > Best, Fabian
> > > > >
> > > > > [1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
> > > > >
> > > > > 2017-10-02 21:51 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > >
> > > > > > Hi Fabian,
> > > > > >
> > > > > > Sorry, I just realized I forgot to include that part.  The error
> > > > > > returned is:
> > > > > >
> > > > > > java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
> > > > > >     at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
> > > > > >     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
> > > > > >     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
> > > > > >     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
> > > > > >     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
> > > > > >     at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
> > > > > >     at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
> > > > > >     at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
> > > > > >
> > > > > > I believe it has something to do with the classpath, but I am unsure
> > > > > > why or how to fix it.  The classpath being used during execution is:
> > > > > > /home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:
> > > > > >
> > > > > > I decompiled flink-shaded-hadoop2-uber-1.3.2.jar, and the addResource
> > > > > > function does seem to be there.
> > > > > >
> > > > > > Thank you
> > > > > >
> > > > > > On Mon, Oct 2, 2017 at 3:43 PM, Fabian Hueske <fhueske@gmail.com
> >
> > > > wrote:
> > > > > >
> > > > > > > Hi Andy,
> > > > > > >
> > > > > > > can you describe in more detail what exactly isn't working?
> > > > > > > Do you see error messages in the log files or on the console?
> > > > > > >
> > > > > > > Thanks, Fabian
> > > > > > >
> > > > > > > 2017-10-02 15:52 GMT+02:00 Andy M. <aj...@gmail.com>:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > I am about to deploy my first Flink project to production, but I am
> > > > > > > > running into a very big hurdle.  I am unable to launch my project so
> > > > > > > > it can write to an S3 bucket.  My project is running on an EMR
> > > > > > > > cluster, where I have installed Flink 1.3.2.  I am using Yarn to
> > > > > > > > launch the application, and it seems to run fine unless I try to
> > > > > > > > enable checkpointing (with an S3 target).  I am looking to use
> > > > > > > > RocksDB as my checkpointing backend.  I have asked in a few places,
> > > > > > > > and I am still unable to find a solution to this problem.  Here are
> > > > > > > > my steps for creating a cluster and launching my application;
> > > > > > > > perhaps I am missing a step.  I'd be happy to provide any
> > > > > > > > additional information if needed.
> > > > > > > >
> > > > > > > > AWS Portal:
> > > > > > > >
> > > > > > > >     1) EMR -> Create Cluster
> > > > > > > >     2) Advanced Options
> > > > > > > >     3) Release = emr-5.8.0
> > > > > > > >     4) Only select Hadoop 2.7.3
> > > > > > > >     5) Next -> Next -> Next -> Create Cluster (I do fill out
> > > > > > > > names/keys/etc.)
> > > > > > > >
> > > > > > > > Once the cluster is up I ssh into the Master and do the following:
> > > > > > > >
> > > > > > > >     1  wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > > > >     3  cd flink-1.3.2
> > > > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > > > >     5  Change conf/flink-conf.yaml
> > > > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
> > > > > > > >
> > > > > > > > In my conf/flink-conf.yaml I add the following fields:
> > > > > > > >
> > > > > > > >     state.backend: rocksdb
> > > > > > > >     state.backend.fs.checkpointdir: s3:/bucket/location
> > > > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > > > >
> > > > > > > > My program's checkpointing setup:
> > > > > > > >
> > > > > > > >
> > > > > > > >     env.enableCheckpointing(getCheckpointRate, CheckpointingMode.EXACTLY_ONCE)
> > > > > > > >     env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > > > >     env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
> > > > > > > >     env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
> > > > > > > >     env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> > > > > > > >     env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
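One detail worth double-checking in the quoted configuration: the two checkpoint directories are written with a single slash (`s3:/bucket/location`), while the `RocksDBStateBackend` constructor in the same post uses the full `s3://bucket/location` scheme. Flink parses these values as URIs, so the two-slash form is the one to use consistently. A hedged sketch of the corrected fragment (bucket and path are placeholders), with a trivial sanity check:

```shell
# Sketch: checkpoint URIs in flink-conf.yaml should carry a full scheme
# (s3://), matching the URI used in the RocksDBStateBackend constructor.
# Bucket/path below are placeholders.
cat > /tmp/flink-conf-sample.yaml <<'EOF'
state.backend: rocksdb
state.backend.fs.checkpointdir: s3://bucket/location
state.checkpoints.dir: s3://bucket/location
EOF
grep -c 's3://' /tmp/flink-conf-sample.yaml
# -> 2
```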


> > > > > > asked a few places, and I am still unable to find a solution to
> > this
> > > > > > problem.  Here are my steps for creating a cluster, and launching
> > my
> > > > > > application, perhaps I am missing a step.  I'd be happy to
> provide
> > > any
> > > > > > additional information if needed.
> > > > > >
> > > > > > AWS Portal:
> > > > > >
> > > > > >     1) EMR -> Create Cluster
> > > > > >     2) Advanced Options
> > > > > >     3) Release = emr-5.8.0
> > > > > >     4) Only select Hadoop 2.7.3
> > > > > >     5) Next -> Next -> Next -> Create Cluster ( I do fill out
> > > > > > names/keys/etc)
> > > > > >
> > > > > > Once the cluster is up I ssh into the Master and do the
> following:
> > > > > >
> > > > > >     1  wget
> > > > > > http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-
> > > > > > hadoop27-scala_2.11.tgz
> > > > > >     2  tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
> > > > > >     3  cd flink-1.3.2
> > > > > >     4  ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
> > > > > >     5  Change conf/flink-conf.yaml
> > > > > >     6  ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
> > > > > >
> > > > > > My conf/flink-conf.yaml I add the following fields:
> > > > > >
> > > > > >     state.backend: rocksdb
> > > > > >     state.backend.fs.checkpointdir: s3:/bucket/location
> > > > > >     state.checkpoints.dir: s3:/bucket/location
> > > > > >
> > > > > > My program's checkpointing setup:
> > > > > >
> > > > > >
> > > > > > env.enableCheckpointing(getCheckpointRate,
> > CheckpointingMode.EXACTLY_
> > > > > ONCE)
> > > > > >
> > > > > > env.getCheckpointConfig.enableExternalizedCheckpoints(
> > > > > > ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
> > > > > >
> > > > > > env.getCheckpointConfig.setMinPauseBetweenCheckpoints(
> > > > > > getCheckpointMinPause)
> > > > > >     env.getCheckpointConfig.setCheckpointTimeout(
> > > getCheckpointTimeout)
> > > > > >     env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
> > > > > >     env.setStateBackend(new RocksDBStateBackend("s3://
> > > > bucket/location",
> > > > > > true))
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Unable to write snapshots to S3 on EMR

Posted by Till Rohrmann <tr...@apache.org>.
Hi Andy,

this does indeed look like a dependency problem. I assume that EMR or
something else is pulling in an incompatible version of Hadoop.

Is the classpath you've posted the one logged in the log files (TaskManager
log), or did you assemble it yourself? In the latter case, it would also be
helpful to get access to the TaskManager logs.

Cheers,
Till
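
Till's hunch can be checked directly: the JVM can report which jar a class was actually loaded from. Below is a minimal diagnostic sketch (not from the thread); the two class names are taken from Andy's stack trace, and the technique itself is a generic JVM trick, not a Flink API.

```scala
// Print which jar each class was loaded from, to spot a conflicting
// Hadoop version on the classpath. A null CodeSource means the class
// came from the bootstrap classpath.
object WhichJar {
  def locate(className: String): String = {
    val src = Class.forName(className).getProtectionDomain.getCodeSource
    if (src == null) "(bootstrap classpath)" else src.getLocation.toString
  }

  def main(args: Array[String]): Unit =
    Seq("org.apache.hadoop.conf.Configuration",
        "com.amazon.ws.emr.hadoop.fs.EmrFileSystem").foreach { name =>
      try println(s"$name -> ${locate(name)}")
      catch {
        case _: ClassNotFoundException => println(s"$name -> not on the classpath")
      }
    }
}
```

Running this inside the TaskManager's classpath would show whether `Configuration` resolves to the shaded uber jar or to some other Hadoop jar EMR put first.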


Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hi Fabian,

1) I have looked at the linked docs, and from what I can tell no setup
should be needed to get Flink working (other than downloading the correct
binaries, which I believe I did).
2) I downloaded the Flink 1.3.2 binaries (flink-1.3.2-bin-hadoop27-scala_2.11.tgz
<http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz>).
These are built for Hadoop 2.7.x, which matches EMR 5.8.0.

I appreciate any help or guidance you can provide me in fixing my problems,
please let me know if there is anything else I can provide you.

Thank you


Re: Unable to write snapshots to S3 on EMR

Posted by Fabian Hueske <fh...@gmail.com>.
Hi Andy,

I'm not an AWS expert, so I'll just check on some common issues.

I guess you already had a look at the Flink docs for AWS/EMR, but I'll post
the link just to be sure [1].

Since you are using Flink 1.3.2 (EMR 5.8.0 comes with Flink 1.3.1), did you
build Flink yourself or did you download the binaries?
Does the Hadoop version of the Flink build match the Hadoop version of EMR
5.8.0, i.e., Hadoop 2.7.x?

Best, Fabian

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html
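Fabian's version-match question can be answered from inside the job itself. A small sketch (my addition, not from the thread) that reports the Hadoop version visible on the classpath, for comparison with EMR 5.8.0's Hadoop 2.7.3; `org.apache.hadoop.util.VersionInfo.getVersion` is Hadoop's own API, and reflection keeps the sketch runnable even where Hadoop is absent:

```scala
// Reflectively look up Hadoop's VersionInfo.getVersion (a static method)
// so the code compiles and runs without a Hadoop dependency on the
// compile classpath.
object HadoopVersion {
  def version(): Option[String] =
    try {
      val cls = Class.forName("org.apache.hadoop.util.VersionInfo")
      Some(cls.getMethod("getVersion").invoke(null).asInstanceOf[String])
    } catch {
      case _: ClassNotFoundException => None
    }

  def main(args: Array[String]): Unit =
    println(version().getOrElse("Hadoop is not on the classpath"))
}
```

If this prints something other than 2.7.x on the EMR cluster, the Flink build and the cluster's Hadoop do not match.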


Re: Unable to write snapshots to S3 on EMR

Posted by "Andy M." <aj...@gmail.com>.
Hi Fabian,

Sorry, I just realized I forgot to include that part.  The error returned
is:

java.lang.NoSuchMethodError:
org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
    at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
    at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
    at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)

I believe it has something to do with the classpath, but I am unsure why or
how to fix it.  The classpath being used during the execution is:
/home/hadoop/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/flink-shaded-hadoop2-uber-1.3.2.jar:/home/hadoop/flink-1.3.2/lib/log4j-1.2.17.jar:/home/hadoop/flink-1.3.2/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop/flink-1.3.2/lib/flink-dist_2.11-1.3.2.jar::/etc/hadoop/conf:

I decompiled flink-shaded-hadoop2-uber-1.3.2.jar, and the addResource
method does seem to be there.
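
One way to reconcile "the method is in the jar" with the NoSuchMethodError is to ask the running JVM whether the *loaded* Configuration class has that exact overload. A hedged reflection sketch (my addition; the class and method names come from the stack trace above) — a `false` result would confirm that an incompatible Configuration class shadows the one in the shaded uber jar:

```scala
// Check at runtime whether a class exposes a given public single-argument
// overload, mirroring what the JVM verifies when it throws
// NoSuchMethodError for addResource(Lorg/apache/hadoop/conf/Configuration;)V.
object CheckOverload {
  def hasOverload(className: String, method: String, paramClassName: String): Boolean =
    try {
      Class.forName(className).getMethod(method, Class.forName(paramClassName))
      true
    } catch {
      case _: NoSuchMethodException | _: ClassNotFoundException => false
    }

  def main(args: Array[String]): Unit = {
    val ok = hasOverload("org.apache.hadoop.conf.Configuration",
                         "addResource",
                         "org.apache.hadoop.conf.Configuration")
    println(if (ok) "addResource(Configuration) is present"
            else "overload missing (or Hadoop is not on the classpath)")
  }
}
```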

Thank you


Re: Unable to write snapshots to S3 on EMR

Posted by Fabian Hueske <fh...@gmail.com>.
Hi Andy,

can you describe in more detail what exactly isn't working?
Do you see error messages in the log files or on the console?

Thanks, Fabian
