Posted to issues@flink.apache.org by "Aljoscha Krettek (JIRA)" <ji...@apache.org> on 2018/01/02 15:28:00 UTC
[jira] [Commented] (FLINK-8318) Conflict jackson library with ElasticSearch connector
[ https://issues.apache.org/jira/browse/FLINK-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308217#comment-16308217 ]
Aljoscha Krettek commented on FLINK-8318:
-----------------------------------------
Do you have a specific reason for using `parent-first`? I think your case should work if you include ES and Jackson in your user jar and use `child-first`, which was introduced exactly for such dependency clashes.
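For reference, switching the resolve order back to the 1.4.0 default would look like this in flink-conf.yaml (sketch; `child-first` is the default in 1.4, so it is usually enough to simply remove the `parent-first` override):

{noformat}
# flink-conf.yaml
# Resolve user-code classes (e.g. Jackson bundled in the user jar) before
# classes from the Flink/Hadoop classpath:
classloader.resolve-order: child-first
{noformat}

With `child-first`, the Jackson 2.x bundled with the ES connector in the user jar is loaded instead of the Jackson 2.2.3 coming from the Hadoop classpath.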
> Conflict jackson library with ElasticSearch connector
> -----------------------------------------------------
>
> Key: FLINK-8318
> URL: https://issues.apache.org/jira/browse/FLINK-8318
> Project: Flink
> Issue Type: Bug
> Components: ElasticSearch Connector, Startup Shell Scripts
> Affects Versions: 1.4.0
> Reporter: Jihyun Cho
>
> My Flink job fails after updating Flink to version 1.4.0. It uses the ElasticSearch connector.
> I'm running CDH Hadoop with the Flink option "classloader.resolve-order: parent-first".
> The failure log is below.
> {noformat}
> Using the result of 'hadoop classpath' to augment the Hadoop classpath: /etc/hadoop/conf:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop/lib/*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop/.//*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/./:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-yarn/.//*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-mapreduce/lib/*:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-mapreduce/.//*
> 2017-12-26 14:13:21,160 INFO org.apache.flink.runtime.taskmanager.TaskManager - --------------------------------------------------------------------------------
> 2017-12-26 14:13:21,161 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager (Version: 1.4.0, Rev:3a9d9f2, Date:06.12.2017 @ 11:08:40 UTC)
> 2017-12-26 14:13:21,161 INFO org.apache.flink.runtime.taskmanager.TaskManager - OS current user: www
> 2017-12-26 14:13:21,446 INFO org.apache.flink.runtime.taskmanager.TaskManager - Current Hadoop/Kerberos user: www
> 2017-12-26 14:13:21,446 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.131-b11
> 2017-12-26 14:13:21,447 INFO org.apache.flink.runtime.taskmanager.TaskManager - Maximum heap size: 31403 MiBytes
> 2017-12-26 14:13:21,447 INFO org.apache.flink.runtime.taskmanager.TaskManager - JAVA_HOME: (not set)
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - Hadoop version: 2.6.5
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - JVM Options:
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xms32768M
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Xmx32768M
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - -XX:MaxDirectMemorySize=8388607T
> 2017-12-26 14:13:21,448 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Djava.library.path=/home/cloudera/parcels/CDH/lib/hadoop/lib/native/
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlog4j.configuration=file:/home/www/service/flink-1.4.0/conf/log4j-console.properties
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - -Dlogback.configurationFile=file:/home/www/service/flink-1.4.0/conf/logback-console.xml
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - Program Arguments:
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - --configDir
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - /home/www/service/flink-1.4.0/conf
> 2017-12-26 14:13:21,449 INFO org.apache.flink.runtime.taskmanager.TaskManager - Classpath:
> ...:/home/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/libexec/../../hadoop-mapreduce/.//jackson-core-2.2.3.jar:...
> ....
> 2017-12-26 14:14:01,393 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Filter -> Map -> Filter -> Sink: Unnamed (3/10) (fb33a6e0c1a7e859eaef9cf8bcf4565e) switched from RUNNING to FAILED.
> java.lang.NoSuchFieldError: FAIL_ON_SYMBOL_HASH_OVERFLOW
> at org.elasticsearch.common.xcontent.json.JsonXContent.<clinit>(JsonXContent.java:76)
> at org.elasticsearch.common.xcontent.XContentType$1.xContent(XContentType.java:59)
> at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:726)
> at org.elasticsearch.common.settings.Setting.lambda$listSetting$26(Setting.java:672)
> at org.elasticsearch.common.settings.Setting$2.getRaw(Setting.java:676)
> at org.elasticsearch.common.settings.Setting.lambda$listSetting$24(Setting.java:660)
> at org.elasticsearch.common.settings.Setting.listSetting(Setting.java:665)
> at org.elasticsearch.common.settings.Setting.listSetting(Setting.java:660)
> at org.elasticsearch.common.network.NetworkService.<clinit>(NetworkService.java:50)
> at org.elasticsearch.client.transport.TransportClient.newPluginService(TransportClient.java:91)
> at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:119)
> at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:247)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:125)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:111)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:101)
> at org.apache.flink.streaming.connectors.elasticsearch5.Elasticsearch5ApiCallBridge.createClient(Elasticsearch5ApiCallBridge.java:73)
> at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:281)
> at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
> at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:393)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
> at java.lang.Thread.run(Thread.java:748)
> 2017-12-26 14:14:01,393 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source -> Filter -> Map -> Filter -> Sink: Unnamed (8/10) (e12caa9cc12027738e2426d3a3641bba) switched from RUNNING to FAILED.
> java.lang.NoClassDefFoundError: Could not initialize class org.elasticsearch.common.network.NetworkService
> at org.elasticsearch.client.transport.TransportClient.newPluginService(TransportClient.java:91)
> at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:119)
> at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:247)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:125)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:111)
> at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:101)
> at org.apache.flink.streaming.connectors.elasticsearch5.Elasticsearch5ApiCallBridge.createClient(Elasticsearch5ApiCallBridge.java:73)
> at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:281)
> at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
> at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:393)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The symbol "FAIL_ON_SYMBOL_HASH_OVERFLOW" was added in Jackson 2.4, but CDH Hadoop ships Jackson 2.2, so the two versions conflict on the classpath.
> I reverted the changes of https://issues.apache.org/jira/browse/FLINK-7477, and the problem disappeared.
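> Another possible workaround, if the resolve order has to stay `parent-first`, would be to relocate Jackson inside the user jar so it can never clash with the Hadoop copy. An illustrative maven-shade-plugin sketch (the relocation prefix `shaded.` is a placeholder, not an established convention):
> {noformat}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-shade-plugin</artifactId>
>   <executions>
>     <execution>
>       <phase>package</phase>
>       <goals><goal>shade</goal></goals>
>       <configuration>
>         <relocations>
>           <!-- Move Jackson to a private package; references in the
>                bundled ES classes are rewritten to match. -->
>           <relocation>
>             <pattern>com.fasterxml.jackson</pattern>
>             <shadedPattern>shaded.com.fasterxml.jackson</shadedPattern>
>           </relocation>
>         </relocations>
>       </configuration>
>     </execution>
>   </executions>
> </plugin>
> {noformat}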
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)