Posted to issues@flink.apache.org by "Dominik Bruhn (JIRA)" <ji...@apache.org> on 2016/06/17 21:25:05 UTC

[jira] [Comment Edited] (FLINK-4091) flink-connector-cassandra has conflicting guava version

    [ https://issues.apache.org/jira/browse/FLINK-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336986#comment-15336986 ] 

Dominik Bruhn edited comment on FLINK-4091 at 6/17/16 9:24 PM:
---------------------------------------------------------------

Ok, I think I was unspecific here:
1. It works when starting the job directly, i.e. when executing it with the integrated flink server. 
2. It doesn't work (as described above) if I use the server which is brought up with "start_local.sh" and then execute the job-jar through "flink run [jar]"
3. I just checked, even if I launch flink using the "start_cluster.sh" script the same exception is raised. 

Both "start_local.sh" and "start_cluster.sh" use the flink-dist jar, which is IMHO broken in the way described above.


was (Author: theomega):
Ok, I think I was unspecific here:
1. It works when starting the job directly, i.e. when executing it with the integrated flink server. 
2. It doesn't work (as described above) if I use the server which is brought up with "start_local.sh" and then execute the job-jar through "flink run [jar]"

I cannot actually speak to the behavior on a real cluster

> flink-connector-cassandra has conflicting guava version
> -------------------------------------------------------
>
>                 Key: FLINK-4091
>                 URL: https://issues.apache.org/jira/browse/FLINK-4091
>             Project: Flink
>          Issue Type: Bug
>          Components: Streaming Connectors
>    Affects Versions: 1.1.0
>         Environment: MacOSX, 1.10-SNAPSHOT (head is 1a6bab3ef76805685044cf4521e32315169f9033)
>            Reporter: Dominik Bruhn
>
> The newly merged cassandra streaming connector has an issue with its guava dependency.
> The build-process for flink-connector-cassandra creates a shaded JAR file which contains the connector, the datastax cassandra driver, plus, under org.apache.flink.shaded, a shaded copy of guava. 
> The datastax cassandra driver calls into Futures.withFallback ([1]), which is present in this guava version. This also works inside the flink-connector-cassandra jar.
> The main build-process for Flink then produces another shaded JAR, flink-dist.jar. Inside this JAR, there is also a shaded version of guava under org.apache.flink.shaded.
> Now the issue: the guava version in flink-dist.jar is not compatible and doesn't contain Futures.withFallback, which the datastax driver uses.
> This leads to the following behavior: you can launch a flink job which uses the cassandra driver locally (so through the mini-cluster) without any problems, because that never uses the flink-dist.jar. 
> BUT: as soon as you try to start this job on a flink cluster (which uses the flink-dist.jar), the job fails with the following exception:
> https://gist.github.com/theomega/5ab9b14ffb516b15814de28e499b040d
> You can inspect this by opening the flink-connector-cassandra_2.11-1.1-SNAPSHOT.jar and the flink-dist_2.11-1.1-SNAPSHOT.jar in a Java decompiler.
> I don't know a good solution here. Perhaps one option would be to relocate the guava classes for the cassandra-driver somewhere other than org.apache.flink.shaded.
> [1]: https://google.github.io/guava/releases/19.0/api/docs/com/google/common/util/concurrent/Futures.html#withFallback(com.google.common.util.concurrent.ListenableFuture, com.google.common.util.concurrent.FutureFallback, java.util.concurrent.Executor)
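A hedged sketch of the relocation idea suggested above, assuming the connector is built with the maven-shade-plugin (the relocation target package chosen here is illustrative, not the project's actual configuration): moving the cassandra-driver's guava copy out of org.apache.flink.shaded means the incompatible guava classes in flink-dist.jar can no longer shadow it.

```xml
<!-- Hypothetical pom.xml fragment for flink-connector-cassandra.
     The shadedPattern below is an assumption; any package that does
     not collide with org.apache.flink.shaded would do. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <!-- Relocate away from org.apache.flink.shaded so the
                 guava shaded into flink-dist.jar cannot take
                 precedence over the driver's guava classes. -->
            <shadedPattern>org.apache.flink.cassandra.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, the datastax driver's bytecode inside the connector jar is rewritten to reference the relocated Futures class, so Futures.withFallback resolves against the guava version the driver was compiled for.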



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)