Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2018/08/04 20:27:00 UTC

[jira] [Updated] (SPARK-25026) Binary releases don't contain Kafka integration modules

     [ https://issues.apache.org/jira/browse/SPARK-25026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-25026:
------------------------------
    Target Version/s: 2.4.0  (was: 2.3.2, 2.4.0)
            Priority: Major  (was: Blocker)

Yeah, I figured it must have been on purpose. Still, it seems a little odd that we make a user bring Spark code to Spark, and therefore match Spark versions more finely than usual. That is, I have to bundle spark-kafka with my app, and its version has to line up with whatever internal Spark dependencies it has, not just with the user-facing API my app uses.
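
For illustration, here's roughly what an application build has to declare today (a sketch; the sbt syntax and the 2.3.1 version are just assumptions for the example — the point is that the artifact version has to track the cluster's Spark version):

    // build.sbt (hypothetical app): the Kafka source is NOT in the binary
    // distribution, so the app has to ship it itself, pinned to the same
    // Spark version that runs on the cluster.
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql" % "2.3.1" % "provided",  // provided by the cluster
      "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.1"     // must be bundled by the app
    )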

It also seems odd that there's an assembly JAR for it (if it's meant to be bundled as a dependency of an app), and that we do bundle other things like the K8S integration, though that's a resource manager, not a data source.
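
For context, the user-facing API in question is just the Kafka source in Structured Streaming; a minimal sketch (broker and topic names are made up):

    // Reading a Kafka topic as a streaming DataFrame; the "kafka" format
    // resolves only if spark-sql-kafka-0-10 is on the classpath.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")  // hypothetical topic name
      .load()
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")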

OK, downgrading this for now; it's probably a WontFix. I'll leave it open another day or so to see if anyone believes this logic should change going forward.

> Binary releases don't contain Kafka integration modules
> -------------------------------------------------------
>
>                 Key: SPARK-25026
>                 URL: https://issues.apache.org/jira/browse/SPARK-25026
>             Project: Spark
>          Issue Type: Bug
>          Components: Build, Structured Streaming
>    Affects Versions: 2.2.2, 2.3.1, 2.4.0
>            Reporter: Sean Owen
>            Assignee: Sean Owen
>            Priority: Major
>



