Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/11/28 15:07:00 UTC

[jira] [Updated] (SPARK-22634) Update Bouncy castle dependency

     [ https://issues.apache.org/jira/browse/SPARK-22634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-22634:
------------------------------
      Priority: Minor  (was: Major)
    Issue Type: Task  (was: Bug)

(Not a bug)
Wouldn't this have to follow a jets3t update, or can that happen separately?
One issue here is that many environments get these dependencies from the Hadoop installation, and those won't be updated.
It may be that Spark is compatible with both versions at the same time, but in that case I'm not sure why you couldn't provide an updated version at runtime with the "hadoop-provided" build?
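If overriding at runtime does work, a minimal sketch might look like the following spark-submit invocation. This assumes a Spark 2.x deployment; the jar path and application class are illustrative, and `spark.driver.userClassPathFirst` / `spark.executor.userClassPathFirst` are the standard (experimental) Spark settings that make user-supplied jars take precedence over Spark's bundled classpath:

```shell
# Hypothetical workaround sketch: ship a newer Bouncy Castle jar with the
# application and ask Spark to prefer user jars over its own classpath.
# The jar path and --class value below are placeholders, not from the issue.
spark-submit \
  --jars /path/to/bcprov-jdk15on-1.58.jar \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.MyApp \
  my-app.jar
```

Note that userClassPathFirst is marked experimental and can cause other conflicts if the user jar shades classes Spark itself depends on, so this is a workaround rather than a fix for the bundled dependency.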

> Update Bouncy castle dependency
> -------------------------------
>
>                 Key: SPARK-22634
>                 URL: https://issues.apache.org/jira/browse/SPARK-22634
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core, SQL, Structured Streaming
>    Affects Versions: 2.2.0
>            Reporter: Lior Regev
>            Priority: Minor
>
> Spark's usage of the jets3t library, as well as Spark's own Flume and Kafka streaming, uses Bouncy Castle version 1.51.
> This is an outdated version; the latest is 1.58.
> This, in turn, renders packages such as [spark-hadoopcryptoledger-ds|https://github.com/ZuInnoTe/spark-hadoopcryptoledger-ds] unusable, since they require 1.58 while Spark's distributions ship with 1.51.
> My own attempt was to run on EMR, and since I automatically get all of Spark's dependencies (Bouncy Castle 1.51 being one of them) on the classpath, using the library to parse blockchain data failed due to missing functionality.
> I have also opened an [issue|https://bitbucket.org/jmurty/jets3t/issues/242/bouncycastle-dependency] with jets3t to update their dependency as well, but along with that Spark would have to update its own, or at least be packaged with a newer version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org