Posted to issues@spark.apache.org by "Jose Luis Pedrosa (JIRA)" <ji...@apache.org> on 2019/07/05 10:44:00 UTC

[jira] [Updated] (SPARK-28258) Incompatibility between spark docker image and hadoop 3.2 and azure tools

     [ https://issues.apache.org/jira/browse/SPARK-28258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jose Luis Pedrosa updated SPARK-28258:
--------------------------------------
    Description: 
Currently the Docker images generated by the distro use openjdk8 based on Alpine.
 This means the shipped version of libssl is 1.1.1b-r1:
{noformat}
sh-4.4# apk list | grep ssl
libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
{noformat}
The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by [https://issues.jboss.org/browse/JBEAP-16425].

This results in the following error on the executor:
{noformat}
2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.1b 26 Feb 2019
2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking for metadata directory.
Exception in thread "main" java.lang.NullPointerException
 at org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
{noformat}
In my tests, building a Docker image with an updated wildfly-openssl version (1.0.7.Final) solves the issue.
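As a sketch, the patched image could be built along these lines. Note the base image tag, the /opt/spark/jars path, and the Maven Central URL are assumptions for illustration, not taken from the Spark distro:
{noformat}
# Hypothetical Dockerfile: swap the affected wildfly-openssl jar for 1.0.7.Final.
# Base image name and jar path are assumptions about the stock image layout.
FROM my-spark:2.4.3
RUN rm /opt/spark/jars/wildfly-openssl-1.0.4.Final.jar
ADD https://repo1.maven.org/maven2/org/wildfly/openssl/wildfly-openssl/1.0.7.Final/wildfly-openssl-1.0.7.Final.jar /opt/spark/jars/
{noformat}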

Not sure if this is a Spark problem; if so, where would be the right place to solve it?

It seems this may be taken care of in Hadoop directly, but those tickets are still open:

https://issues.apache.org/jira/browse/HADOOP-16410

https://issues.apache.org/jira/browse/HADOOP-16405

  was:
Currently the Docker images generated by the distro use openjdk8 based on Alpine.
 This means the shipped version of libssl is 1.1.1b-r1:
{noformat}
sh-4.4# apk list | grep ssl
libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
{noformat}

The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by https://issues.jboss.org/browse/JBEAP-16425.

This results in the following error on the executor:
{noformat}
2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version OpenSSL 1.1.1b 26 Feb 2019
2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking for metadata directory.
Exception in thread "main" java.lang.NullPointerException
 at org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
{noformat}

In my tests, building a Docker image with an updated wildfly-openssl version (1.0.7.Final) solves the issue.

Not sure if this is a Spark problem; if so, where would be the right place to solve it?



> Incompatibility between spark docker image and hadoop 3.2 and azure tools
> -------------------------------------------------------------------------
>
>                 Key: SPARK-28258
>                 URL: https://issues.apache.org/jira/browse/SPARK-28258
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 2.4.3
>            Reporter: Jose Luis Pedrosa
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org