Posted to issues@spark.apache.org by "M. Le Bihan (JIRA)" <ji...@apache.org> on 2018/12/05 13:54:00 UTC
[jira] [Comment Edited] (SPARK-24417) Build and Run Spark on JDK11
[ https://issues.apache.org/jira/browse/SPARK-24417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710091#comment-16710091 ]
M. Le Bihan edited comment on SPARK-24417 at 12/5/18 1:53 PM:
--------------------------------------------------------------
Hello,
Unaware of any problem with JDK 11, I used it with _Spark 2.3.x_ for months without trouble, mostly calling _lookup()_ on RDDs.
But when I attempted a _collect()_, it failed with an _IllegalArgumentException_. I upgraded to _Spark 2.4.0_, and a message from a class in _org.apache.xbean_ explained: "_Unsupported minor major version 55._" (Class-file major version 55 corresponds to Java 11 bytecode.)
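For context, here is a minimal sketch in plain Java of the kind of calls I mean; the class name and the sample data are illustrative, not taken from my actual job:
{code:java}
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class Jdk11Sketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("jdk11-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(
                    Arrays.asList(new Tuple2<>("a", 1), new Tuple2<>("b", 2)));

            // Calls like this one ran fine for months on JDK 11:
            System.out.println(pairs.lookup("a"));

            // Whereas on JDK 11 this one fails for me with an IllegalArgumentException
            // raised from a class under org.apache.xbean:
            System.out.println(pairs.collect());
        }
    }
}
{code}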
Is this a problem coming from memory management or from the _Scala_ language?
If _Spark 2.x_ ultimately cannot support _JDK 11_ and we have to wait for _Spark 3.0_, when is that version planned for release?
Sorry if this is off topic, but:
Will the next major version still be built on _Scala_ (meaning it will have to wait for the _Scala_ project to keep up with _Java_ JDK versions), or on _Java_ only, with _Scala_ offered as an independent option?
It seems to me, as someone who programs _Spark_ in plain _Java_ rather than _Scala_, that _Scala_ is a source of underlying trouble. Having a _Spark_ without _Scala_, just as one can have a _Spark_ without _Hadoop_, would comfort me: one source of issues would disappear.
Regards,
> Build and Run Spark on JDK11
> ----------------------------
>
> Key: SPARK-24417
> URL: https://issues.apache.org/jira/browse/SPARK-24417
> Project: Spark
> Issue Type: New Feature
> Components: Build
> Affects Versions: 2.3.0
> Reporter: DB Tsai
> Priority: Major
>
> This is an umbrella JIRA for Apache Spark to support JDK 11.
> As JDK 8 is reaching EOL, and JDK 9 and 10 are already end of life, per community discussion we will skip JDK 9 and 10 and support JDK 11 directly.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org