Posted to issues@spark.apache.org by "liyunzhang (JIRA)" <ji...@apache.org> on 2017/11/30 08:34:00 UTC
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272344#comment-16272344 ]
liyunzhang commented on SPARK-22660:
------------------------------------
OK, I will put all the modifications for scala-2.12 and JDK 9 (SPARK-22660, SPARK-22659, SPARK-22661) in this jira.
> Compile with scala-2.12 and JDK9
> --------------------------------
>
> Key: SPARK-22660
> URL: https://issues.apache.org/jira/browse/SPARK-22660
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Affects Versions: 2.2.0
> Reporter: liyunzhang
> Priority: Minor
>
> Build with Scala 2.12 using the following steps:
> 1. Change the pom.xml to Scala 2.12:
> ./dev/change-scala-version.sh 2.12
> 2. Build with -Pscala-2.12:
> ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn -Pparquet-provided -Dhadoop.version=2.7.3
> This fails with the following errors.
> #Error1
> {code}
> /common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java:172: error: cannot find symbol
> Cleaner cleaner = Cleaner.create(buffer, () -> freeMemory(memory));
> {code}
> This is because sun.misc.Cleaner was moved to jdk.internal.ref.Cleaner in JDK 9, with java.lang.ref.Cleaner as the new public API. HADOOP-12760 will be the long-term fix.
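> The Platform.java call above can be sketched against the public JDK 9+ replacement, java.lang.ref.Cleaner. This is a hedged illustration, not Spark's actual fix: the freed flag and resource object are stand-ins for the off-heap memory bookkeeping.
> {code}
> import java.lang.ref.Cleaner
> import java.util.concurrent.atomic.AtomicBoolean
>
> val freed = new AtomicBoolean(false)
> val cleaner = Cleaner.create()
> val resource = new Object
> // Register a cleanup action, analogous to sun.misc.Cleaner.create(buffer, runnable).
> val cleanable = cleaner.register(resource, () => freed.set(true))
> // Explicit cleanup (e.g. from a close() method); the action runs at most once.
> cleanable.clean()
> assert(freed.get())
> {code}
> Unlike sun.misc.Cleaner, the public Cleaner requires the cleanup action to not reference the registered object itself, so the memory address must be captured separately.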
> #Error2
> {code}
> spark_source/core/src/main/scala/org/apache/spark/executor/Executor.scala:455: ambiguous reference to overloaded definition,
> both method limit in class ByteBuffer of type (x$1: Int)java.nio.ByteBuffer
> and method limit in class Buffer of type ()Int
> match expected type ?
> val resultSize = serializedDirectResult.limit
> {code}
> In JDK 9 ByteBuffer overrides limit(int) (and position(int)) with a covariant ByteBuffer return type, so a bare reference to limit is now ambiguous between Buffer.limit() and ByteBuffer.limit(Int) and can no longer be written without parentheses. The same applies to the position method.
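> The fix for this class of error is mechanical: add empty parentheses so the zero-arg accessor on Buffer is selected unambiguously. A minimal sketch (the 16-byte buffer is illustrative, not Spark's actual data):
> {code}
> import java.nio.ByteBuffer
>
> val serializedDirectResult = ByteBuffer.allocate(16)
> // Bare .limit is ambiguous under JDK 9; call the accessor with ().
> val resultSize = serializedDirectResult.limit()
> assert(resultSize == 16)
> {code}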
> #Error3
> {code}
> [error] /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:415: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] properties.putAll(propsMap.asJava)
> [error] ^
> [error] /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:427: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] props.putAll(outputSerdeProps.toMap.asJava)
> [error] ^
> {code}
> This is because JDK 9 gives Properties its own putAll(Map[_, _]) override in addition to the putAll inherited from Hashtable[Object, Object], so from Scala the call matches two overloaded definitions and is rejected as ambiguous.
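> One way to sidestep the ambiguity (a sketch, not necessarily the committed fix; the property keys below are illustrative) is to copy the entries one at a time instead of calling putAll:
> {code}
> import java.util.Properties
>
> val propsMap = Map("serialization.format" -> "1", "field.delim" -> "\t")
> val properties = new Properties()
> // Setting entries individually avoids the ambiguous putAll overloads
> // and keeps both keys and values as Strings.
> propsMap.foreach { case (k, v) => properties.setProperty(k, v) }
> assert(properties.getProperty("serialization.format") == "1")
> {code}
> This compiles identically on JDK 8 and JDK 9, so no version-specific source is needed.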
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org