Posted to issues@spark.apache.org by "Paul Dennis (JIRA)" <ji...@apache.org> on 2016/06/26 12:09:33 UTC

[jira] [Comment Edited] (SPARK-4820) Spark build encounters "File name too long" on some encrypted filesystems

    [ https://issues.apache.org/jira/browse/SPARK-4820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15350090#comment-15350090 ] 

Paul Dennis edited comment on SPARK-4820 at 6/26/16 12:09 PM:
--------------------------------------------------------------

I am seeing this on Ubuntu 16.04, building from scratch.

{noformat}
[INFO] Compiling 480 Scala sources and 74 Java sources to /home/pd40/git/spark/core/target/scala-2.11/classes...
[WARNING] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:78: class Accumulator in package spark is deprecated: use AccumulatorV2
[WARNING]     accumulator: Accumulator[JList[Array[Byte]]])
[WARNING]                  ^
[WARNING] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:71: class Accumulator in package spark is deprecated: use AccumulatorV2
[WARNING] private[spark] case class PythonFunction(
[WARNING]                           ^
[WARNING] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:873: trait AccumulatorParam in package spark is deprecated: use AccumulatorV2
[WARNING]   extends AccumulatorParam[JList[Array[Byte]]] {
[WARNING]           ^
[WARNING] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:459: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[WARNING]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[WARNING]                             ^
[ERROR] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:297: File name too long
This can happen on some encrypted or legacy file systems.  Please see SI-3623 for more details.
[ERROR]           logInfo(s"Asked to remove non-existent executor $executorId")
[ERROR]                   ^
[ERROR] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala:306: File name too long
This can happen on some encrypted or legacy file systems.  Please see SI-3623 for more details.
[ERROR]       reason.map(r => s" (reason: $r)").getOrElse(""))
[ERROR]                    ^
[ERROR] /home/pd40/git/spark/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala:306: File name too long
This can happen on some encrypted or legacy file systems.  Please see SI-3623 for more details.
[ERROR]       reason.map(r => s" (reason: $r)").getOrElse(""))
[ERROR]                                                   ^
[WARNING] four warnings found
[ERROR] three errors found
{noformat}
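
For background: the failing names are the synthetic class files scalac emits for nested closures. Each name encodes the full lexical nesting path, and an ecryptfs-encrypted home directory limits file names to roughly 143 bytes, so deeply nested code can overflow it. A minimal sketch of the mechanism (the names are illustrative, not taken from the Spark build):

{code}
// Each nested anonymous function compiles to its own synthetic class.
// On Scala 2.11 the generated file names chain the enclosing names, e.g.
//   DeepNesting$$anonfun$build$1$$anonfun$apply$1$$anonfun$apply$2.class
// so deep enough nesting exceeds the filesystem's file-name length limit.
object DeepNesting {
  def build: () => () => () => Int =
    () => () => () => 42
}
{code}

As I understand it, -Xmax-classfile-name truncates the generated names to the given length and appends a hash so the truncated names stay unique.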

The workaround worked for me, with the scalacOptions setting inserted at line 260 of project/SparkBuild.scala [here|https://github.com/pd40/spark/commit/020e9340e14ef9488fec0d07e23351d155c2da8d]:
{noformat}
@@ -257,6 +257,7 @@ object SparkBuild extends PomBuild {
     publishMavenStyle in MavenCompile := true,
     publishLocal in MavenCompile <<= publishTask(publishLocalConfiguration in MavenCompile, deliverLocal),
     publishLocalBoth <<= Seq(publishLocal in MavenCompile, publishLocal).dependOn,
+    scalacOptions in Compile ++= Seq("-Xmax-classfile-name", "128"), 
{noformat}
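
If editing project/SparkBuild.scala is inconvenient, the same flag can probably be applied for a single session from the sbt shell instead (untested here, and it assumes sbt 0.13's {{set every}} command):

{code}
set every scalacOptions ++= Seq("-Xmax-classfile-name", "128")
{code}

Either way, scalac caps generated class file names at 128 bytes, which is comfortably under the ecryptfs limit.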



> Spark build encounters "File name too long" on some encrypted filesystems
> -------------------------------------------------------------------------
>
>                 Key: SPARK-4820
>                 URL: https://issues.apache.org/jira/browse/SPARK-4820
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation
>            Reporter: Patrick Wendell
>            Assignee: Theodore Vasiloudis
>            Priority: Minor
>             Fix For: 1.4.0
>
>
> This was reported by Luchesar Cekov on GitHub, along with a proposed fix. The fix has some potential downstream issues (it will modify the class names), so until we understand better how many users are affected we aren't going to merge it. However, I'd like to include the issue and workaround here. If you encounter this issue, please comment on the JIRA so we can assess its frequency.
> The issue produces this error:
> {code}
> [error] == Expanded type of tree ==
> [error] 
> [error] ConstantType(value = Constant(Throwable))
> [error] 
> [error] uncaught exception during compilation: java.io.IOException
> [error] File name too long
> [error] two errors found
> {code}
> The workaround, in Maven, is to add the following under the compile options:
> {code}
> +              <arg>-Xmax-classfile-name</arg>
> +              <arg>128</arg>
> {code}
> In SBT, add:
> {code}
> +    scalacOptions in Compile ++= Seq("-Xmax-classfile-name", "128"),
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org