Posted to users@zeppelin.apache.org by Randy Gelhausen <rg...@gmail.com> on 2015/08/23 00:32:50 UTC

"File name too long" error in Spark paragraphs

Hi All,

Anyone see something similar to this:

%spark
import org.apache.spark.sql._
import org.apache.phoenix.spark._

val input = "/user/root/crimes/atlanta"

val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true").option("DROPMALFORMED", "true")
  .load(input)
val columns = df.columns.map(x => x.toUpperCase + " varchar,\n")
columns

The result is an error:
File name too long

I tried commenting out various lines, and then ALL lines, but everything
(even in new paragraphs) passed to the interpreter results in "File name
too long".

Am I doing something silly?

Thanks,
-Randy
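
[Editor's note: the usual cause of this error is that the Scala REPL wraps every compiled line in nested wrapper objects, so the generated class-file names grow as a session accumulates code, until they exceed the filesystem's file-name limit. This is commonly hit on eCryptfs-encrypted directories, which cap names at roughly 143 characters instead of the usual 255. A quick hedged check, assuming the interpreter writes its REPL classes somewhere under /tmp:]

```shell
# The Scala REPL compiles each input line into nested wrapper classes
# ($line..$read$$iw$$iw...), so class-file names grow with session state.
# "File name too long" usually means a name exceeded the filesystem limit.
# /tmp is an assumption about where the interpreter writes class files;
# substitute the actual REPL class directory if it differs.
getconf NAME_MAX /tmp
# Plain ext4/xfs/tmpfs typically report 255; eCryptfs-backed dirs report
# around 143, which is where this error tends to appear.
```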

Re: "File name too long" error in Spark paragraphs

Posted by Randy Gelhausen <rg...@gmail.com>.
Is this something which can be fixed in the Spark Interpreter?

Maybe an auto-restart if "File name too long" is the result?
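
[Editor's note: an auto-restart could in principle be scripted against Zeppelin's REST API, which in later releases exposes an interpreter-restart endpoint. A hedged sketch; the host, port, and setting id below are placeholders, not values from this thread, and the endpoint's availability depends on the Zeppelin version:]

```shell
# Placeholders (assumptions for illustration):
ZEPPELIN_URL="http://localhost:8080"
SETTING_ID="2ANGGHHMQ"   # list real ids with: curl "$ZEPPELIN_URL/api/interpreter/setting"
# Later Zeppelin releases expose PUT /api/interpreter/setting/restart/{settingId}:
RESTART_ENDPOINT="$ZEPPELIN_URL/api/interpreter/setting/restart/$SETTING_ID"
echo "$RESTART_ENDPOINT"
# curl -X PUT "$RESTART_ENDPOINT"   # uncomment to run against a live Zeppelin
```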

On Sun, Aug 23, 2015 at 12:47 PM, Silvio Fiorito <silvio.fiorito@granturing.com> wrote:

RE: "File name too long" error in Spark paragraphs

Posted by Silvio Fiorito <si...@granturing.com>.
I've seen this recently as well. Seems to be an issue with the Scala REPL after running and rerunning notebooks with a lot of code.

Only solution I found was to restart the interpreter.

Even Databricks cloud seems to have this issue: https://forums.databricks.com/questions/427/why-do-i-see-this-error-when-i-run-my-notebook-jav.html


Thanks,
Silvio
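
[Editor's note: if the root cause is a short file-name limit on the directory where the REPL writes its generated classes (e.g. an eCryptfs-encrypted home directory), a longer-term workaround than restarting is to point the class output at a filesystem without that limit via Spark's spark.repl.classdir property. A sketch, with the path chosen as an assumption:]

```shell
# Create a class-output dir on a filesystem with the normal 255-char
# name limit (any non-eCryptfs location works; this path is an example):
mkdir -p /tmp/spark-repl-classes
getconf NAME_MAX /tmp/spark-repl-classes   # should report 255 here
# Then, in the Zeppelin Spark interpreter settings (or spark-defaults.conf),
# set:
#   spark.repl.classdir=/tmp/spark-repl-classes
# and restart the interpreter once for the change to take effect.
```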