Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/12/29 10:10:49 UTC

[jira] [Updated] (SPARK-12551) Not able to load CSV package

     [ https://issues.apache.org/jira/browse/SPARK-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-12551:
------------------------------
          Flags:   (was: Important)
         Labels:   (was: features)
    Component/s:     (was: R)

[~chintan1309] This has nothing to do with the CSV package; can you update the title? This looks like a local environment problem in that it can't invoke the R interpreter. Did you verify basic SparkR works?

Please read https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark before filing a JIRA, as you set several fields that shouldn't be set.
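(To check the basics Sean asks about, one could try starting SparkR without any extra packages. A minimal smoke-test sketch; the C:/spark path is taken from the launch log below and is otherwise an assumption about the local install.)

```r
# Sketch: verify basic SparkR works before adding --packages.
# SPARK_HOME path is an assumption based on the log in this issue.
Sys.setenv(SPARK_HOME = "C:/spark")
library(SparkR, lib.loc = file.path(Sys.getenv("SPARK_HOME"), "R", "lib"))

# Start a plain local context -- no SPARKR_SUBMIT_ARGS, no spark-csv.
sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

# If this succeeds, the base setup is fine and the failure is unrelated
# to the CSV package.
df <- createDataFrame(sqlContext, faithful)
head(df)
```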

> Not able to load CSV package
> ----------------------------
>
>                 Key: SPARK-12551
>                 URL: https://issues.apache.org/jira/browse/SPARK-12551
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.5.2
>         Environment: Rstudio under Windows
>            Reporter: chintan
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Whenever I try to load the CSV package, Spark doesn't work; it gives an invokeJava error:
> Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell"')
> > Sys.setenv(SPARK_MEM="1g")
> > sc <- sparkR.init(master = "local")
> Launching java with spark-submit command C:/spark/bin/spark-submit.cmd   "--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell" C:\Users\shahch07\AppData\Local\Temp\RtmpigvXMn\backend_port98840b15c5a 
> > sqlContext <- sparkRSQL.init(sc)
> > DF <- createDataFrame(sqlContext, faithful)
> Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) : 
>   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
> 	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
> 	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
> 	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:381)
> 	at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:405)
> 	at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:397)
> 	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLi
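(The NullPointerException arising in ProcessBuilder.start via Hadoop's Shell.runCommand and FileUtil.chmod is a symptom commonly seen on Windows when winutils.exe is missing or HADOOP_HOME is unset, rather than anything specific to spark-csv. A hedged workaround sketch; the C:/hadoop path is an assumption, and winutils.exe must actually exist in its bin directory for this to help.)

```r
# Sketch: point Hadoop's shell utilities at a local winutils.exe.
# C:/hadoop is an assumed location -- place winutils.exe in C:/hadoop/bin.
Sys.setenv(HADOOP_HOME = "C:/hadoop")
Sys.setenv(PATH = paste(Sys.getenv("PATH"), "C:/hadoop/bin", sep = ";"))

# Then retry the original sequence from the report:
Sys.setenv(SPARKR_SUBMIT_ARGS =
  '"--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell"')
sc <- sparkR.init(master = "local")
```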



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org