Posted to issues@spark.apache.org by "Darshan Mehta (JIRA)" <ji...@apache.org> on 2016/04/19 12:36:25 UTC
[jira] [Reopened] (SPARK-14727) NullPointerException while trying to launch local spark job
[ https://issues.apache.org/jira/browse/SPARK-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Darshan Mehta reopened SPARK-14727:
-----------------------------------
Not a duplicate
> NullPointerException while trying to launch local spark job
> -----------------------------------------------------------
>
> Key: SPARK-14727
> URL: https://issues.apache.org/jira/browse/SPARK-14727
> Project: Spark
> Issue Type: Bug
> Reporter: Darshan Mehta
> Attachments: Logs.log, SparkCrud.java
>
>
> OS : Windows 10
> Spark Version : 1.6.1
> Java version : 1.8
> I am trying to launch a simple Spark job from Eclipse after starting a Spark master and registering one worker. The JavaRDDs are created successfully; however, an NPE is thrown when the collect() operation executes. Below are the steps I performed:
> 1. Downloaded Spark 1.6.1
> 2. Built it locally with 'sbt package' and 'sbt assembly' commands
> 3. Started Master with 'spark-class org.apache.spark.deploy.master.Master'
> 4. Started Worker with 'spark-class org.apache.spark.deploy.worker.Worker spark://master:7077 -c 2'
> 5. Verified both Master and Worker are up, and have enough resources in Spark UI
> 6. Created a maven project in eclipse, with spark dependency
> 7. Executed attached "SparkCrud.java" in eclipse
> 8. NPE is thrown, logs are attached "Logs.log"
> It seems it's trying to execute Hadoop binaries, even though I am not using Hadoop anywhere at all. I also tried placing winutil.exe in C:\\ and setting the "hadoop.home.dir" system property (as suggested in another JIRA), but that did not do the trick.
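One detail that may explain why the workaround failed: Hadoop's Shell class resolves the Windows helper binary as %hadoop.home.dir%\bin\winutils.exe (note the file name ends in "s"), so the property must point at a directory that *contains* a bin folder holding the exe, not at C:\ with the exe dropped in the root. A minimal sketch of that layout rule follows; the C:\hadoop location and the helper name are illustrative assumptions, not Spark or Hadoop API:

```java
// Sketch: where Spark's Hadoop layer expects winutils.exe on Windows.
public class WinutilsCheck {
    // Mirrors the path Hadoop's Shell class builds: <hadoop.home.dir>\bin\winutils.exe.
    // This is an illustration of the layout, not a call into the Hadoop API.
    static String expectedWinutils(String hadoopHome) {
        return hadoopHome + "\\bin\\winutils.exe";
    }

    public static void main(String[] args) {
        // Assumed example: -Dhadoop.home.dir=C:\hadoop, so the exe must sit at
        // C:\hadoop\bin\winutils.exe for the lookup to succeed.
        String home = System.getProperty("hadoop.home.dir", "C:\\hadoop");
        System.out.println("Hadoop will look for: " + expectedWinutils(home));
    }
}
```

If the property is set to C:\ (as described above), the lookup would resolve to C:\bin\winutils.exe, which does not match a file placed directly in C:\ and would still raise the NPE from Shell.getWinUtilsPath.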
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org