Posted to issues@spark.apache.org by "Rick Bross (JIRA)" <ji...@apache.org> on 2016/06/22 15:08:58 UTC

[jira] [Comment Edited] (SPARK-15221) error: not found: value sqlContext when starting Spark 1.6.1

    [ https://issues.apache.org/jira/browse/SPARK-15221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15344423#comment-15344423 ] 

Rick Bross edited comment on SPARK-15221 at 6/22/16 3:08 PM:
-------------------------------------------------------------

I'd like to point out that I used the spark-ec2 script to create a cluster and I'm running into the same issue. I'm not really sure what I should do to solve it.

The first error is:

java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x

I assume that some environment script is specifying this, since /tmp/hive doesn't exist. I thought this would be taken care of by the spark-ec2 script so you could just get going.

I have no experience with HDFS; I run Spark off of Cassandra and off of S3. I tried this:

[root@ip-172-31-57-109 ephemeral-hdfs]$ bin/hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

ls: Cannot access .: No such file or directory.
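
If it matters, I believe "hadoop fs -ls" with no path tries to list the current user's home directory on HDFS, which apparently doesn't exist for root here, so listing the filesystem root directly should at least show whether /tmp is there:

bin/hadoop fs -ls /      # list the HDFS root instead of the (missing) home directory
bin/hadoop fs -ls /tmp   # check whether /tmp exists and what its permissions are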

I did see that under /mnt there is an ephemeral-hdfs folder, which appears to be the HADOOP_HOME folder, but there is no tmp folder. Should I create one?
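
In case it helps anyone else who lands here, my best guess at a fix (assuming the ephemeral-hdfs daemons are actually running and bin/hadoop points at them) is to create Hive's scratch dir on HDFS by hand and open up its permissions, something like:

bin/hadoop fs -mkdir /tmp/hive       # create the root scratch dir on HDFS
bin/hadoop fs -chmod 777 /tmp/hive   # make it writable; a tighter 733 may also be enough

I haven't verified this on the spark-ec2 layout, so treat it as a sketch rather than the official fix.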

I'm a total noob at everything other than Spark itself; I just need the sqlContext so I can begin work.
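
Also, if I understand the 1.6 shell correctly, the sqlContext it tries to build is a HiveContext, which is exactly what needs /tmp/hive. As a stopgap, I'm assuming a plain SQLContext can be created by hand in spark-shell once sc comes up:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)  // plain SQLContext, no Hive metastore involved
import sqlContext.implicits._                             // toDF() and friends
import sqlContext.sql                                     // sql("...") queries

That would at least unblock DataFrame work against Cassandra and S3 while the scratch-dir permissions get sorted out.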

Any feedback is appreciated.




> error: not found: value sqlContext when starting Spark 1.6.1
> ------------------------------------------------------------
>
>                 Key: SPARK-15221
>                 URL: https://issues.apache.org/jira/browse/SPARK-15221
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1
>         Environment: Ubuntu 14.0.4, 8 GB RAM, 1 Processor
>            Reporter: Vijay Parmar
>            Priority: Blocker
>              Labels: build, newbie
>
> When I start Spark (version 1.6.1), at the very end I am getting the following error message:
> <console>:16: error: not found: value sqlContext
>          import sqlContext.implicits._
>                 ^
> <console>:16: error: not found: value sqlContext
>          import sqlContext.sql
> I have gone through some content on the web about editing the ~/.bashrc file and including "SPARK_LOCAL_IP=127.0.0.1" among the SPARK variables.
> I also tried editing the /etc/hosts file:
>  $ sudo vi /etc/hosts
>  ...
>  127.0.0.1  <HOSTNAME>
>  ...
> but the issue still persists. Is it an issue with the build or something else?


