Posted to issues@spark.apache.org by "Eliano Marques (JIRA)" <ji...@apache.org> on 2016/06/14 18:12:27 UTC

[jira] [Commented] (SPARK-15949) Spark 2.0.

    [ https://issues.apache.org/jira/browse/SPARK-15949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330060#comment-15330060 ] 

Eliano Marques commented on SPARK-15949:
----------------------------------------

Sean, thanks for your reply; I will read your link carefully. Please note that before I filed this issue I investigated several possible causes of this behaviour, and it is not a case of running as the wrong user.

Both APIs try to start the job in an incorrect HDFS location and fail accordingly. I can provide the details of the error, but since the behaviour in the spark-shell is different, I thought this was something you would want to check. As it stands, the Hive context is not really working.

> Spark 2.0.
> ----------
>
>                 Key: SPARK-15949
>                 URL: https://issues.apache.org/jira/browse/SPARK-15949
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SparkR
>    Affects Versions: 2.0.0
>         Environment: Hadoop 2.6 with cloudera
>            Reporter: Eliano Marques
>
> When using pyspark or sparkR to connect to Hive, a permissions error comes back from HDFS. At first this looked like a permissions problem, but we realised it might be an issue in the APIs themselves, since the problem does not show up with Scala.
> Pyspark:
> spark and sc are available and you can see the new SparkSession.
> When you do:
> sqlContext.sql("show databases").show()
> an error shows due to permissions in "hdfs://nameservice1/home/username/etc"/
> The same behaviour occurs in SparkR with:
> databases = sql(sqlContext, "show databases")
> In Scala (spark-shell) the operation sqlContext.sql("show databases") correctly goes to "hdfs://nameservice1/user/username", which is where all HDFS files for a specific user should be.
> Let me know if you need more details. 
> Thanks
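A note on the two paths quoted above: HDFS user home directories conventionally live under /user/<name>, unlike the /home/<name> convention of a local Linux filesystem, which is why the spark-shell path is the correct one. A minimal sketch of that convention (the helper function and the default nameservice are illustrative, not part of any Spark API):

```python
# Sketch only: HDFS keeps per-user home directories under /user/<name>,
# not /home/<name> as a local Linux filesystem would. The helper name
# and the default nameservice are assumptions for illustration.
def hdfs_user_dir(user: str, nameservice: str = "nameservice1") -> str:
    """Return the conventional HDFS home directory for a user."""
    return f"hdfs://{nameservice}/user/{user}"

print(hdfs_user_dir("username"))  # hdfs://nameservice1/user/username
```

Under this convention the spark-shell result ("hdfs://nameservice1/user/username") matches, while the path the Python and R sessions attempt ("hdfs://nameservice1/home/username/...") does not exist, producing the permissions error reported above.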



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org