Posted to issues@spark.apache.org by "Cheolsoo Park (JIRA)" <ji...@apache.org> on 2015/07/23 18:28:04 UTC
[jira] [Comment Edited] (SPARK-8105)
sqlContext.table("databaseName.tableName") broke with SPARK-6908
[ https://issues.apache.org/jira/browse/SPARK-8105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639083#comment-14639083 ]
Cheolsoo Park edited comment on SPARK-8105 at 7/23/15 4:27 PM:
---------------------------------------------------------------
[~yhuai], what's the plan for this? SPARK-8107 seems to suggest adding a new function to SqlContext, {{table("dbName", "tblName")}}, but the patch wasn't committed even though the jira is closed.
Why can't we simply support {{table("dbName.tblName")}} in HiveContext? I believe it is backward compatible. Any concerns with this approach?
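For illustration, here is a minimal sketch of the two API shapes under discussion and the current workaround. This is hypothetical usage, not committed API: it assumes a {{HiveContext}} named {{sqlContext}} and an existing Hive table {{dbName.tblName}}; the two-argument {{table}} overload is only the SPARK-8107 proposal.

```scala
// SPARK-8107's proposed shape: database name as a separate argument
// (hypothetical; the patch was not committed).
val a = sqlContext.table("dbName", "tblName")

// Backward-compatible shape that worked before SPARK-6908:
// a single dot-qualified name. Throws NoSuchTableException in 1.4.0.
val b = sqlContext.table("dbName.tblName")

// Current workaround: route the qualified name through the SQL parser.
val c = sqlContext.sql("SELECT * FROM dbName.tblName")
```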
> sqlContext.table("databaseName.tableName") broke with SPARK-6908
> ----------------------------------------------------------------
>
> Key: SPARK-8105
> URL: https://issues.apache.org/jira/browse/SPARK-8105
> Project: Spark
> Issue Type: Sub-task
> Components: SQL
> Affects Versions: 1.4.0
> Environment: Spark with Hive
> Reporter: Doug Balog
> Priority: Critical
>
> Since the introduction of DataFrames in Spark 1.3.0, and prior to SPARK-6908 landing in master, a user could get a DataFrame for a Hive table using `sqlContext.table("databaseName.tableName")`.
> Since SPARK-6908, the user instead receives a NoSuchTableException.
> This amounts to a change in the non-experimental sqlContext.table() API and will require user code to be modified to work properly with 1.4.0.
> The only viable workaround I could find is
> `sqlContext.sql("select * from databaseName.tableName")`
> which seems like a hack.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)