Posted to issues@spark.apache.org by "Cheng Lian (JIRA)" <ji...@apache.org> on 2015/07/27 11:21:05 UTC
[jira] [Updated] (SPARK-8105)
sqlContext.table("databaseName.tableName") broke with SPARK-6908
[ https://issues.apache.org/jira/browse/SPARK-8105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheng Lian updated SPARK-8105:
------------------------------
Fix Version/s: 1.5.0
> sqlContext.table("databaseName.tableName") broke with SPARK-6908
> ----------------------------------------------------------------
>
> Key: SPARK-8105
> URL: https://issues.apache.org/jira/browse/SPARK-8105
> Project: Spark
> Issue Type: Sub-task
> Components: SQL
> Affects Versions: 1.4.0
> Environment: Spark with Hive
> Reporter: Doug Balog
> Priority: Critical
> Fix For: 1.5.0
>
>
> Since the introduction of DataFrames in Spark 1.3.0, and prior to SPARK-6908 landing in master, a user could get a DataFrame for a Hive table using `sqlContext.table("databaseName.tableName")`.
> Since SPARK-6908, the user now receives a NoSuchTableException.
> This amounts to a change in the non-experimental sqlContext.table() API and will require user code to be modified to work properly with 1.4.0.
> The only viable workaround I could find is
> `sqlContext.sql("select * from databaseName.tableName")`
> which seems like a hack.
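A minimal sketch of the regression and the reported workaround, assuming a running SparkContext `sc` and a Hive table `mydb.events` (both names are illustrative, not from the ticket):

```scala
import org.apache.spark.sql.hive.HiveContext

// HiveContext is required for database-qualified table access.
val sqlContext = new HiveContext(sc)

// Worked on 1.3.x; on 1.4.0 this throws NoSuchTableException,
// because the qualified name is no longer resolved by table():
// val df = sqlContext.table("mydb.events")

// Workaround reported in the ticket: route the qualified name
// through the SQL parser instead of the table() API.
val df = sqlContext.sql("SELECT * FROM mydb.events")
df.show()
```

This is a sketch under the assumptions above, not code from the ticket; the fix tracked here restores `table("databaseName.tableName")` itself in 1.5.0.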
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)