Posted to issues@spark.apache.org by "kevin yu (JIRA)" <ji...@apache.org> on 2015/10/29 18:46:27 UTC

[jira] [Commented] (SPARK-6043) Error when trying to rename table with alter table after using INSERT OVERWRITE to populate the table

    [ https://issues.apache.org/jira/browse/SPARK-6043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14980899#comment-14980899 ] 

kevin yu commented on SPARK-6043:
---------------------------------

Hello Trystan: I tried your test case and it works on Spark 1.5; it seems the problem has been fixed. Can you verify and close this JIRA? Thanks.

> Error when trying to rename table with alter table after using INSERT OVERWRITE to populate the table
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-6043
>                 URL: https://issues.apache.org/jira/browse/SPARK-6043
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.2.1
>            Reporter: Trystan Leftwich
>            Priority: Minor
>
> If you populate a table using INSERT OVERWRITE and then try to rename the table using alter table it fails with:
> {noformat}
> Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. (state=,code=0)
> {noformat}
> Using the following SQL statement creates the error:
> {code:sql}
> CREATE TABLE `tmp_table` (salesamount_c1 DOUBLE);
> INSERT OVERWRITE table tmp_table SELECT
>    MIN(sales_customer.salesamount) salesamount_c1
> FROM
> (
>       SELECT
>          SUM(sales.salesamount) salesamount
>       FROM
>          internalsales sales
> ) sales_customer;
> ALTER TABLE tmp_table RENAME to not_tmp;
> {code}
> But if you change 'OVERWRITE' to 'INTO', the SQL statement works.
> This is happening on our CDH5.3 cluster with multiple workers; if we use the CDH5.3 Quickstart VM, the same SQL does not produce an error. Both cases were Spark 1.2.1 built for Hadoop 2.4+.
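> For reference, a minimal sketch of the workaround described above, reusing the same table definition from the reproduction; whether this is an acceptable substitute depends on the intended overwrite semantics, since INSERT INTO appends rather than replacing the table contents:
> {code:sql}
> CREATE TABLE `tmp_table` (salesamount_c1 DOUBLE);
> -- Populate with INSERT INTO instead of INSERT OVERWRITE;
> -- the subsequent rename then succeeds.
> INSERT INTO TABLE tmp_table SELECT
>    MIN(sales_customer.salesamount) salesamount_c1
> FROM
> (
>       SELECT
>          SUM(sales.salesamount) salesamount
>       FROM
>          internalsales sales
> ) sales_customer;
> ALTER TABLE tmp_table RENAME TO not_tmp;
> {code}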



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org