Posted to issues@spark.apache.org by "Herman van Hovell (JIRA)" <ji...@apache.org> on 2016/10/08 07:26:20 UTC

[jira] [Updated] (SPARK-17820) Spark sqlContext.sql() performs only first insert for HiveQL "FROM target INSERT INTO dest" command to insert into multiple target tables from same source

     [ https://issues.apache.org/jira/browse/SPARK-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Herman van Hovell updated SPARK-17820:
--------------------------------------
    Description: 
I am executing a HiveQL multi-insert statement in spark-shell, intending to insert records into two destination tables from the same source table with a single statement, but rows are inserted into only the first destination table. My statement:
{noformat}
scala>val departmentsData = sqlContext.sql("from sqoop_import.departments insert into sqoop_import.names_count1 select department_name, count(1) where department_id=2 group by department_name insert into sqoop_import.names_count2 select department_name, count(1) where department_id=4 group by department_name")
{noformat}
The same query inserts into both destination tables when run from the hive shell:
{noformat}
from sqoop_import.departments 
insert into sqoop_import.names_count1 
select department_name, count(1) 
where department_id=2 group by department_name 
insert into sqoop_import.names_count2 
select department_name, count(1) 
where department_id=4 group by department_name;
{noformat}
Both target table definitions are:
{noformat}
hive>use sqoop_import;
hive> create table names_count1 (department_name String, count Int);
hive> create table names_count2 (department_name String, count Int);
{noformat}
I am not sure why the second insert is being skipped.
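A possible workaround, sketched here under the assumption that single-insert statements are handled correctly (not verified on 1.6.0), is to issue each INSERT as its own sqlContext.sql() call so that every statement carries exactly one insert clause:
{noformat}
scala> // Workaround sketch: one INSERT per statement instead of one multi-insert
scala> sqlContext.sql("insert into table sqoop_import.names_count1 select department_name, count(1) from sqoop_import.departments where department_id=2 group by department_name")
scala> sqlContext.sql("insert into table sqoop_import.names_count2 select department_name, count(1) from sqoop_import.departments where department_id=4 group by department_name")
{noformat}
This loses the single-scan benefit of the multi-insert form, since the source table is read once per statement.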



> Spark sqlContext.sql() performs only first insert for HiveQL "FROM target INSERT INTO dest" command to insert into multiple target tables from same source
> ----------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17820
>                 URL: https://issues.apache.org/jira/browse/SPARK-17820
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.0
>         Environment: Cloudera Quickstart VM 5.7
>            Reporter: Kiran Miryala
>
> I am executing a HiveQL multi-insert statement in spark-shell, intending to insert records into two destination tables from the same source table with a single statement, but rows are inserted into only the first destination table. My statement:
> {noformat}
> scala>val departmentsData = sqlContext.sql("from sqoop_import.departments insert into sqoop_import.names_count1 select department_name, count(1) where department_id=2 group by department_name insert into sqoop_import.names_count2 select department_name, count(1) where department_id=4 group by department_name")
> {noformat}
> The same query inserts into both destination tables when run from the hive shell:
> {noformat}
> from sqoop_import.departments 
> insert into sqoop_import.names_count1 
> select department_name, count(1) 
> where department_id=2 group by department_name 
> insert into sqoop_import.names_count2 
> select department_name, count(1) 
> where department_id=4 group by department_name;
> {noformat}
> Both target table definitions are:
> {noformat}
> hive>use sqoop_import;
> hive> create table names_count1 (department_name String, count Int);
> hive> create table names_count2 (department_name String, count Int);
> {noformat}
> I am not sure why the second insert is being skipped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org