Posted to issues@spark.apache.org by "Ayush Anubhava (JIRA)" <ji...@apache.org> on 2018/08/06 12:16:00 UTC

[jira] [Updated] (SPARK-25032) Create table is failing after dropping the database. It is not falling back to the default database

     [ https://issues.apache.org/jira/browse/SPARK-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Anubhava updated SPARK-25032:
-----------------------------------
    Description: 
*Launch spark-beeline for both the scenarios*

*Scenario 1*

create database cbo1;

use cbo1;

create table test2 ( a int, b string , c int) stored as parquet;

drop database cbo1 cascade;

create table test1 ( a int, b string , c int) stored as parquet;

{color:#ff0000}Output: An exception is thrown at this point.{color}

{color:#ff0000}Error: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'cbo1' not found; (state=,code=0){color}
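
A quick way to confirm the stale session state in the same beeline session (a minimal sketch; current_database() is a standard Spark SQL function, and the result described in the comment is an assumption inferred from the error above):

-- presumably still reports 'cbo1', even though that database was dropped
select current_database();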

*Scenario 2:*

create database cbo1;

use cbo1;

create table test2 ( a int, b string , c int) stored as parquet;

drop database cbo1 cascade;

create database cbo1;

create table test1 ( a int, b string , c int) stored as parquet;

{color:#ff0000}Output : The table is created in the database "*cbo1*" even though no USE statement was issued after re-creating it. It should have been created in the default database.{color}
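
To confirm where test1 actually landed, the listings below can be checked (a sketch; SHOW TABLES IN <db> is standard Spark SQL, and the placement noted in the comments reflects the output reported above):

-- test1 appears under the re-created cbo1 database
show tables in cbo1;

-- expected location according to this report: the default database
show tables in default;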

 

In a beeline session, after dropping the current database, the session does not fall back to the default database.
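
A possible workaround (an assumption, not verified here against all affected versions) is to switch back to the default database explicitly before creating the next table:

-- explicitly fall back to the default database
use default;

create table test1 (a int, b string, c int) stored as parquet;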

 

  was:
*Launch spark-beeline for both the scenarios*

*Scenario 1*

create database cbo1;

create table test2 ( a int, b string , c int) stored as parquet;

drop database cbo1 cascade;

create table test1 ( a int, b string , c int) stored as parquet;

{color:#FF0000}Output: An exception is thrown at this point.{color}

{color:#FF0000}Error: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'cbo1' not found; (state=,code=0){color}

*Scenario 2:*

create database cbo1;

create table test2 ( a int, b string , c int) stored as parquet;

drop database cbo1 cascade;

create database cbo1;

create table test1 ( a int, b string , c int) stored as parquet;

{color:#FF0000}Output : The table is created in the database "*cbo1*" even though no USE statement was issued after re-creating it. It should have been created in the default database.{color}

 

In a beeline session, after dropping the current database, the session does not fall back to the default database.

 


> Create table is failing after dropping the database. It is not falling back to the default database
> ----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-25032
>                 URL: https://issues.apache.org/jira/browse/SPARK-25032
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Shell
>    Affects Versions: 2.1.0, 2.3.0, 2.3.1
>         Environment: Spark 2.3.1 
> Hadoop 2.7.3
>  
>            Reporter: Ayush Anubhava
>            Priority: Minor
>
> *Launch spark-beeline for both the scenarios*
> *Scenario 1*
> create database cbo1;
> use cbo1;
> create table test2 ( a int, b string , c int) stored as parquet;
> drop database cbo1 cascade;
> create table test1 ( a int, b string , c int) stored as parquet;
> {color:#ff0000}Output: An exception is thrown at this point.{color}
> {color:#ff0000}Error: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'cbo1' not found; (state=,code=0){color}
> *Scenario 2:*
> create database cbo1;
> use cbo1;
> create table test2 ( a int, b string , c int) stored as parquet;
> drop database cbo1 cascade;
> create database cbo1;
> create table test1 ( a int, b string , c int) stored as parquet;
> {color:#ff0000}Output : The table is created in the database "*cbo1*" even though no USE statement was issued after re-creating it. It should have been created in the default database.{color}
>  
> In a beeline session, after dropping the current database, the session does not fall back to the default database.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org