Posted to issues@carbondata.apache.org by "Ajantha Bhat (Jira)" <ji...@apache.org> on 2020/03/18 07:42:00 UTC

[jira] [Created] (CARBONDATA-3744) Fix select query failure when warehouse directory is default (not configured)

Ajantha Bhat created CARBONDATA-3744:
----------------------------------------

             Summary: Fix select query failure when warehouse directory is default (not configured)
                 Key: CARBONDATA-3744
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3744
             Project: CarbonData
          Issue Type: Improvement
            Reporter: Ajantha Bhat
            Assignee: Ajantha Bhat


Problem:

Select query fails when the warehouse directory is default (not configured), with the call stack below.

0: jdbc:hive2://localhost:10000> create table ab(age int) stored as carbondata;
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (0.093 seconds)
0: jdbc:hive2://localhost:10000> select count(*) from ab;
Error: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'ab' not found in database 'tpch'; (state=,code=0)

caused by
java.io.FileNotFoundException: File hdfs://localhost:54311/home/root1/tools/spark-2.3.4-bin-hadoop2.7/spark-warehouse/tpch.db/ab/Metadata does not exist.


Cause: when spark.sql.warehouse.dir is not configured, the default warehouse location under *SPARK_HOME* on the local file system is used, but describe table shows the path with an *HDFS prefix* in cluster mode.

The reason is that we strip the local filesystem scheme from the stored path; if we keep the scheme, the issue does not occur.
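The effect of stripping the scheme can be illustrated with plain java.net.URI resolution (a sketch for illustration only, not actual CarbonData code; the hdfs:// authority is taken from the error above): a path stored without its scheme gets resolved against the cluster's default filesystem, while a path that keeps its file: scheme stays on the local file system.

```java
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        // Default filesystem in the cluster (from the FileNotFoundException above)
        URI defaultFs = URI.create("hdfs://localhost:54311/");

        // Path stored WITHOUT its scheme: resolving it against the default FS
        // silently relocates it onto HDFS, where the Metadata file does not exist
        URI stripped = defaultFs.resolve("/spark-warehouse/tpch.db/ab");

        // Path stored WITH its file: scheme intact: it stays on the local FS
        URI kept = URI.create("file:/spark-warehouse/tpch.db/ab");

        System.out.println(stripped); // hdfs://localhost:54311/spark-warehouse/tpch.db/ab
        System.out.println(kept);     // file:/spark-warehouse/tpch.db/ab
    }
}
```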


--
This message was sent by Atlassian Jira
(v8.3.4#803005)