Posted to issues@carbondata.apache.org by "cen yuhai (JIRA)" <ji...@apache.org> on 2017/07/30 12:46:02 UTC

[jira] [Assigned] (CARBONDATA-1343) Hive can't query data when the carbon table info is stored in hive metastore

     [ https://issues.apache.org/jira/browse/CARBONDATA-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

cen yuhai reassigned CARBONDATA-1343:
-------------------------------------

    Assignee: cen yuhai

> Hive can't query data when the carbon table info is stored in hive metastore
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1343
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1343
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: cen yuhai
>            Assignee: cen yuhai
>
> set spark.carbon.hive.schema.store=true in spark-defaults.conf
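> For reference, a minimal sketch of the corresponding spark-defaults.conf entry (the conf/spark-defaults.conf location is an assumption, not from the report):
> {code}
> # conf/spark-defaults.conf (typical location; adjust for the installation)
> spark.carbon.hive.schema.store=true
> {code}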
> spark-shell --jars carbonlib/carbondata_2.11-1.2.0-SNAPSHOT-shade-hadoop2.7.2.jar,carbonlib/carbondata-hive-1.2.0-SNAPSHOT.jar
> import org.apache.spark.sql.SparkSession 
> import org.apache.spark.sql.CarbonSession._ 
> val rootPath = "hdfs://mycluster/user/master/carbon" 
> val storeLocation = s"$rootPath/store" 
> val warehouse = s"$rootPath/warehouse" 
> val metastoredb = s"$rootPath/metastore_db" 
> val carbon =SparkSession.builder().enableHiveSupport().getOrCreateCarbonSession(storeLocation, metastoredb) 
> carbon.sql("create table temp.hive_carbon(id short, name string, scale decimal, country string, salary double) STORED BY 'carbondata'") 
> carbon.sql("LOAD DATA INPATH 'hdfs://mycluster/user/master/sample.csv&#39; INTO TABLE temp.hive_carbon") 
> Then start the Hive CLI.
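> The report does not show how the Hive session itself was configured; presumably the carbon jars passed to spark-shell above also need to be visible to Hive, for example (paths are hypothetical):
> {code}
> -- hypothetical: register the carbon jars in the Hive session first
> ADD JAR /path/to/carbonlib/carbondata_2.11-1.2.0-SNAPSHOT-shade-hadoop2.7.2.jar;
> ADD JAR /path/to/carbonlib/carbondata-hive-1.2.0-SNAPSHOT.jar;
> {code}
> The settings and query actually reported were: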
> {code}
> set hive.mapred.supports.subdirectories=true;
> set mapreduce.input.fileinputformat.input.dir.recursive=true;
> select * from temp.hive_carbon;
> {code}
> The select then fails with the following exception:
> {code}
> 17/07/30 19:33:07 ERROR [CliDriver(1097) -- 53ea0b98-bcf0-4b86-a167-58ce570df284 main]: Failed with exception java.io.IOException:java.io.IOException: File does not exist: hdfs://bipcluster/user/master/carbon/store/temp/hive_carbon/Metadata/schema
> java.io.IOException: java.io.IOException: File does not exist: hdfs://bipcluster/user/master/carbon/store/temp/yuhai_carbon/Metadata/schema
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:521)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
>         at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
>         at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2187)
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:252)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
>         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: File does not exist: hdfs://bipcluster/user/master/carbon/store/temp/hive_carbon/Metadata/schema
>         at org.apache.carbondata.hadoop.util.SchemaReader.readCarbonTableFromStore(SchemaReader.java:70)
>         at org.apache.carbondata.hadoop.CarbonInputFormat.populateCarbonTable(CarbonInputFormat.java:147)
>         at org.apache.carbondata.hadoop.CarbonInputFormat.getCarbonTable(CarbonInputFormat.java:124)
>         at org.apache.carbondata.hadoop.CarbonInputFormat.getAbsoluteTableIdentifier(CarbonInputFormat.java:221)
>         at org.apache.carbondata.hadoop.CarbonInputFormat.getSplits(CarbonInputFormat.java:234)
>         at org.apache.carbondata.hive.MapredCarbonInputFormat.getSplits(MapredCarbonInputFormat.java:51)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:372)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:304)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:459)
>         ... 15 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)