Posted to issues@carbondata.apache.org by "anubhav tarar (JIRA)" <ji...@apache.org> on 2017/07/12 06:15:00 UTC

[jira] [Assigned] (CARBONDATA-1031) spark-sql can't read the carbon table

     [ https://issues.apache.org/jira/browse/CARBONDATA-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anubhav tarar reassigned CARBONDATA-1031:
-----------------------------------------

    Assignee: anubhav tarar

> spark-sql can't read the carbon table
> -------------------------------------
>
>                 Key: CARBONDATA-1031
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1031
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: cen yuhai
>            Assignee: anubhav tarar
>
> I created a carbon table from spark-shell.
> Then I used the command "spark-sql --jars carbon*.jar" to start the spark-sql CLI.
> The first time I execute "select * from temp.test_schema", Spark throws an exception; after I execute another command, it works fine.
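> For reference, a minimal sketch of the spark-shell step, assuming the CarbonSession API from CarbonData 1.1.0 with Spark 2.1. The column names and store path are taken from the relation in the stack trace below; the column types, app name, and jar name are assumptions:
> {code}
> // Launched via: spark-shell --jars carbondata_*.jar   (jar name is a placeholder)
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.CarbonSession._
>
> // Build a CarbonSession pointing at the store path seen in the stack trace.
> val carbon = SparkSession.builder()
>   .appName("CarbonTableSetup")  // hypothetical app name
>   .getOrCreateCarbonSession("hdfs:///user/hadoop/carbon/store")
>
> // Create the table that spark-sql later fails to plan against.
> // Column names match Relation[id,name,scale,country,salary]; types are assumed.
> carbon.sql(
>   """CREATE TABLE temp.test_schema (
>     |  id INT, name STRING, scale DOUBLE, country STRING, salary DOUBLE
>     |) STORED BY 'carbondata'""".stripMargin)
> {code}
> The subsequent failure in the spark-sql CLI looks like this: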
> {code}
> 17/05/06 21:43:12 ERROR [org.apache.spark.sql.hive.thriftserver.SparkSQLDriver(91) -- main]: Failed in [select * from temp.test_schema]
> java.lang.AssertionError: assertion failed: No plan for Relation[id#10,name#11,scale#12,country#13,salary#14] CarbonDatasourceHadoopRelation(org.apache.spark.sql.SparkSession@42d9ea3b,[Ljava.lang.String;@70a0e9c6,Map(path -> hdfs:////user/hadoop/carbon/store/temp/test_schema, serialization.format -> 1, dbname -> temp, tablepath -> hdfs:////user/hadoop/carbon/store/temp/test_schema, tablename -> test_schema),None,ArrayBuffer())
>         at scala.Predef$.assert(Predef.scala:170)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
>         at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>         at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>         at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>         at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
>         at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>         at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
>         at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
>         at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
>         at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
>         at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
>         at org.apache.spark.sql.execution.QueryExecution.hiveResultString(QueryExecution.scala:119)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:335)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:247)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:742)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:186)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:211)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}


