Posted to issues@spark.apache.org by "kiran (JIRA)" <ji...@apache.org> on 2016/11/07 18:53:58 UTC

[jira] [Updated] (SPARK-18315) Cannot save Solr or JDBC tables in Parquet format in Spark 2.0.1

     [ https://issues.apache.org/jira/browse/SPARK-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

kiran updated SPARK-18315:
--------------------------
    Summary: Cannot save Solr or JDBC tables in Parquet format in Spark 2.0.1  (was: Cannot save JDBC tables in Parquet format in Spark 2.0.1)

> Cannot save Solr or JDBC tables in Parquet format in Spark 2.0.1
> ----------------------------------------------------------------
>
>                 Key: SPARK-18315
>                 URL: https://issues.apache.org/jira/browse/SPARK-18315
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.1
>            Reporter: kiran
>
> After upgrading the [spark-solr|https://github.com/LucidWorks/spark-solr] library to Spark 2.0.1, creating a table in Parquet format from a Solr table fails. These statements used to run successfully on 1.6.x:
>  
> {code}
> 1: jdbc:hive2://localhost:10000> CREATE TABLE test2 USING solr OPTIONS (zkhost "localhost:9987", collection "test", fields "id" );
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.487 seconds)
> 1: jdbc:hive2://localhost:10000> show tables;
> +------------+--------------+--+
> | tableName  | isTemporary  |
> +------------+--------------+--+
> | test       | false        |
> | test2      | false        |
> +------------+--------------+--+
> 2 rows selected (0.036 seconds)
> 1: jdbc:hive2://localhost:10000> CREATE TABLE test_stored STORED AS PARQUET LOCATION  '/Users/kiran/spark/test.parquet' AS SELECT * FROM test2;
> Error: java.lang.AssertionError: assertion failed: No plan for InsertIntoTable Relation[id#61] parquet, true, false
> +- Relation[id#60] com.lucidworks.spark.SolrRelation@69cf7701 (state=,code=0)
> {code}
> Full stack trace:
> {code}
> 2016-11-02 22:41:50,651 [pool-25-thread-11] ERROR SparkExecuteStatementOperation  - Error executing query, currentState RUNNING, 
> java.lang.AssertionError: assertion failed: No plan for InsertIntoTable Relation[id#40] parquet, true, false
> +- Relation[id#2] com.lucidworks.spark.SolrRelation@15f428ea
>         at scala.Predef$.assert(Predef.scala:170)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:61)
>         at org.apache.spark.sql.execution.SparkPlanner.plan(SparkPlanner.scala:47)
>         at org.apache.spark.sql.execution.SparkPlanner$$anonfun$plan$1$$anonfun$apply$1.applyOrElse(SparkPlanner.scala:51)
>         at org.apache.spark.sql.execution.SparkPlanner$$anonfun$plan$1$$anonfun$apply$1.applyOrElse(SparkPlanner.scala:48)
>         at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
>         at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
>         at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:300)
>         at org.apache.spark.sql.execution.SparkPlanner$$anonfun$plan$1.apply(SparkPlanner.scala:48)
>         at org.apache.spark.sql.execution.SparkPlanner$$anonfun$plan$1.apply(SparkPlanner.scala:48)
>         at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>         at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:78)
>         at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:76)
>         at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:83)
>         at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:83)
>         at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
>         at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
>         at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:93)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>         at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
>         at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
>         at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
>         at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
>         at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
>         at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
>         at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
>         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
>         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:222)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> 2016-11-02 22:41:50,652 [pool-25-thread-11] ERROR SparkExecuteStatementOperation  - Error running hive query: 
> org.apache.hive.service.cli.HiveSQLException: java.lang.AssertionError: assertion failed: No plan for InsertIntoTable Relation[id#40] parquet, true, false
> +- Relation[id#2] com.lucidworks.spark.SolrRelation@15f428ea
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:260)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:166)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:176)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> 2016-11-02 22:42:40,859 [SIGTERM handler] ERROR HiveThriftServer2  - RECEIVED SIGNAL TERM
> {code}
> The same failure occurs with the JDBC data source:
> {code}
> 1: jdbc:hive2://localhost:10000> CREATE TABLE test USING jdbc OPTIONS ("url" "jdbc:mysql://localhost/test", "driver" "com.mysql.jdbc.Driver", "dbtable" "stats");
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (3.356 seconds)
> 1: jdbc:hive2://localhost:10000> show tables;
> +------------+--------------+--+
> | tableName  | isTemporary  |
> +------------+--------------+--+
> | test       | false        |
> +------------+--------------+--+
> 1 row selected (0.403 seconds)
> 1: jdbc:hive2://localhost:10000> CREATE TABLE test_stored STORED AS PARQUET LOCATION  '/Users/kiran/spark/test5.parquet' AS SELECT * FROM test;
> Error: java.lang.AssertionError: assertion failed: No plan for InsertIntoTable Relation[id#20,stat_repository_type#21,stat_repository_id#22,stat_holder_type#23,stat_holder_id#24,stat_coverage_type#25,stat_coverage_id#26,stat_membership_type#27,stat_membership_id#28,context#29] parquet, true, false
> +- Relation[id#10,stat_repository_type#11,stat_repository_id#12,stat_holder_type#13,stat_holder_id#14,stat_coverage_type#15,stat_coverage_id#16,stat_membership_type#17,stat_membership_id#18,context#19] JDBCRelation(stats) (state=,code=0)
> {code}
> Like JDBCRelation, [SolrRelation|https://github.com/lucidworks/spark-solr/blob/master/src/main/scala/com/lucidworks/spark/SolrRelation.scala] extends both BaseRelation and InsertableRelation.
> I am wondering whether there is something additional that classes extending BaseRelation and InsertableRelation need to implement in 2.0; a rough sketch of the two interfaces follows.
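> For reference, here is a minimal sketch of a relation mixing in those traits (the class name and bodies are hypothetical, only to illustrate the org.apache.spark.sql.sources API as of 2.0):
> {code}
> import org.apache.spark.rdd.RDD
> import org.apache.spark.sql.{DataFrame, Row, SQLContext}
> import org.apache.spark.sql.sources.{BaseRelation, InsertableRelation, TableScan}
> import org.apache.spark.sql.types.{StringType, StructField, StructType}
> 
> class ExampleRelation(override val sqlContext: SQLContext)
>     extends BaseRelation with TableScan with InsertableRelation {
> 
>   // Schema exposed to the planner
>   override def schema: StructType =
>     StructType(StructField("id", StringType, nullable = true) :: Nil)
> 
>   // TableScan supplies the rows when the relation is read
>   override def buildScan(): RDD[Row] =
>     sqlContext.sparkContext.emptyRDD[Row]
> 
>   // InsertableRelation is the hook that inserts are expected to dispatch to
>   override def insert(data: DataFrame, overwrite: Boolean): Unit = {
>     // push `data` down to the external store (Solr, JDBC, ...)
>   }
> }
> {code}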
> Is this a known issue in Spark 2.0.x? (I haven't seen any JIRA mentions.) Are there any temporary workarounds until this is fixed?
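> One possible interim workaround (an untested sketch, reusing the option values from the statements above) might be to bypass the Thrift server CTAS path and write Parquet through the DataFrame API directly, e.g. in spark-shell:
> {code}
> // `spark` is the SparkSession provided by spark-shell
> val df = spark.read.format("solr")
>   .option("zkhost", "localhost:9987")
>   .option("collection", "test")
>   .option("fields", "id")
>   .load()
> 
> // Write the data out as Parquet, then point a table at the location
> df.write.parquet("/Users/kiran/spark/test.parquet")
> {code}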


