Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/10/29 02:46:27 UTC

[jira] [Commented] (SPARK-10890) "Column count does not match; SQL statement:" error in JDBCWriteSuite

    [ https://issues.apache.org/jira/browse/SPARK-10890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14979642#comment-14979642 ] 

Apache Spark commented on SPARK-10890:
--------------------------------------

User 'ckadner' has created a pull request for this issue:
https://github.com/apache/spark/pull/9345

> "Column count does not match; SQL statement:" error in JDBCWriteSuite
> ---------------------------------------------------------------------
>
>                 Key: SPARK-10890
>                 URL: https://issues.apache.org/jira/browse/SPARK-10890
>             Project: Spark
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.5.0
>            Reporter: Rick Hillegas
>
> I get the following error when I run this test:
> mvn -Dhadoop.version=2.4.0 -DwildcardSuites=org.apache.spark.sql.jdbc.JDBCWriteSuite test
> {noformat}
> JDBCWriteSuite:
> 13:22:15.603 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 13:22:16.506 WARN org.apache.spark.metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
> - Basic CREATE
> - CREATE with overwrite
> - CREATE then INSERT to append
> - CREATE then INSERT to truncate
> 13:22:19.312 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 23.0 (TID 31)
> org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
> INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
> 	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
> 	at org.h2.message.DbException.get(DbException.java:179)
> 	at org.h2.message.DbException.get(DbException.java:155)
> 	at org.h2.message.DbException.get(DbException.java:144)
> 	at org.h2.command.dml.Insert.prepare(Insert.java:265)
> 	at org.h2.command.Parser.prepareCommand(Parser.java:247)
> 	at org.h2.engine.Session.prepareLocal(Session.java:446)
> 	at org.h2.engine.Session.prepareCommand(Session.java:388)
> 	at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
> 	at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
> 	at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:88)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> 13:22:19.312 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 23.0 (TID 32)
> org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
> INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
> 	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
> 	at org.h2.message.DbException.get(DbException.java:179)
> 	at org.h2.message.DbException.get(DbException.java:155)
> 	at org.h2.message.DbException.get(DbException.java:144)
> 	at org.h2.command.dml.Insert.prepare(Insert.java:265)
> 	at org.h2.command.Parser.prepareCommand(Parser.java:247)
> 	at org.h2.engine.Session.prepareLocal(Session.java:446)
> 	at org.h2.engine.Session.prepareCommand(Session.java:388)
> 	at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
> 	at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
> 	at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:88)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> 13:22:19.325 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 23.0 (TID 32, localhost): org.h2.jdbc.JdbcSQLException: Column count does not match; SQL statement:
> INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?) [21002-183]
> 	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
> 	at org.h2.message.DbException.get(DbException.java:179)
> 	at org.h2.message.DbException.get(DbException.java:155)
> 	at org.h2.message.DbException.get(DbException.java:144)
> 	at org.h2.command.dml.Insert.prepare(Insert.java:265)
> 	at org.h2.command.Parser.prepareCommand(Parser.java:247)
> 	at org.h2.engine.Session.prepareLocal(Session.java:446)
> 	at org.h2.engine.Session.prepareCommand(Session.java:388)
> 	at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1189)
> 	at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:72)
> 	at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:277)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:72)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:100)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:229)
> 	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:228)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$32.apply(RDD.scala:892)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1856)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:88)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> 13:22:19.327 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 23.0 failed 1 times; aborting job
> - Incompatible INSERT to append
> - INSERT to JDBC Datasource
> - INSERT to JDBC Datasource with overwrite
> Run completed in 6 seconds, 390 milliseconds.
> Total number of tests run: 7
> Suites: completed 2, aborted 0
> Tests: succeeded 7, failed 0, canceled 0, ignored 0, pending 0
> All tests passed.
> {noformat}
> The suite completes successfully (all 7 tests pass), but the "Incompatible INSERT to append" test prints an alarming stack trace for an exception it appears to provoke deliberately. I think it would be better if that stack trace were swallowed.
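> For context, here is a minimal, self-contained sketch (not the actual suite code; table and column names are illustrative) of the scenario that produces this trace: a JDBC table is created from a 2-column DataFrame and a 3-column DataFrame is then appended, so the prepared statement {{INSERT INTO TEST.INCOMPATIBLETEST VALUES (?, ?, ?)}} has more parameters than the table has columns. The driver-side SparkException can be caught, but by then the executor has already logged the trace.
> {code:scala}
> import java.util.Properties
>
> import org.apache.spark.{SparkConf, SparkContext, SparkException}
> import org.apache.spark.sql.{SQLContext, SaveMode}
>
> // Requires the H2 driver on the classpath, as the test suite already does.
> val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("incompatible-insert-sketch"))
> val sqlContext = new SQLContext(sc)
> import sqlContext.implicits._
>
> val url = "jdbc:h2:mem:testdb;user=testUser;password=testPass"
> val props = new Properties()
>
> // Create the target table from a 2-column DataFrame.
> Seq(("one", 1), ("two", 2)).toDF("NAME", "ID")
>   .write.jdbc(url, "TEST.INCOMPATIBLETEST", props)
>
> // Appending a 3-column DataFrame prepares a 3-parameter INSERT against the
> // 2-column table, so H2 rejects it with "Column count does not match".
> val bad = Seq(("three", 3, 3L)).toDF("NAME", "ID", "SEQ")
> try {
>   bad.write.mode(SaveMode.Append).jdbc(url, "TEST.INCOMPATIBLETEST", props)
> } catch {
>   case e: SparkException =>
>     // Expected: the job aborts, but the executor has already logged the
>     // full H2 stack trace shown above at ERROR level.
>     println("Caught expected failure: " + e.getMessage)
> }
> {code}
> Since the exception is deliberately provoked, the test could keep asserting it (e.g. with ScalaTest's {{intercept[SparkException]}}) while quieting the executor's ERROR logging around that block, so the expected trace does not look like a real failure.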



