Posted to dev@druid.apache.org by mike <mi...@shujike.com> on 2018/12/17 09:56:48 UTC
Bug report!
Hello, could anybody give me a hand?
Recently I upgraded my Druid server from 0.9.2 to the latest 0.12.3, and I ran into trouble running my previous application against the new Druid. It worked fine on 0.9.2. After checking the logs, I found that for double fields in my schema, Druid returned 0 instead of 0.0 in the query result, which made my JSON parser unhappy and threw an exception like this:
18/12/11 19:36:37 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 11.0 failed 1 times, most recent failure: Lost task 0.0 in stage 11.0 (TID 11, localhost): java.lang.ClassCastException: scala.math.BigInt cannot be cast to java.lang.Double
at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:114)
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getDouble(rows.scala:44)
at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getDouble(rows.scala:221)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
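The underlying mismatch is that typical JSON parsers infer the runtime type from the lexical form of the number token, so 0 deserializes as an integer while 0.0 deserializes as a double. A minimal Python sketch (not the poster's actual Spark/Hive pipeline) of the behavior:

```python
import json

# JSON parsers commonly derive the runtime type from the token's form:
# "0" (no decimal point) becomes an integer, "0.0" becomes a float.
int_value = json.loads('{"metric": 0}')["metric"]
float_value = json.loads('{"metric": 0.0}')["metric"]

print(type(int_value).__name__)    # int
print(type(float_value).__name__)  # float

# A strict consumer that requires a double for "metric" will reject the
# integer-typed value, analogous to the ClassCastException in the trace
# above (scala.math.BigInt cannot be cast to java.lang.Double).
```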
Could anybody respond, fix this bug, or help me get out of this trouble? Thanks a million!
Michael
Re: Bug report!
Posted by Gian Merlino <gi...@apache.org>.
Hey Mike,
I would look to Hive to fix this: it should be able to handle either a 0
or a 0.0 in the response equally well. I suppose I wouldn't consider it to
be a bug in Druid.
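In other words, the fix belongs on the consuming side: when the schema says a field is a double, coerce any numeric JSON value to a double regardless of whether it was serialized as 0 or 0.0. A minimal sketch of that idea, with a hypothetical helper name (coerce_doubles is not a real Druid or Hive API):

```python
import json

def coerce_doubles(row, double_fields):
    """Coerce integer-typed JSON numbers to float for fields the schema
    declares as doubles, so 0 and 0.0 are handled identically."""
    return {
        key: float(value) if key in double_fields and isinstance(value, (int, float)) else value
        for key, value in row.items()
    }

# The integer-typed "metric" value becomes a float before downstream use.
row = json.loads('{"metric": 0, "name": "a"}')
fixed = coerce_doubles(row, {"metric"})
print(type(fixed["metric"]).__name__)  # float
```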
On Mon, Dec 17, 2018 at 10:15 AM mike <mi...@shujike.com> wrote: