Posted to issues@spark.apache.org by "Wenchen Fan (JIRA)" <ji...@apache.org> on 2016/11/24 05:26:58 UTC

[jira] [Commented] (SPARK-18468) Flaky test: org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-9757 Persist Parquet relation with decimal column

    [ https://issues.apache.org/jira/browse/SPARK-18468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692245#comment-15692245 ] 

Wenchen Fan commented on SPARK-18468:
-------------------------------------

Looks like the executor was lost for some unknown reason, which is not related to this specific test. BTW, this failure doesn't show up frequently (I only saw it once); shall we close this ticket?
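For reference, the timeout in the stack trace below is governed by spark.rpc.askTimeout (default 120s, per the log). If the environment is simply slow rather than the executor being genuinely lost, one option is to raise that setting. A minimal sketch, assuming a Spark 2.x application; the 300s value and app name are illustrative only, not what the test suite uses:

{code}
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Hypothetical: raise the RPC ask timeout above the 120s default seen in the log.
val conf = new SparkConf()
  .set("spark.rpc.askTimeout", "300s")

val spark = SparkSession.builder()
  .config(conf)
  .appName("timeout-illustration")
  .getOrCreate()
{code}

The same key can also be passed on the command line via --conf spark.rpc.askTimeout=300s when submitting the job.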

> Flaky test: org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-9757 Persist Parquet relation with decimal column
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18468
>                 URL: https://issues.apache.org/jira/browse/SPARK-18468
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Yin Huai
>            Priority: Critical
>
> https://amplab.cs.berkeley.edu/jenkins/job/spark-branch-2.1-test-sbt-hadoop-2.4/71/testReport/junit/org.apache.spark.sql.hive/HiveSparkSubmitSuite/SPARK_9757_Persist_Parquet_relation_with_decimal_column/
> https://spark-tests.appspot.com/builds/spark-branch-2.1-test-sbt-hadoop-2.4/71
> It seems we failed to stop the driver:
> {code}
> 2016-11-15 18:36:47.76 - stderr> org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
> 2016-11-15 18:36:47.76 - stderr> 	at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
> 2016-11-15 18:36:47.76 - stderr> 	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
> 2016-11-15 18:36:47.76 - stderr> 	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.util.Try$.apply(Try.scala:192)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.util.Failure.recover(Try.scala:216)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> 2016-11-15 18:36:47.76 - stderr> 	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Promise$class.complete(Promise.scala:55)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
> 2016-11-15 18:36:47.76 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
> 2016-11-15 18:36:47.76 - stderr> 	at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
> 2016-11-15 18:36:47.76 - stderr> 	at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2016-11-15 18:36:47.76 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2016-11-15 18:36:47.76 - stderr> 	at java.lang.Thread.run(Thread.java:745)
> 2016-11-15 18:36:47.761 - stderr> Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
> 2016-11-15 18:36:47.761 - stderr> 	... 8 more
> 2016-11-15 18:36:48.081 - stderr> 16/11/15 18:36:48 ERROR StandaloneSchedulerBackend: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
> 2016-11-15 18:36:48.081 - stderr> org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
> 2016-11-15 18:36:48.081 - stderr> 	at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
> 2016-11-15 18:36:48.081 - stderr> 	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
> 2016-11-15 18:36:48.081 - stderr> 	at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
> 2016-11-15 18:36:48.081 - stderr> 	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> 2016-11-15 18:36:48.081 - stderr> 	at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
> 2016-11-15 18:36:48.081 - stderr> 	at scala.util.Try$.apply(Try.scala:192)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.util.Failure.recover(Try.scala:216)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> 2016-11-15 18:36:48.082 - stderr> 	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Promise$class.complete(Promise.scala:55)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
> 2016-11-15 18:36:48.082 - stderr> 	at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
> 2016-11-15 18:36:48.082 - stderr> 	at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
> 2016-11-15 18:36:48.082 - stderr> 	at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2016-11-15 18:36:48.082 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2016-11-15 18:36:48.082 - stderr> 	at java.lang.Thread.run(Thread.java:745)
> 2016-11-15 18:36:48.082 - stderr> Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
> 2016-11-15 18:36:48.082 - stderr> 	... 8 more
> {code}
> Please note that {{Caused by: java.lang.UnsupportedOperationException: Parquet does not support decimal. See HIVE-6384}} is an expected exception.


