Posted to commits@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2018/08/15 18:45:49 UTC

Build failed in Jenkins: beam_PostCommit_Java_Nexmark_Flink #266

See <https://builds.apache.org/job/beam_PostCommit_Java_Nexmark_Flink/266/display/redirect>

------------------------------------------
[...truncated 3.64 MB...]
INFO: Stopping all currently running jobs of dispatcher akka://flink/user/dispatchera3744198-d484-448d-b77d-a9cc248851d1.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager suspend
INFO: Suspending the SlotManager.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.state.TaskExecutorLocalStateStoresManager shutdown
INFO: Shutting down TaskExecutorLocalStateStoresManager.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager unregisterTaskManager
INFO: Unregister TaskManager 8ee33709f2bcff82e0319b9d69ba4184 from the SlotManager.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.jobmaster.JobMaster dissolveResourceManagerConnection
INFO: Close ResourceManager connection cb7f096d252f3bc0552b27c67d904ffa: JobManager is shutting down..
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.jobmaster.slotpool.SlotPool suspend
INFO: Suspending SlotPool.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.rpc.akka.AkkaRpcActor onReceive
INFO: The rpc endpoint org.apache.flink.runtime.jobmaster.slotpool.SlotPool has not been started yet. Discarding message org.apache.flink.runtime.rpc.messages.LocalRpcInvocation until processing is started.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.jobmaster.slotpool.SlotPool postStop
INFO: Stopping SlotPool.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.io.disk.iomanager.IOManager shutdown
INFO: I/O manager removed spill file directory /tmp/flink-io-1546e96c-1d7a-4868-ac53-bd5e7d0f954b
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.io.network.NetworkEnvironment shutdown
INFO: Shutting down the network environment and its components.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.taskexecutor.TaskExecutor$JobLeaderListenerImpl jobManagerLostLeadership
INFO: JobManager for job c675c20cebf8d384f9ceaf9eb39cd027 with leader id 99f204e2c4e382301a94940dc5674eb9 lost leadership.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.taskexecutor.JobLeaderService stop
INFO: Stop job leader service.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.taskexecutor.TaskExecutor postStop
INFO: Stopped TaskExecutor akka://flink/user/taskmanager_26.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.rest.handler.legacy.backpressure.StackTraceSampleCoordinator shutDown
INFO: Shutting down stack trace sample coordinator.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.dispatcher.Dispatcher lambda$postStop$0
INFO: Stopped dispatcher akka://flink/user/dispatchera3744198-d484-448d-b77d-a9cc248851d1.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.webmonitor.WebMonitorEndpoint lambda$shutDownInternal$4
INFO: Removing cache directory /tmp/flink-web-ui
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.rpc.akka.AkkaRpcService stopService
INFO: Stopping Akka RPC service.
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.blob.AbstractBlobCache close
INFO: Shutting down BLOB cache
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.blob.AbstractBlobCache close
INFO: Shutting down BLOB cache
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.blob.BlobServer close
INFO: Stopped BLOB server at 0.0.0.0:39251
Aug 15, 2018 6:45:44 PM org.apache.flink.runtime.rpc.akka.AkkaRpcService lambda$stopService$4
INFO: Stopped Akka RPC service.
Aug 15, 2018 6:45:44 PM org.apache.beam.runners.flink.FlinkRunner run
SEVERE: Pipeline execution failed
org.apache.flink.runtime.client.JobExecutionException: org.apache.beam.sdk.util.UserCodeException: java.io.IOException: Failed to close some writers
	at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:625)
	at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:234)
	at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
	at org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:173)
	at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:115)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
	at org.apache.beam.sdk.nexmark.Main.savePerfsToBigQuery(Main.java:250)
	at org.apache.beam.sdk.nexmark.Main.runAll(Main.java:162)
	at org.apache.beam.sdk.nexmark.Main.main(Main.java:477)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.io.IOException: Failed to close some writers
	at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
	at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
	at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:285)
	at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:87)
	at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:131)
	at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
	at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
	at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to close some writers
	at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:248)
	Suppressed: java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone
{
  "code" : 503,
  "errors" : [ {
    "domain" : "global",
    "message" : "Backend Error",
    "reason" : "backendError"
  } ],
  "message" : "Backend Error"
}
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(AbstractGoogleAsyncWriteChannel.java:432)
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.close(AbstractGoogleAsyncWriteChannel.java:287)
		at org.apache.beam.sdk.io.gcp.bigquery.TableRowWriter.close(TableRowWriter.java:81)
		at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:242)
		at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
		at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:285)
		at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:87)
		at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:131)
		at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
		at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
		at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
		at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
		at java.lang.Thread.run(Thread.java:748)
	Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone
{
  "code" : 503,
  "errors" : [ {
    "domain" : "global",
    "message" : "Backend Error",
    "reason" : "backendError"
  } ],
  "message" : "Backend Error"
}
		at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
		at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
		at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:358)
		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
		... 1 more
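
The root cause above is a transient Google Cloud backend error (the GoogleJsonResponseException with "reason": "backendError") raised while WriteBundlesToFiles closes its TableRowWriter, i.e. while flushing the temporary files behind the BigQuery export of the perf summary. In practice the runner simply retries the failed bundle, but the general pattern for surviving this class of error is a bounded retry with exponential backoff around the failing operation. The sketch below is purely illustrative Java: isTransientBackendError and closeWithRetry are hypothetical helpers, and nothing here reproduces Beam's actual BigQueryIO retry logic.

import java.io.IOException;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch of a bounded retry with exponential backoff around a
// writer close, the operation that failed above. Not taken from Beam; the real
// WriteBundlesToFiles/TableRowWriter code paths differ, and a failed upload
// channel may not be safely re-closable at all - runners instead retry the bundle.
public final class RetryingClose {

  // Assumption: the caller can distinguish transient backend errors
  // (HTTP 410/503 "backendError", as in the log) from permanent failures.
  static boolean isTransientBackendError(IOException e) {
    String msg = String.valueOf(e.getMessage());
    return msg.contains("backendError") || msg.contains("503") || msg.contains("410 Gone");
  }

  static void closeWithRetry(WritableByteChannel channel, int maxAttempts)
      throws IOException, InterruptedException {
    long backoffMillis = 500;
    for (int attempt = 1; ; attempt++) {
      try {
        channel.close();           // the call that threw in TableRowWriter.close above
        return;
      } catch (IOException e) {
        if (attempt >= maxAttempts || !isTransientBackendError(e)) {
          throw e;                 // give up: rethrow so the bundle fails, as seen in the log
        }
        Thread.sleep(backoffMillis);
        backoffMillis *= 2;        // exponential backoff before the next attempt
      }
    }
  }
}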


==========================================================================================
Run started 2018-08-15T18:44:06.247Z and ran for PT97.936S

Default configuration:
{"debug":true,"query":0,"sourceType":"DIRECT","sinkType":"DEVNULL","exportSummaryToBigQuery":false,"pubSubMode":"COMBINED","numEvents":100000,"numEventGenerators":100,"rateShape":"SINE","firstEventRate":10000,"nextEventRate":10000,"rateUnit":"PER_SECOND","ratePeriodSec":600,"preloadSeconds":0,"streamTimeout":240,"isRateLimited":false,"useWallclockEventTime":false,"avgPersonByteSize":200,"avgAuctionByteSize":500,"avgBidByteSize":100,"hotAuctionRatio":2,"hotSellersRatio":4,"hotBiddersRatio":4,"windowSizeSec":10,"windowPeriodSec":5,"watermarkHoldbackSec":0,"numInFlightAuctions":100,"numActivePeople":1000,"coderStrategy":"HAND","cpuDelayMs":0,"diskBusyBytes":0,"auctionSkip":123,"fanout":5,"maxAuctionsWaitingTime":600,"occasionalDelaySec":3,"probDelayedEvent":0.1,"maxLogEvents":100000,"usePubsubPublishTime":false,"outOfOrderGroupSize":1}

Configurations:
  Conf  Description
  0000  query:0; exportSummaryToBigQuery:true; streamTimeout:60
  0001  query:1; exportSummaryToBigQuery:true; streamTimeout:60
  0002  query:2; exportSummaryToBigQuery:true; streamTimeout:60
  0003  query:3; exportSummaryToBigQuery:true; streamTimeout:60
  0004  query:4; exportSummaryToBigQuery:true; numEvents:10000; streamTimeout:60
  0005  query:5; exportSummaryToBigQuery:true; streamTimeout:60
  0006  query:6; exportSummaryToBigQuery:true; numEvents:10000; streamTimeout:60
  0007  query:7; exportSummaryToBigQuery:true; streamTimeout:60
  0008  query:8; exportSummaryToBigQuery:true; streamTimeout:60
  0009  query:9; exportSummaryToBigQuery:true; numEvents:10000; streamTimeout:60
  0010  query:10; exportSummaryToBigQuery:true; streamTimeout:60
  0011  query:11; exportSummaryToBigQuery:true; streamTimeout:60
  0012  query:12; exportSummaryToBigQuery:true; streamTimeout:60
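
Each configuration line above overrides the default JSON: for example 0000 runs query 0 with exportSummaryToBigQuery:true and streamTimeout:60. As a rough sketch of how such key:value pairs map onto Beam pipeline options, the snippet below defines an illustrative options interface and parses the equivalent flags with PipelineOptionsFactory. SketchNexmarkOptions is hypothetical and is not the real org.apache.beam.sdk.nexmark.NexmarkOptions, though the field names and defaults mirror the configuration shown above.

import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Minimal, hypothetical sketch (not from this build): models a configuration
// line such as "query:0; exportSummaryToBigQuery:true; streamTimeout:60" as
// Beam pipeline options. Only the PipelineOptions machinery is standard Beam API.
public class NexmarkOptionsSketch {

  public interface SketchNexmarkOptions extends PipelineOptions {
    @Description("Nexmark query to run (0-12 in the table above)")
    @Default.Integer(0)
    Integer getQuery();
    void setQuery(Integer value);

    @Description("Seconds to wait for a streaming query before giving up")
    @Default.Integer(240)
    Integer getStreamTimeout();
    void setStreamTimeout(Integer value);

    @Description("Whether to export the perf summary to BigQuery (the step that failed here)")
    @Default.Boolean(false)
    Boolean getExportSummaryToBigQuery();
    void setExportSummaryToBigQuery(Boolean value);
  }

  public static void main(String[] args) {
    PipelineOptionsFactory.register(SketchNexmarkOptions.class);
    SketchNexmarkOptions options =
        PipelineOptionsFactory.fromArgs(
                "--query=0", "--streamTimeout=60", "--exportSummaryToBigQuery=true")
            .as(SketchNexmarkOptions.class);
    System.out.printf("query=%d streamTimeout=%d exportSummaryToBigQuery=%b%n",
        options.getQuery(), options.getStreamTimeout(), options.getExportSummaryToBigQuery());
  }
}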

Performance:
  Conf  Runtime(sec)    (Baseline)  Events(/sec)    (Baseline)       Results    (Baseline)
  0000           2.9                     34328.9                      100000              
  0001           1.2                     81367.0                       92000              
  0002           1.1                     88652.5                         351              
  0003          10.7                      9357.2                         580              
  0004           6.1                      1643.1                          40              
  0005           6.4                     15593.3                          12              
  0006           6.2                      1608.0                         103              
  0007           7.4                     13513.5                           1              
  0008           7.3                     13618.4                        6000              
  0009           6.7                      1493.7                         298              
  0010           7.8                     12868.4                           1              
  0011           5.9                     17024.2                        1919              
  0012           5.7                     17565.4                        1919              
==========================================================================================

Exception in thread "main" java.lang.RuntimeException: Pipeline execution failed
	at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:118)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
	at org.apache.beam.sdk.nexmark.Main.savePerfsToBigQuery(Main.java:250)
	at org.apache.beam.sdk.nexmark.Main.runAll(Main.java:162)
	at org.apache.beam.sdk.nexmark.Main.main(Main.java:477)
Caused by: org.apache.flink.runtime.client.JobExecutionException: org.apache.beam.sdk.util.UserCodeException: java.io.IOException: Failed to close some writers
	at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:625)
	at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:234)
	at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
	at org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:173)
	at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:115)
	... 5 more
Caused by: org.apache.beam.sdk.util.UserCodeException: java.io.IOException: Failed to close some writers
	at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
	at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
	at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:285)
	at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:87)
	at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:131)
	at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
	at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
	at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to close some writers
	at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:248)
	Suppressed: java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone
{
  "code" : 503,
  "errors" : [ {
    "domain" : "global",
    "message" : "Backend Error",
    "reason" : "backendError"
  } ],
  "message" : "Backend Error"
}
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(AbstractGoogleAsyncWriteChannel.java:432)
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.close(AbstractGoogleAsyncWriteChannel.java:287)
		at org.apache.beam.sdk.io.gcp.bigquery.TableRowWriter.close(TableRowWriter.java:81)
		at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles.finishBundle(WriteBundlesToFiles.java:242)
		at org.apache.beam.sdk.io.gcp.bigquery.WriteBundlesToFiles$DoFnInvoker.invokeFinishBundle(Unknown Source)
		at org.apache.beam.runners.core.SimpleDoFnRunner.finishBundle(SimpleDoFnRunner.java:285)
		at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:87)
		at org.apache.beam.runners.flink.translation.functions.FlinkDoFnFunction.mapPartition(FlinkDoFnFunction.java:131)
		at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
		at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:503)
		at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
		at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
		at java.lang.Thread.run(Thread.java:748)
	Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone
{
  "code" : 503,
  "errors" : [ {
    "domain" : "global",
    "message" : "Backend Error",
    "reason" : "backendError"
  } ],
  "message" : "Backend Error"
}
		at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
		at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
		at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
		at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
		at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:358)
		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
		... 1 more

> Task :beam-sdks-java-nexmark:run FAILED
:beam-sdks-java-nexmark:run (Thread[Task worker for ':',5,main]) completed. Took 1 mins 41.372 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':beam-sdks-java-nexmark:run'.
> Process 'command '/usr/local/asfpackages/java/jdk1.8.0_152/bin/java'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0.
See https://docs.gradle.org/4.8/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 57s
65 actionable tasks: 61 executed, 4 from cache

Publishing build scan...
https://gradle.com/s/dj7gosfwz4zrk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Jenkins build is back to normal : beam_PostCommit_Java_Nexmark_Flink #267

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/beam_PostCommit_Java_Nexmark_Flink/267/display/redirect?page=changes>